Degree of Data Normalization

Hi Friends,
Is data normalization the same as database normalization? Is this part of the work of a DBA?
How many forms or levels does data normalization have? Does it go up to the 5th or 6th normal form?
I have been hearing about data normalization and I have tried reading about it over and over, but I cannot really fully understand it :(
Thanks a lot
Ms K

Hi,
It can be assigned to several roles. If the team/project is small it is the DBA, but in a bigger one there can also be a Data Architect or a Functional Designer who does the job (initially). I have also seen a good senior programmer (SQL/PL/SQL) do the job, though experience is an advantage. With good thinking and a close look at the requirements, any of the roles mentioned above can do it. But finally the DBA (normally) has the last word, because he has to implement the design. Afterwards the DBA also has to produce the technical design of the database.
Herald ten Dam
http://htendam.wordpress.com

Similar Messages

  • Write 32 channels of thermistor data (temp in degrees vs. date/time) into an xls file for all channels continuously

    I am acquiring 32 channels of thermistor data (temperature in degrees vs. date/time) in a waveform plot using the Array to Cluster function.
    Now my problem is how to write this data into an xls file for all channels continuously.
    Please help me as soon as possible; I am new to LabVIEW.

    Hi Priyanka,
    Writing to an Excel file continuously is not a good idea. You can use the .CSV or TDMS file format and, once data acquisition is complete, convert it to an Excel file using the Report Generation Toolkit.
    With Regards
    Miraz
    Kudos is a better option to thank somebody on this forum

  • Degree Date not mapped into Peoplesoft HCM 9.2

    We have installed PeopleSoft HCM 9.2 and PeopleTools 8.53. We are using the PeopleSoft HCM 9.2 demo database in our test environment. We have integrated our resume parsing solution with PeopleSoft HCM 9.2. Our parser is capable of extracting the degree date from the resume content, but we are finding an issue with mapping "Degree Date" into the respective field.
    We are trying to map the "Degree Date" field in the Degree section of the PeopleSoft HCM system using HR XML mapping, as per the Recruiting_Solutions_9_2_Integrations_Setup.pdf documentation.
    XML LOCATION         : Candidate.Resume.StructuredXMLResume.EducationHistory.SchoolOrInstitution.Degree.DegreeDate
    DATA OBJECT LOCATION : CandidateItems..JPM_DATE_6
    RECORD LOCATION      : HRS_APP_ITEMS.JPM_DATE_6
    The Oracle Recruiting Solutions document says "Degree Date" is mapped to the HRS_APP_ITEMS.JPM_DATE_6 field in the PeopleSoft HCM 9.2 application (see page 65).
    We are finding that this mapping is not happening as expected.
    Does anyone know how to resolve the issue?

    As stated in the following doc, this issue has been solved in the PeopleTools 8.53.04 patch:
    E-SES: Problems Deploying Search Definition With PeopleTools 8.53 And Oracle SES 11.1.2.2.0 [ID 1558997.1]

  • How to normalize data in BW and in Bex Report

    Hello Experts,
    Could you please clarify the following for me?
    1. How do I normalize data in BW and in a Bex report?
    2. What are the best designs for data normalization, and where should it happen?
    3. What are the advantages and disadvantages of BW data normalization/denormalization?
    I need your guidance and will appreciate any help.
    Thanks in advance.
    Regards,
    Bhakta

    Hi,
    You can do this in the report. See here for details:
    http://help.sap.com/saphelp_nw04s/helpdata/en/1f/25883ce5bd7b1de10000000a114084/content.htm
    Hope this helps...

  • Difference between Consolidation, Harmonization and Central Master Data Management

    Hi Experts,
    Consolidation is used to identify duplicate records and merge them into one record.
    Harmonization is the process of pushing the newly cleansed data back out to your partner systems.
    Central master data management means you create records within MDM and then distribute (syndicate) this centrally created data to your remote systems.
    My question is: after Consolidation, both Harmonization and Central master data management appear to do the same thing, i.e. send the clean data to the other systems. What is the difference between these two? Please explain with an example or scenario.

    Hi Yugandhar,
    There are three basic scenarios that we follow in MDM:
    Data Consolidation
    Data Harmonization
    Central Master Data Management
    Consolidation:
    Consolidation (matching, normalization, cleansing) and storage of master data imported from client systems. It entails capabilities to identify identical and similar objects spread across the local systems and to build consolidated master data.
    Key Capabilities:
    1. Cleansing and de-duplication
    2. Data normalization including categorization and taxonomy management
    3. New interactive consolidation capabilities
    Data Harmonization :
    In the case of Harmonization we generally aim at reaching high-quality master data within heterogeneous system landscapes. The main focus is on ensuring high quality of master data within the connected systems and then distributing the master data.
    Key Capabilities:
    1. Automated synchronization of globally relevant master data information
    2. New Interactive distribution capabilities
    Central Master Data Management:
    In the case of CMDM, the client does not always want to go for the above two scenarios, but CMDM is always required as it helps us maintain the business data effectively and efficiently.
    Once MDM is put into the business scenario, all operations related to master records are done in MDM only, e.g. creating a record, deleting a record, distributing a record, etc.
    It is then the job of CMDM to centrally maintain the data and to check that no duplicate entries can be created.
    All the various systems attached to MDM then get updates from CMDM via the syndication (distribution) process.
    Key Capabilities:
    1. Central creation and maintenance of data
    2. Ongoing master data quality
    Hope this is helpful!
    Regards,
    Parul

  • Normalization and denormalization

    Hi,
    what is normalization and denormalization?
    How do I implement them during design (OLTP, OLAP)?
    I read the Oracle documents; they are very hard to understand.
    Could anyone show some examples, please?

    user3266490 wrote:
    Hi,
    what is normalization and denormalization?
    How do I implement them during design (OLTP, OLAP)?
    I read the Oracle documents; they are very hard to understand.
    Could anyone show some examples, please?
    Normalization is not an Oracle issue; it is a basic data design issue. Ideally it applies to all systems, not just RDBMS systems and certainly not just Oracle. Where have you tried to find information on it? Something as simple as http://lmgtfy.com/?q=data+normalization
    Data normalization is so basic to systems design that I'm astounded at the number of people who call themselves professionals yet don't even know what it is. I won't even venture (in a public forum) a guess as to how that happens.
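    For a concrete example of the sort that was asked for: below is a minimal sketch of taking one wide table to third normal form. All table and column names are hypothetical, and the Oracle DDL is issued over plain JDBC only so the sketch stays in one self-contained unit.
        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class NormalizationSketch {
            // Replaces one wide table (customer data repeated on every
            // order row) with 3NF tables where each fact is stored once.
            public static void createSchema(Connection conn) throws SQLException {
                try (Statement st = conn.createStatement()) {
                    // Unnormalized: customer_name and customer_city repeat on
                    // every order row, so an address change touches many rows.
                    st.execute("CREATE TABLE orders_flat (order_id NUMBER, "
                        + "customer_name VARCHAR2(100), customer_city VARCHAR2(100), "
                        + "product_name VARCHAR2(100), qty NUMBER)");
                    // 3NF: every non-key column depends on the key, the whole
                    // key, and nothing but the key.
                    st.execute("CREATE TABLE customers (customer_id NUMBER PRIMARY KEY, "
                        + "name VARCHAR2(100), city VARCHAR2(100))");
                    st.execute("CREATE TABLE products (product_id NUMBER PRIMARY KEY, "
                        + "name VARCHAR2(100))");
                    st.execute("CREATE TABLE orders (order_id NUMBER PRIMARY KEY, "
                        + "customer_id NUMBER REFERENCES customers)");
                    st.execute("CREATE TABLE order_lines (order_id NUMBER REFERENCES orders, "
                        + "product_id NUMBER REFERENCES products, qty NUMBER, "
                        + "PRIMARY KEY (order_id, product_id))");
                }
            }
        }
    Denormalization is the deliberate reversal of this trade for read performance, as in OLAP star schemas: redundancy is accepted in exchange for fewer joins.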

  • Data segregation into different columns

    Hi,
    I need help with the following:
    I have data in a table as
    col1 col2
    3 343
    3 567
    3 333
    3 987
    3 987
    There are other columns too. I have to populate this like:
    col1 c1
    3 (1) ---- 35399
    3 (2) ---- 46388
    3 (3) ---- 37377
    Let me explain this:
    col1 defines how many columns col2 will have. There will be a definite 10 rows (per record set).
    The output will have 3(1) as the records of row 1 in col2... likewise.
    Col2 can have a maximum of 26 columns. This means the result set will have a max of 26 rows.
    Please let me know if there is confusion in the requirement.
    Thanks
    Edited by: user2544469 on Feb 9, 2011 4:38 AM

    user2544469 wrote:
    Hi,
    I need help with the following:
    I have data in a table as
    col1 col2
    3 343
    3 567
    3 333
    3 987
    3 987
    There are other columns too. I have to populate this like:
    col1 c1
    3 (1) ---- 35399
    3 (2) ---- 46388
    3 (3) ---- 37377
    Let me explain this:
    col1 defines how many columns col2 will have. There will be a definite 10 rows (per record set).
    The output will have 3(1) as the records of row 1 in col2... likewise.
    Col2 can have a maximum of 26 columns. This means the result set will have a max of 26 rows.
    Please let me know if there is confusion in the requirement.
    Thanks
    Edited by: user2544469 on Feb 9, 2011 4:38 AM
    Columns do not contain columns. Columns do not contain "fields". I think you need to go back and revisit the basic rules of data normalization, and get this data model to third normal form.

  • Getting Started Sharing Data with SAP?

    Hi Folks, as will be apparent, I'm a complete newbie with SAP.
    Long and short of it is this: I have an app that generates project plans for maintenance systems, costs and budgets them, and currently reports as a stand-alone package.
    A number of clients who use the package already have SAP handling some of these aspects, and would therefore like some degree of integration: primarily for our package to pass maintenance dates, costs and other performance data to SAP, and to update work-scope progress etc. It would be useful if we could have two-way linkage too.
    My view is that we should be able simply to access the relevant columns in SAP-managed table(s), but I may be being very simplistic and naive?
    We use Java - developed with Servoy RAD, and access JDBC DBs only.
    Q: Is there a Java Bean available that opens some degree of data sharing with SAP?
    Q: Where should I begin looking for info on how to do this?
    Q: Is it likely that we need an SAP pro to develop this integration?
    Appreciate any feedback or guidance on where to look next!

    Note that this forum is dedicated to all other development-related questions which are not directly addressed by other forums. This includes Business Objects SDKs, products, or technologies which do not fall under BusinessObjects Enterprise, BusinessObjects Edge, Crystal Reports Server, or Crystal Reports (for example Desktop Intelligence SDK, Universe Designer SDK, Portal Integration Kits, Java User Function Libraries, and other third-party technologies or development languages).
    I don't believe your query addresses any of the above products thus you may want to pose your query to one of the SAP product forums.
    Ludek

  • Guidance to become an Oracle 10g DBA

    Hi...
    I am a lecturer in an engineering college. I want to change my job as I am interested in becoming a DBA, so I have joined an Oracle 10g DBA course through Oracle University, which will be starting on 25th June. Can anyone guide me how to go about it, so that I can prepare well for interviews and place myself in a good company? Also, please let me know which has better scope: DBA or data warehousing.

    Hi,
    My list of Oracle DBA job skills include the following:
    Excellent communication skills - The Oracle professional is the central technical guru for the Oracle shop. He or she must be able to explain Oracle concepts clearly to all developers and programmers accessing the Oracle database. In many shops, the Oracle DBA is also a manager and is required to have excellent communication skills for participating in strategic planning and database architectural reviews.
    Formal education – Many employers require Oracle professionals to have a bachelor’s degree in computer science or information systems. For advanced positions such as an Oracle DBA, many employers prefer a master’s degree in computer science or a master’s in business administration (MBA).
    Real-world experience - This requirement is the catch-22 for newbies who possess only an OCP certificate. A common complaint of people who have OCP certificates but no job experience is that they cannot get a job without experience, and they cannot get experience without a job. This is especially true in a tight job market.
    Knowledge of database theory - In addition to mastering the technical details required for the OCP exams, the successful Oracle professional must have an understanding of database and data warehouse design. This includes intimate knowledge of data normalization theory and of star schema design, as well as object-oriented modeling with Unified Modeling Language (UML) and knowledge of related technologies such as CORBA and J2EE.
    Here is my list of DBA job skills:
    http://www.dba-oracle.com/oracle_tips_dba_job_skills.htm
    Here are notes on becoming a DBA:
    http://www.dba-oracle.com/t_how_to_become_an_oracle_dba.htm
    Can anyone guide me how to go about it, so that I can prepare well for interviews and place myself in a good company?
    First and foremost, show that you have good communication skills.
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author

  • SRM Business Content

    Hi All,
    I would like to know the InfoSource of the 0SR_C01 Procurement Overview (Aggregated) cube. help.sap.com shows 0SR_MC01 as a MultiProvider on top of 0SR_C01, but I could not find the InfoSource for it.
    I would also like to know if anyone has the list of content InfoCubes and their InfoSources for SRM. Please share it with me.
    Thanks in advance
    Niren

    Hi,
    for all Business Content, you can start here:
    http://help.sap.com/saphelp_nw04s/helpdata/en/3d/5fb13cd0500255e10000000a114084/frameset.htm
    SCM:
    http://help.sap.com/saphelp_nw04s/helpdata/en/29/79eb3cad744026e10000000a11405a/frameset.htm
    and BI Best Practice:
    http://help.sap.com/bp_biv335/BI_EN/html/bw.htm
    Check this link for SRM Business Content information: http://help.sap.com/saphelp_nw2004s/helpdata/en/3a/7aeb3cad744026e10000000a11405a/frameset.htm
    DataSources
    New DataSources have been created for NW 2004s BI Content Add-On 2 and SRM 5.0. These are described here, along with their extraction logic.
    The following graphic shows the data flow from the DataSources to the DataStore objects.
    Structure
    Each DataSource has an extraction structure and a data extractor. Generally, this is an RFC-capable function module.
    There are two types of extraction: specific extraction and generic extraction.
    Specific extraction is strongly oriented towards the data model of the BI System and the necessary extract structure. It executes DataSource-specific processing steps.
    The logic contained within the generic extraction is implemented centrally and can therefore be reused by the specific extraction modules. The generic extraction deals with access to the SRM database and is similar for each DataSource.
    Generic and specific extraction
    Extraction logic in detail
    Data extraction involves the following six function modules:
    InitLoad
    InitLoad is called as soon as an initial upload is started in the BI System. In other words, it is executed when data is loaded into the BI System for the first time. The module can restrict the data extraction according to filter criteria that were defined in BI during extraction (for example, extract the data for a particular timeframe). InitLoad packages the data before it is transferred to the BI System.
    DeltaLoad
    DeltaLoad is called when a document is modified in the SRM System. The module writes data that has accumulated since the initial loading to a delta queue.
    Data Collector
    The Collector reads the required data from the database in the source system. For performance reasons, the data is then buffered in system-internal tables. The tables have the same structure as in the database. This saves time and effort, because data conversion is kept to a minimum.
    Data Consolidator
    The Consolidator groups data that belongs together from a business point of view. The consolidated data is stored in internal tables and is prepared for further processing.
    Data Denormalizer
    The Denormalizer prepares the source data for the BI System. It calculates key figures that are required for the report but are not contained on the database and converts data types in the source system to data types of the BI System. Depending on the BI data model, the system now decides how each row is filled with data.
    Data Mapper
    During mapping of the data, the system defines which fields of the denormalized data structure are assigned to the fields of the extract structure. In this way, flat and one-dimensional lines are generated from the data.
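    To visualize the stage order just described, here is a purely illustrative sketch in Java; every name in it is invented, since the real implementations are the ABAP function modules listed in the table below.
        import java.util.List;

        public class ExtractionPipeline {
            // Invented stage interfaces, mirroring the description above.
            interface Collector { List<Object[]> readFromDatabase(); }            // buffer raw rows
            interface Consolidator { List<Object[]> group(List<Object[]> raw); }  // group related rows
            interface Denormalizer { List<Object[]> derive(List<Object[]> g); }   // key figures, type conversion
            interface Mapper { List<Object[]> map(List<Object[]> d); }            // flat lines for the extract structure

            // InitLoad and DeltaLoad drive this chain; each stage feeds the next.
            static List<Object[]> run(Collector c, Consolidator co, Denormalizer dn, Mapper m) {
                return m.map(dn.derive(co.group(c.readFromDatabase())));
            }
        }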
    The following table shows the technical names of the function modules of the individual DataSources:
    Description  | 0SRM_TD_PO        | 0SRM_TD_CF        | 0SRM_TD_IV        | 0SRM_TD_PO_ACC
    InitLoad     | BBP_BI_PO_EXTRACT | BBP_BI_CF_EXTRACT | BBP_BI_IV_EXTRACT | BBP_BI_PO_ACC_EXTRACT
    Collector    | BBP_BI_GEX_COLLECT | BBP_BI_GEX_COLLECT | BBP_BI_GEX_COLLECT | BBP_BI_GEX_COLLECT
    Consolidator | BBP_BI_GEX_CONSOLIDATE | BBP_BI_GEX_CONSOLIDATE | BBP_BI_GEX_CONSOLIDATE | BBP_BI_GEX_CONSOLIDATE
    Denormalizer | BBP_BI_GEX_DENORMALIZE | BBP_BI_GEX_DENORMALIZE | BBP_BI_GEX_DENORMALIZE | BBP_BI_GEX_DENORMALIZE
    Mapper       | BBP_BI_PO_MAP     | BBP_BI_CF_MAP     | BBP_BI_IV_MAP     | BBP_BI_PO_ACC_MAP
    DeltaLoad    | BBP_BI_DELTA_UPDATE | BBP_BI_DELTA_UPDATE | BBP_BI_DELTA_UPDATE | BBP_BI_DELTA_UPDATE
    Hope this helps,
    Regards
    CSM Reddy

  • Business Partner Relationship

    Hi All,
    I have a requirement in which I need to store single-value fields, multi-value fields and also standard BP fields (like address, telecom) within a BP relationship.
    Requirement: A student and a university are two business partners. Student and university are linked by a relationship called 'qualification'. Attributes of the relationship are degree, receive date and location. The standard BP address fields need to be used for location.
    I have an idea of how to store single-value fields and multi-value fields as relationship attributes.
    Question: How do I use the standard BP address fields as a relationship attribute? Does anyone know any BAPI that can be used to store the address without the business partner?
    PS: The BAPI_BUPA_ADDRESS_ADD and BAPI_BUPA_ADDRESS_CHANGE BAPIs require a business partner as an input field.
    Thanks,
    Raghavendra

    Hi,
    As far as I know, there is no BAPI provided for relationships other than contact person, employee and shareholder, for which you can assign an address to the relationship.
    For contact person relationship there is a BAPI BUPR_CONTP_ADDR_CHANGE
    etc.
    Thanks and warm regards,
    Smita.

  • Method to Use in Creating a Database--need assistance

    I'm new to Oracle APEX and would like to create a database. Let me tell you the scenario. I'm initially capturing 4 types of information: from focus groups, survey results, blog posts, and drop-off surveys. I would like to input the information gathered from each session into its own table, with some tables having common fields. Then I would like to maintain one master table which would have all fields. At some point, I would like to reference the common fields in each table. What's the best way to approach this? Maybe I do not need multiple tables; maybe I can have one large application and build a report to pull the information. However, I think the recommendation was to make 4 separate tables. Please advise.

    user13149997 wrote:
    I'm new to Oracle APEX and would like to create a database. Let me tell you the scenario. I'm initially capturing 4 types of information: from focus groups, survey results, blog posts, and drop-off surveys. I would like to input the information gathered from each session into its own table, with some tables having common fields. Then I would like to maintain one master table which would have all fields. At some point, I would like to reference the common fields in each table. What's the best way to approach this? Maybe I do not need multiple tables; maybe I can have one large application and build a report to pull the information. However, I think the recommendation was to make 4 separate tables. Please advise.
    I'd advise you to google "data normalization" and get familiar with the concept of 'normalizing' your data to "third normal form" (aka TNF or 3NF). A proper analysis of your data, and how the different data elements relate, will dictate how many tables you need, what they contain, and how they relate. I can almost guarantee that trying to use a single table would be a mistake. That's why we call them relational databases, not just data dumps.
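    A sketch of where that analysis usually lands for this kind of requirement (all names hypothetical, Oracle DDL over JDBC): the fields common to all four capture methods go once into a parent table, and each method keeps only its own fields in a child table, instead of one master table full of NULLs.
        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SurveySchemaSketch {
            public static void createSchema(Connection conn) throws SQLException {
                try (Statement st = conn.createStatement()) {
                    // Fields shared by all four capture methods live once here.
                    st.execute("CREATE TABLE capture_sessions ("
                        + "session_id NUMBER PRIMARY KEY, "
                        + "capture_type VARCHAR2(20) NOT NULL CHECK (capture_type IN "
                        + "('FOCUS_GROUP','SURVEY','BLOG','DROP_OFF')), "
                        + "captured_on DATE, location VARCHAR2(100))");
                    // Each method keeps only its own specific fields.
                    st.execute("CREATE TABLE focus_groups (session_id NUMBER PRIMARY KEY "
                        + "REFERENCES capture_sessions, moderator VARCHAR2(100), group_size NUMBER)");
                    st.execute("CREATE TABLE blog_posts (session_id NUMBER PRIMARY KEY "
                        + "REFERENCES capture_sessions, url VARCHAR2(400))");
                    // survey_results and drop_off_surveys would follow the same pattern.
                }
            }
        }
    A report that needs everything together can then be a view joining the parent to its children, which replaces the proposed single master table.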

  • Bex Exception Aggregation in 3.5 equivalent to 'TOTAL'

    Hi All,
    Can anyone help me get functionality equivalent to the exception aggregation 'TOTAL' in BW 3.5?
    I have a requirement which can only be fixed using the exception aggregation 'TOTAL'; unfortunately my current client does not have BI 7.0. So can anyone tell me what the equivalent of 'TOTAL' is in BW 3.5?
    Regards,
    Anand

    I need to develop a report which will show head count and annual salary by org unit and degree (education data).
    As there can be more than one education record for an employee, the annual salary gets doubled or tripled in the DSO or cube.
    If I take an average of annual salary based on employee, the overall result goes wrong.
    For example, employee 20045789 has three records with a key figure value of 10,000, and the result I see in the report is 30,000.
    Say in the backend:
    EMPLOYEE  | Degree | Annual Salary
    20045788  | A      | 50,000
    20045788  | B      | 50,000
    20045789  | A      | 10,000
    20045789  | B      | 10,000
    20045789  | C      | 10,000
    In the query I get:
    Org Unit | Employee  | Annual Salary | Average (Annual Salary / Number of Employees)
    100001   | 20045788  | 100,000       | 50,000
    100001   | 20045789  | 30,000        | 10,000
    Overall result         130,000       | 26,000 (130,000 / 5)
    The expected overall result is (50,000 + 10,000) / 2 = 30,000.
    Please provide a solution.
    Regards,
    Anand
    Edited by: araj123 on Jun 3, 2011 2:25 PM
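    Independent of the BW mechanics, the arithmetic the report has to perform is: deduplicate employees first, then average. A small Java sketch of the target computation, with the numbers from the example above hard-coded:
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class HeadcountAverage {
            public static void main(String[] args) {
                // One salary per employee, even though the DSO holds one row per
                // education record (20045788 twice, 20045789 three times).
                Map<String, Integer> salaryByEmployee = new LinkedHashMap<String, Integer>();
                salaryByEmployee.put("20045788", 50000);
                salaryByEmployee.put("20045789", 10000);

                int total = 0;
                for (int salary : salaryByEmployee.values()) {
                    total += salary;
                }
                // (50,000 + 10,000) / 2 = 30,000 -- the expected overall result,
                // not 130,000 / 5 = 26,000, which double-counts education rows.
                System.out.println(total / salaryByEmployee.size());
            }
        }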

  • What is the use of an attribute's ColumnType/SQLType entries in a programmatic VO?

    Hi all,
    I am using JDeveloper 11.1.1.4.0.
    I want to get some clarification on whether ColumnType and SQLType entries are required for a programmatic VO or not.
    What I tested:
    I created a client interface method which creates a new view criteria for the VO (EffBgnDt should be less than or equal to the current date) and applies it. I also added a console output to see how the clause for the cache is framed. In the VO XML, since we are able to edit the ColumnType and SQLType of an attribute, I changed those for the Date attribute, applied the VC, and have detailed below how the clause for the cache was generated for the various SQLType values.
        public void applyVc() {
            final String _CRITERIA_ = "TestCriteria";
            Date currentDt = new Date("2011-05-23");
            if (this.getViewCriteria(_CRITERIA_) != null) {
                this.getViewCriteriaManager().removeViewCriteria(_CRITERIA_);
            }
            ViewCriteria vc = this.createViewCriteria();
            vc.setName(_CRITERIA_);
            vc.setCriteriaMode(ViewCriteria.CRITERIA_MODE_CACHE);
            ViewCriteriaRow vcRow1 = vc.createViewCriteriaRow();
            ViewCriteriaItem row1BgnDtItem = vcRow1.ensureCriteriaItem("EffBgnDt");
            row1BgnDtItem.setOperator(JboCompOper.OPER_LE);
            row1BgnDtItem.setValue(currentDt);
            row1BgnDtItem.setIsBindVarValue(false);
            vc.insertRow(vcRow1);
            System.err.println("Clause For Cache: " + this.getViewCriteriaManager().buildViewCriteriaClauses(vc).getClauseForCache());
            this.applyViewCriteria(vc);
            this.executeQuery();
        }
    1. With ColumnType="DATE", SQLType="DATE", the filter criteria works:
    Clause For Cache:  ( (TO_CHAR( EffBgnDt, 'yyyy-mm-dd') <= '2011-05-23' ) )
    2. With ColumnType="VARCHAR2", SQLType="VARCHAR", the filter criteria works:
    Clause For Cache:  ( (EffBgnDt <= '2011-05-23' ) )
    3. With ColumnType="BIGINT", SQLType="NUMERIC", I get a StackOverflowError:
    Clause For Cache:  ( (EffBgnDt <= 2011-05-23 ) )
    java.lang.StackOverflowError
         at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:443)
         at sun.util.calendar.AbstractCalendar.getCalendarDate(AbstractCalendar.java:147)
         at sun.util.calendar.Gregorian.getCalendarDate(Gregorian.java:55)
         at sun.util.calendar.Gregorian.getCalendarDate(Gregorian.java:19)
         at sun.util.calendar.AbstractCalendar.getTime(AbstractCalendar.java:193)
         at java.util.Date.normalize(Date.java:1237)
         at java.util.Date.normalize(Date.java:1184)
         at java.util.Date.getTimeImpl(Date.java:871)
         at java.util.Date.<init>(Date.java:237)
         at java.sql.Time.<init>(Time.java:40)
         at oracle.sql.DATE.toTime(DATE.java:319)
         at oracle.jbo.domain.Date.convertToJdbc(Date.java:540)
         at oracle.jbo.domain.Date.toString(Date.java:656)
         at oracle.jbo.domain.TypeConvMapEntry.getStringVal(TypeConvMapEntry.java:157)
         at oracle.jbo.domain.TypeConvMapEntry.convert(TypeConvMapEntry.java:102)
         at oracle.jbo.domain.TypeFactory.get(TypeFactory.java:855)
         at oracle.jbo.domain.TypeFactory.getInstance(TypeFactory.java:102)
         at oracle.jbo.RowComparator.compareValues(RowComparator.java:68)
         at oracle.jbo.RowComparator.compareValues(RowComparator.java:72)
         at oracle.jbo.RowComparator.compareValues(RowComparator.java:72)
    Questions:
    1. There is no consistency in the generation of the ViewAttribute entries. For example, while creating a new Date attribute, if you browse and select the oracle.jbo.domain.Date type, the XML entries of the ViewAttribute are generated with ColumnType="BIGINT" SQLType="NUMERIC". But if, instead of using the browse button, you select the attribute type from the Type dropdown itself, the entries are ColumnType="VARCHAR2" and SQLType="VARCHAR". Why this inconsistency in the generation of the entries?
    2. Though the view object is programmatic, are the ColumnType and SQLType entries somehow related to or needed for the filter criteria? (I could understand from the clause for the cache that it is generated based on the SQLType only.)
    [NOTE: I came across this issue while testing. In the view object creation wizard, adding a new attribute of the oracle.jbo.domain.Date type and then attempting to click the Next button leads to a NullPointerException. Details are given below.]
    java.lang.NullPointerException
         at oracle.jbo.dt.objects.JboViewAttr.setColumnType(JboViewAttr.java:1234)
         at oracle.jbo.dt.objects.JboAttribute.setColumnType(JboAttribute.java:1375)
         at oracle.jbo.dt.objects.JboAttribute.setColumnType(JboAttribute.java:1366)
         at oracle.jbo.dt.ui.view.VOAttributePanel.actionPerformed(VOAttributePanel.java:1482)
    Thanks in advance
    Raguraman

    The oracle.jbo.domain.Date class accepts dates and times in the same formats accepted by java.sql.Timestamp (either a long milliseconds time value or the year, month, day, hour, minute, second, nano format) and java.sql.Date (either a milliseconds time value or the year, month, day format).
    The doc reference
    http://download.oracle.com/docs/cd/B14099_19/web.1012/b14023/oracle/jbo/domain/Date.html
    So you will basically lose precision in the nanoseconds

  • Abrupt loss in performance

    Hi all.
    I just found that doing certain database work, like data normalization, is far faster in Java than in Sybase.
    Stats I have recorded:
    Records                    | Sybase  | Java
    less than 30,000 records:  | 2000R/S | 3000R/S
    less than 800,000 records: | 2.5R/S  | +2500R/S
    more than 800,000 records: | ?       | 25R/S
        public void process() {
            UniqueList<Person> persons = new UniqueList<Person>();
            // Create other unique lists for other tables.
            while (log.next()) {
                cleaner.clean(log.getLine());
                Record record = new Record(cleaner.getCleanedData());
                record.setPersonsIDKey(persons.add(record.getPerson()));
                // Set other ID keys for other tables.
                WhareHouse.storeRecord(record);
            }
        }

        public class UniqueList<Unique> {
            private int id = 0;
            private HashMap<Unique, Integer> list = new HashMap<Unique, Integer>();

            // Returns a fresh id for a new value, or minus the existing id for a
            // duplicate. (Note: this must be containsKey, not containsValue --
            // the objects being deduplicated are the map's keys.)
            public int add(Unique value) {
                if (!list.containsKey(value)) {
                    list.put(value, id);
                    return id++;
                } else {
                    return -1 * list.get(value).intValue();
                }
            }
        }
    I have 2 questions to ask, but let me ask the simpler of the 2 first.
    *<Question 1>*
    This is regarding Sets.
    Why is it that Sets use both equals and hashCode to determine uniqueness, even when the documentation says
    (o==null ? e==null : o.equals(e))? *</Question 1>*
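    On Question 1: HashSet and HashMap use hashCode() first to pick a bucket, and only then call equals() within that bucket, so the two methods must be consistent (this is the contract documented on java.lang.Object). A minimal demonstration of what happens when they are not:
        import java.util.HashSet;
        import java.util.Set;

        public class HashContractDemo {
            // equals() is overridden; hashCode() deliberately is not.
            static final class BrokenKey {
                final String name;
                BrokenKey(String name) { this.name = name; }
                @Override public boolean equals(Object o) {
                    return o instanceof BrokenKey && ((BrokenKey) o).name.equals(name);
                }
                // Missing hashCode(): "equal" objects land in different buckets.
            }

            public static void main(String[] args) {
                Set<BrokenKey> set = new HashSet<BrokenKey>();
                set.add(new BrokenKey("a"));
                set.add(new BrokenKey("a"));
                // Almost always prints 2, not 1: the set never even calls
                // equals(), because the two objects hash to different buckets.
                System.out.println(set.size());
            }
        }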

    kajbj wrote:
    Execute it with GC printing so that you can see how often the GC is executing and for how long it's executing. It still sounds GC related.
    It looks that way.
    Here are 3 sections (beginning, middle, and the point of performance loss) of the log file from
    java -XX:+PrintGC -Xloggc:log.log Normalizer
    0.075: [GC 896K->183K(5056K), 0.0017151 secs]
    0.108: [GC 1079K->186K(5056K), 0.0005844 secs]
    0.109: [GC 1082K->187K(5056K), 0.0003445 secs]
    0.134: [GC 1083K->190K(5056K), 0.0002221 secs]
    0.130: [GC 1086K->191K(5056K), 0.0001363 secs]
    0.152: [GC 1087K->195K(5056K), 0.0002240 secs]
    0.145: [GC 1091K->197K(5056K), 0.0001318 secs]
    0.165: [GC 1093K->200K(5056K), 0.0002152 secs]
    0.159: [GC 1096K->201K(5056K), 0.0001563 secs]
    0.179: [GC 1097K->203K(5056K), 0.0002125 secs]
    171.522: [GC 55672K->52095K(56024K), 0.0005632 secs]
    171.533: [GC 55679K->52103K(56024K), 0.0004544 secs]
    171.544: [GC 55687K->52111K(56024K), 0.0004587 secs]
    171.554: [GC 55695K->52114K(56024K), 0.0004445 secs]
    171.609: [Full GC 52150K->52150K(56024K), 0.1150563 secs]
    171.736: [GC 56151K->52158K(65088K), 0.0046462 secs]
    171.752: [GC 56254K->52165K(65088K), 0.0004931 secs]
    171.778: [GC 56261K->52172K(65088K), 0.0004828 secs]
    171.790: [GC 56268K->52181K(65088K), 0.0004062 secs]
    171.802: [GC 56277K->52182K(65088K), 0.0003858 secs]
    220.279: [GC 64706K->60616K(65088K), 0.0006192 secs]
    220.292: [GC 64712K->60623K(65088K), 0.0005213 secs]
    220.304: [GC 64719K->60632K(65088K), 0.0005268 secs]
    220.317: [GC 64728K->60638K(65088K), -0.0130646 secs]
    220.329: [GC 64734K->60649K(65088K), 0.0006524 secs]
    220.329: [Full GC 64745K->60655K(65088K), 0.1366094 secs]
    220.481: [Full GC 65087K->60661K(65088K), 0.1351008 secs]
    220.645: [Full GC 65087K->60668K(65088K), 0.1205051 secs]
    220.794: [Full GC 65087K->60663K(65088K), 0.1209526 secs]
    220.930: [Full GC 65087K->60664K(65088K), 0.1348696 secs]
    And from here on it only does Full GCs, each taking about a tenth of a second.
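    Reading the log: the heap is capped at about 65,088K while roughly 60,600K of it survives every collection, so each Full GC reclaims almost nothing and the JVM ends up collecting back-to-back. Assuming the live data set really is that large (which the growing UniqueList maps suggest), the first thing to try is simply a bigger heap, e.g.:
        java -Xmx512m -XX:+PrintGC -Xloggc:log.log Normalizer
    If throughput still degrades as the maps grow, the in-memory UniqueList itself will eventually need to spill to disk or back to the database.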
