KKA2 incorrect results

Hi,
Based on SAP Note 38070, we have configured new line IDs in our landscape. We have found that if a new WBS is created and simulated
in KKA2, the results are correct, whereas for an already created WBS we get incorrect results in the KKA2 simulation. The new line IDs point to both old and new WBS elements. Is this note only for new WBS elements?
Please help, as this is a show stopper for our business.
Kind Regards,
Kalyan

Here's an example of what I'm talking about.
This first query compares a simple geometry with a second one, defined as almost the full geographic extent (-179, 179, -89, 89). Nearly every possible geometry will interact with this in some way. However, the result I get is 'Disjoint'.
SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
            2003,
            8307,
            null,
            MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
            MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
            'DETERMINE',
            MDSYS.SDO_GEOMETRY(
            2003,
            8307,
            null,
            MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
            MDSYS.SDO_ORDINATE_ARRAY(-179,   89,   -179,  -89,  179, -89,  179,  89,  -179,  89)), '0.005')
      from DUAL;
If I make the second geometry smaller so that it starts at 0, i.e. (0, 179, -89, 89), then I correctly get the result 'Inside'.
SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
            2003,
            8307,
            null,
            MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
            MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
            'DETERMINE',
            MDSYS.SDO_GEOMETRY(
            2003,
            8307,
            null,
            MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
            MDSYS.SDO_ORDINATE_ARRAY(0,   89,  0,  -89,  179, -89,  179,  89,  0,  89)), '0.005')
            from DUAL;
It would be ideal if someone could confirm or deny this behaviour on a fully patched 10g or even 11g database.
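One thing worth checking (a sketch; I have not confirmed this is the cause) is whether the near-global window is even a valid geodetic geometry, since Oracle limits the size of polygon elements in geodetic coordinate systems:
SELECT  SDO_GEOM.validate_geometry_with_context(MDSYS.SDO_GEOMETRY(
            2003,
            8307,
            null,
            MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
            MDSYS.SDO_ORDINATE_ARRAY(-179,   89,   -179,  -89,  179, -89,  179,  89,  -179,  89)), 0.005)
            from DUAL;
If this returns anything other than 'TRUE', the 'Disjoint' answer may simply be relate() operating on an invalid query window; the (0, 179, -89, 89) version stays within the documented size limit, which would explain why it behaves.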

Similar Messages

  • Oracle Discoverer report pulls incorrect result when scheduled.

    Recently the database was migrated to 10.1.2 RAC from 9.2.0.6, so the Discoverer EUL now resides on the new database.
    After the migration, the report, which pulls correct results when run interactively, is pulling incorrect results when scheduled in Discoverer.
    The report uses SYSDATE and aggregate functions. I ran the same report simultaneously (directly in Discoverer Desktop/Plus and scheduled in Discoverer), but the data retrieved in the two cases does not match.
    Here is the query. Any help is appreciated.
    SELECT /*+ FIRST_ROWS */ A.SITE_ID as E175108,B."SYSTEM DESCRIPTION" as System_Prefix,
    B."SYSTEM PREFIX" as System_Description,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) < 0 THEN 1 ELSE TO_NUMBER(NULL) END) as Less_than_0_Days,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) > 121 THEN 1 ELSE TO_NUMBER(NULL) END) as 0_to_14_Days,
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),3,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as 14_to_30_Days,
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),2,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as 31_to_60_Days,
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),1,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as 61_to_90_Days,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 15 AND 30 THEN 1 ELSE TO_NUMBER(NULL) END) as 91_to_120_Days,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 0 AND 14 THEN 1 ELSE TO_NUMBER(NULL) END) as 120_Days_Plus,
    COUNT(TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE)) as Total
    FROM PSTAGE.ALL_EQUIPMENT A,
    ( SELECT A.SITE "SYSTEM PREFIX", A.DESCRIPTION "SYSTEM DESCRIPTION", A.SITE_ID, B.SITE_DESCRIPTION, A.G2B_ID
    FROM SITE_LIST A, ALL_CF_SITE_CONTROL B
    WHERE A.SITE_ID = B.SITE_ID
    ORDER BY 1, 3
    ) B
    WHERE ( (B.SITE_ID = A.SITE_ID))
    AND (A.EQUIPMENT_STATUS_CODE IN ('T','7'))
    GROUP BY A.SITE_ID,B."SYSTEM DESCRIPTION",B."SYSTEM PREFIX"
    ORDER BY B."SYSTEM DESCRIPTION" ASC ;
    Thanks!

    Hi Sunil,
    Rod is referencing the NLS parameters, i.e.:
    "Can you please let me know which NLS parameters you are referring to"
    NLS parameters in this scenario may be the date and language settings for that session. Do check out:
    SELECT * from NLS_SESSION_PARAMETERS
    "how can I check if there are any differences in the NLS parameters when the report is scheduled or run interactively"
    I think you should run a trace file. I am not sure about it.
    It would be SYS_CONTEXT.
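    A quick way to compare the two environments is to run the same check once interactively and once from a scheduled run, then diff the output (a sketch; the USERENV parameter names are standard Oracle, but capturing the output from the scheduler is an assumption):
    -- run once interactively and once inside the scheduled job, then compare:
    SELECT parameter, value
    FROM nls_session_parameters
    WHERE parameter IN ('NLS_DATE_FORMAT', 'NLS_DATE_LANGUAGE', 'NLS_TERRITORY');
    -- SYS_CONTEXT exposes the same settings one value at a time:
    SELECT SYS_CONTEXT('USERENV', 'NLS_DATE_FORMAT') AS nls_date_format,
           SYS_CONTEXT('USERENV', 'NLS_DATE_LANGUAGE') AS nls_date_language
    FROM dual;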
    Hope it helps you.
    Kranthi.

  • BEx Query Providing Incorrect Results

    I have two BEx queries that are behaving strangely.
    They are providing incorrect results and I can't figure out why.
    In both cases, when I save the query in development under a different technical name, the problem disappears. I can't use this as a permanent solution because users insert the queries in workbooks and we have portal links.
    Is there a way to somehow fix these queries that are acting strangely?
    Query 1:
    When I run the query I get the error message
    "No roll storage space of length 120 available for OCCURS area"
    Query 2: calculates a wrong average
    When I run the query:
    Cumulative Qty Customer Balance and Quantity Customer Movement are added together.
    Aggregation is set to Average, with reference characteristic Posting Period.
    The average BEx calculates only considers Cumulative Quantity End Customer Balance. It should consider Quantity Customer Movement as well.
    Avg formula: ((Jan End - (.5 * Jan Mov)) + (Feb End - (.5 * Feb Mov)) + (Mar End - (.5 * March Mov))) / 3
    I have used the same methodology to calculate averages in other queries and there the average calculates correctly. Those other queries have the same InfoProvider and use the same dimensions as the queries that are calculating the average incorrectly.

    Hi Mti
    Thank you for your reply.
    I have checked the details as per your suggestion:
    1. KF Aggregation tab in the Workbench: Exception Aggregation is on SUM Summation.
    2. RKF on The Query is on (Nothing Defined).
    Is there anything else that can be checked?
    Thank you

  • Query with Cost Center Hierarchy giving incorrect results

    Hi All,
    I have a universe built on a BEx query on the Cost Center cubes. When enabling the hierarchy in the BEx query and building a Web Intelligence report based on the universe, I get incorrect results: the levels of the hierarchy are incorrect, many of the cost centers are missing, etc. I checked the universe and confirmed that all levels of the hierarchy are generated correctly. The LOVs generated for these levels are correct, and I see the complete hierarchy when using the BEx variable in the universe for filtering.
    I tried the same query with the hierarchy disabled, through a different universe, and it provides correct results. Not sure what I'm missing here. Any input regarding this is appreciated.
    Thanks & Regards,
    Sree

    Ingo, thanks for your suggestion. Of course, I did update the universe after any changes to the query. I tried different query settings related to the hierarchy to make it work, but it didn't make any difference and I consistently get incorrect results.
    One thing I wanted to confirm is whether there is any known bug in SP2 Fix Pack 2.7 related to hierarchies. If not, it might be me doing something wrong, and I will look into it in more detail.
    Thanks & Regards,
    Sree

  • Incorrect result set when using isnull() function in IQ 16

    Hi team,
    We have IQ 16 on HP UX.
    When we use the isnull() function in the where clause, we get an incorrect result set if the column is not also used in the select list.
    With the first select we get a result with one row, but with the second one we get an empty result set.
    select ID, dat_start, dat_end, dat_stop
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end
    select ID
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end
    It depends on the number of rows or the volume of data in the table. It is possible to use the option Revert_To_V15_Optimizer to get the correct result.
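    For reference, the workaround can be set per connection before running the statement (a sketch of the option syntax; assumes the option is available in your IQ 16 build):
    SET TEMPORARY OPTION Revert_To_V15_Optimizer = 'ON';
    -- then re-run the failing statement:
    select ID
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end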
    Do you have any different idea how to solve it?
    Thanks Milos.

    We have tested two versions:
    Sybase IQ/16.0.0.653/131122/P/sp03/ITANIUM/HP-UXi 11.31/64bit/2013-11-22 01:49:18
    SAP IQ/16.0.0.807/140507/P/sp08/ITANIUM/HP-UXi 11.31/64bit/2014-05-07 21:11:45
    Both versions exhibit the same mistake.
    We have not opened a support case for this issue because it is a data-dependent issue; it is not easy to reproduce as a small example.
    Do you think we should open a support case for it?
    Miloš

  • Rp_provide_from_last returns incorrect result

    Hi
    When issuing rp_provide_from_last for IT2001, we get an incorrect result.
    rp-provide-from-last p2001 space '19000101' '99991231'
    This macro does not return the latest record. Instead it returns the record with the highest subtype number. (It actually returns the last record shown in an SE16N listing of PA2001.)
    Has anyone seen this problem?
    We are on SAP 4.7., SP 85.
    Best regards
    Kirsten

    Please try this.
    Usage:
    Only in PNP database reports under GET PERNR, because the personnel number for which data is being read comes from field PERNR-PERNR, while the field being used is PNP-SW-AUTH-SKIPPED-RECORD.
    (RP_READ_ALL_TIME_ITY beg end)
       DATA: BEGDA LIKE P2001-BEGDA, ENDDA LIKE P2001-ENDDA.
       INFOTYPES:  0000, 0001, 0002, ...
                         2001 MODE N, 2002 MODE N, ...
         GET PERNR.
       BEGDA = '19900101'. ENDDA = '19900131'.
       RP_READ_ALL_TIME_ITY BEGDA ENDDA.
       IF PNP-SW-AUTH-SKIPPED-RECORD NE '0'.
          WRITE: / 'Authorization for time data missing'.
          WRITE: / 'for personnel number', PERNR-PERNR. REJECT.
       ENDIF.
    Remarks
    This RMAC module can be used when, for example, the time infotypes were originally defined in MODE N. This was done because the time data (from LOW-DATE to HIGH-DATE) might not all have fitted into the buffer. Now, however, they are read with shorter intervals (for example, in RPCALCx0 with payroll periods).
    -Due to the large amount of data in HR, the infotypes 2000 – 2999 should not be read when GET PERNR occurs. Therefore, these infotypes are declared with the enhancement MODE N.
    -As a result, the infotype tables under GET PERNR are not filled. The time infotype tables are filled subsequently using the macro RP_READ_ALL_TIME_ITY, but only for the time interval specified by PN-BEGDA and PN-ENDDA.
    http://help.sap.com/saphelp_45b/helpdata/en/60/d8bb88576311d189270000e8322f96/content.htm
    Best Regards

  • Subtracting two numbers in double format gives incorrect result

    Hello,
    I have a table with two fields of type Number (Field Size: Double; Decimal Places: Auto). When I try to subtract one field from the other in a query, I get incorrect results:
    Field1               Field2               Result
    2.60299223923846     2.60259423701324      3.98002225218796E-04
    0.644498511499839    0.645908903902985    -1.41039240314556E-03
    1.51021791504783     1.51372591514976     -3.50800010193808E-03
    When I paste the above into Excel, I get the correct results:
    Field1               Field2               Result
    2.60299223923846     2.60259423701324      0.00039800222521880
    0.64449851149984     0.64590890390299     (0.00141039240314556)
    1.51021791504783     1.51372591514976     (0.00350800010193808)
    Would much appreciate any help on how to get the correct values in the Access query.
    Thank you

    Field1               Field2               Result
    2.60299223923846     2.60259423701324      3.98002225218796E-04
    0.644498511499839    0.645908903902985    -1.41039240314556E-03
    1.51021791504783     1.51372591514976     -3.50800010193808E-03
    Hi Vasilii,
    In my opinion the results are correct; they are just presented in scientific notation. You can do some formatting to display the results the way you want.
    See the Help on the use of Format.
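    For example, wrapping the subtraction in Format forces fixed-notation display (a sketch; MyTable and the format string are assumptions, and note that Format returns text, so sort on the raw difference if ordering matters):
    SELECT Field1, Field2,
           Format(Field1 - Field2, "0.00000000000000000") AS Result
    FROM MyTable;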
    Imb.

  • BI standard Query giving incorrect results

    Hi all,
    I am working on FI and have activated the BI content for FI AR, FI GL and FI AP.
    I have loaded the data successfully and it reconciles with R/3 at the data target level, which is cube 0FIAR_C05.
    But when I run the standard query 0FIAR_C05_Q0001 I get incorrect results.
    Cube:
    customer ! num of payments ! payment amt
    426      ! 2               ! 10,000
    Query result:
    426      ! 2               ! 100,000
    This looks like a scaling error, but I can't understand why a standard query has this issue.
    Is this a bug?
    I know we can adjust the scaling factor, but this will cause issues...
    can some one advice me on this problem.
    Thanks in advance
    CG

    Hi Praveen,
    The scaling factor in the query says "From key figure",
    and in the properties of the KF it says scaling factor 1,
    but this came as standard and I can't see the proper reason.
    Thanks for ur quick reply

  • SQL Server 2014 - Columnstore Incorrect Results

    Hello,
    we are running into a problem with SQL Server 2014 and the columnstore index. We have a partitioned table with about 300 million records in it. With SQL Server 2012 this had been in use without problems.
    Since we upgraded to SQL Server 2014, the exact same queries on exactly the same data return incorrect results. We can only bypass the problem by either dropping the CS index or adding a MAXDOP = 1 query hint.
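    For reference, the hint goes at the end of the statement (a sketch; the table and column names are placeholders for your partitioned columnstore table):
    SELECT some_key, SUM(amount) AS total   -- placeholder columns
    FROM dbo.FactTable                      -- placeholder table carrying the columnstore index
    GROUP BY some_key
    OPTION (MAXDOP 1);                      -- a serial plan sidesteps the wrong results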
    I thought this was an old bug in SQL Server 2012? We have not installed CU4 for SQL Server 2014 yet, but will it solve the problem (assuming others have faced the same problem)?
    We are running: Microsoft SQL Server 2014 Enterprise Edition - 12.0.2000.8 (X64) on a 2x6Core Machine
    Thanks in advance!

    SQL Server 2012 only featured non-clustered columnstore indexes, which were separate structures. Have you changed to a clustered columnstore (CS) index in SQL Server 2014 (i.e. dropped your non-clustered CS index and created a clustered one)?
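    If in doubt, the catalog views show which kind of columnstore index the table carries (a sketch; dbo.FactTable is a placeholder):
    SELECT name, type_desc
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.FactTable')
      AND type_desc LIKE '%COLUMNSTORE%';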
    There are a number of fixes that reference columnstore indexes in the current CUs (CU1, CU2, CU3, CU4), but none which sound exactly like your problem.
    This sounds similar and is fixed in CU1. You should review the CU documents yourself to see if any of them mention a similar problem and then consider applying the CU. You might also try applying them to a test environment, or a temporary Azure VM for example, to see if one of them solves your problem.
    If you can create a reliable "repro" of the problem, consider raising a Connect item, which is a Microsoft bug report.

  • Hot News: Possible incorrect results in SAP BW system

    Everyone ,
    We recently identified an issue in SAP ASE which potentially causes incorrect results in a SAP BW system running on SAP ASE.
    The issue affects any application running on SAP ASE using optimisation goal 'allrows_dss' or a user-created optimisation goal that enables 'advanced_aggregation'.
    SAP BW specifies optimisation goal 'allrows_dss' for certain DSS queries and is affected by the issue.
    SAP ERP systems running on ASE are typically not affected, as SAP ERP systems are typically configured with optimisation goal 'allrows_mix'.
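    To check which goal a given system is actually running with (a sketch; assumes isql access to the ASE server):
    -- server-wide default optimisation goal:
    sp_configure "optimization goal"
    go
    -- session-level goal, as SAP BW sets it for the affected queries:
    set plan optgoal allrows_dss
    go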
    Details and corrections are available in SAP note
    2026328 - SYB: Incorrect results with SUM aggregation on decimal fields
    We strongly suggest implementing the corrections in SAP BW as soon as possible.
    With kind regards
    Tilman Model-Bosch

    Hi,
    Yes, I am using the MDX driver. 
    Are there any prerequisites, such as importing certain ABAP transports into the SAP server? I haven't done any so far; please advise.
    Thanks,
    Amogh

  • Incorrect results for calculation based on diff dimensions - 11.1.1.5

    Hello All,
    OBIEE gives incorrect results when I try to perform a calculation (e.g. an addition) based on 2 measures. For example:
    (Note: "->" signifies 1:M)
    Rpd (Physical model & BMM): dim_fe -> dim_gl-> Fact_Legder <- Dim_param
    Fact_Ledger (agg measures) -> YTD_01, YTD_02, ..., YTD_12 (here 01, 02, ..., 12 represent the month, i.e. if "Feb" is selected in the prompt then we need to use YTD_02, and so on for the other months)
    Answers: Created a report with following columns
    Column Name : Formula
    =================
    Line Item : 'Net Profit'
    Prev Yr Act: (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013}-1 and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=100)/1000) /
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013}-1 and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=100)/1000)
    Curr Yr Act: (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=100)/1000) /
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=100)/1000)
    Curr Yr Plan: case when '@{pmonth}{Jan}' = 'Jan' then
    (filter("Fact Ledger"."YTD_01" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_01" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    when '@{pmonth}{Jan}' = 'Feb' then
    (filter("Fact Ledger"."YTD_02" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_02" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    when '@{pmonth}{Jan}' = 'Dec' then
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    end
    The results are incorrect. Any help appreciated.
    Qry generated is like
    (select...
    case when year=.. and pl_lin=... and code=100 then ytd_01,
    case when year=.. and pl_lin=... and code=100 then ytd_03,
    case when year=.. and pl_lin=... and code=100 then ytd_04,....,
    case when year=.. and pl_lin=... and code=200 then ytd_01,
    case when year=.. and pl_lin=... and code=200 then ytd_03,
    case when year=.. and pl_lin=... and code=200 then ytd_04,....,
    from...
    where ... year in (2013-1, 2013) and pl_line in ('Item1','Item2','Item3') or fe.item in ('l1','l2','l3') and code in (100,200)... ) D1
    (select
    case when 'Apr'='Jan' then d1.c1 when 'Apr'='Feb' then d1.c2 ......
    from D1
    Regards..
    Shruti

    See if this explains it better for my crosstab with page items of Vendor Number 1234.
    Vendor 1234
    Dc Nbr                     1         2        4       AAAA
    Sum Invoice Amt      1387.04    300.82   327.29    2015.15
    Sum Cost               44.86     57.43    25.54     127.83
    Sum Advanced Cost     102.44      0        0        102.44
    Sum Consolidation Cost 30.37      0        0         30.37
    Sum Allowance Amt      27.74      6.02     6.54      40.30
    Net Freight Cost      149.93     51.41    19        220.34
    Freight Percent        10.81     17.09     5.81     ****
    As stated before, Freight Percent is a calculation I created in Discoverer that looks like this:
    ( NVL(Sum Cost,0)+NVL(Sum Advanced Cost,0)+NVL(Sum Consolidation Cost,0)-NVL(Sum Allowance Amt,0) )/NVL(Sum Invoice Amt,0)*100
    Column AAAA was created in Discoverer using Sum of field and show to the right.
    What I need is for the **** to be the correct calculation for the totals in column AAAA. If I do a total for Freight Percent using the Cell Sum, I get 33.70; what I want is 10.93, which is (127.83 + 102.44 + 30.37 - 40.30) / 2015.15 * 100.
    If I use an Average Total row for Freight Percent, I get 11.24, which is 33.70 / 3 (the 3 being the number of dc nbrs).
    I did start by using the detail-level data to create this crosstab. Then I made a new version and used the SUM data. I seem to get the same results but am still having issues with the one **** value.
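    In SQL terms, that target number is the ratio of sums rather than the sum (or average) of the row-level ratios; a sketch with made-up table and column names:
    SELECT (SUM(cost) + SUM(advanced_cost) + SUM(consolidation_cost) - SUM(allowance_amt))
             / SUM(invoice_amt) * 100 AS freight_percent   -- 10.93 for the totals above
    FROM freight_detail;                                   -- hypothetical detail table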
    Hopefully this explains it better.
    Thanks for the ideas so far.

  • LessFilter and  ReflectionExtractor API giving incorrect results

    I am using Oracle Coherence version 3.7. We are storing DTO objects in the cache that have a "modificationTime" property/instance variable of java.util.Date type. To fetch data from the cache, passing a java.util.Date variable as input for comparison, the LessFilter and ReflectionExtractor APIs are used. Cache.entrySet(filter) returns incorrect results.
    Note: we are using the "com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1)" API to store data in the cache and "com.tangosol.io.pof.PofReader.readDate(int arg0)" to read data from the cache. Is there no readDateTime API available?
    We tested the same scenario after updating the DTO class: it now has an additional property of type long (to store milliseconds). When the long is passed as input for comparison to the LessFilter and ReflectionExtractor APIs, correct results are retrieved.
    Ideally, a java.util.Date or the corresponding milliseconds passed as input should filter and return the same, logically correct results.
    Code:
    1) Test by Date: returns incorrect results
    public void testbyDate(final Date startDate) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
        LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
        final Filter lessFilter = new LessFilter(extractor, startDate);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }
    2) Test by milliseconds: returns correct results
    public void testbyTime(final Long time) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
        LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
        final Filter lessFilter = new LessFilter(extractor, time);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }

    Hi Harvy,
    Thanks for your reply. You validated it against a single object in cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary. But we are querying against a collection of objects in cache.
    Please have a look at below code.
    1) We are using TestDTO.java, extending AbstractCacheDTO.java, as the value object for our cache.
    import java.io.IOException;
    import java.util.Date;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * The Class AbstractCacheDTO.
     *
     * @param <E> the element type
     * @author apanwa
     */
    public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {

        /** The Constant IDENTIFIER. */
        private static final int IDENTIFIER = 0;

        /** The Constant CREATION_TIME. */
        private static final int CREATION_TIME = 1;

        /** The Constant MODIFICATION_TIME. */
        private static final int MODIFICATION_TIME = 2;

        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11662;

        /** The id. */
        private E id;

        /** The creation time. */
        private Date creationTime = new Date();

        /** The modification time. */
        private Date modificationTime;

        /**
         * Gets the id.
         *
         * @return the id
         */
        public E getId() {
            return id;
        }

        /**
         * Sets the id.
         *
         * @param id the new id
         */
        public void setId(final E id) {
            this.id = id;
        }

        /**
         * Gets the creation time.
         *
         * @return the creation time
         */
        public Date getCreationTime() {
            return creationTime;
        }

        /**
         * Gets the modification time.
         *
         * @return the modification time
         */
        public Date getModificationTime() {
            return modificationTime;
        }

        /**
         * Sets the modification time.
         *
         * @param modificationTime the new modification time
         */
        public void setModificationTime(final Date modificationTime) {
            this.modificationTime = modificationTime;
        }

        /**
         * Read external.
         *
         * @param reader the reader
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader)
         */
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            id = (E) reader.readObject(IDENTIFIER);
            creationTime = reader.readDate(CREATION_TIME);
            modificationTime = reader.readDate(MODIFICATION_TIME);
        }

        /**
         * Write external.
         *
         * @param writer the writer
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter)
         */
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            writer.writeObject(IDENTIFIER, id);
            writer.writeDateTime(CREATION_TIME, creationTime);
            writer.writeDateTime(MODIFICATION_TIME, modificationTime);
        }

        @Override
        public int getImplVersion() {
            return VERSION;
        }
    }
    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * @author nkhatw
     */
    public class TestDTO extends AbstractCacheDTO<TestIdentifier> {

        private Long timeinMillis;

        private static final int TIME_MILLIS_ID = 3;

        @Override
        public void readExternal(final PofReader reader) throws IOException {
            super.readExternal(reader);
            timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
        }

        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            super.writeExternal(writer);
            writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
        }

        /**
         * @return the timeinMillis
         */
        public Long getTimeinMillis() {
            return timeinMillis;
        }

        /**
         * @param timeinMillis the timeinMillis to set
         */
        public void setTimeinMillis(final Long timeinMillis) {
            this.timeinMillis = timeinMillis;
        }
    }

    2) TestIdentifier.java as the key in the cache for storing TestDTO objects.
    import java.io.IOException;
    import org.apache.commons.lang.StringUtils;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * @author nkhatw
     */
    public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {

        private String recordId;

        /** The Constant RECORD_ID. */
        private static final int RECORD_ID = 0;

        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11660;

        @Override
        public void readExternal(final PofReader pofreader) throws IOException {
            recordId = pofreader.readString(RECORD_ID);
        }

        @Override
        public void writeExternal(final PofWriter pofwriter) throws IOException {
            pofwriter.writeString(RECORD_ID, recordId);
        }

        @Override
        public int getImplVersion() {
            return VERSION;
        }

        @Override
        public boolean equals(final Object object) {
            if (object instanceof TestIdentifier) {
                final TestIdentifier id = (TestIdentifier) object;
                return StringUtils.equals(recordId, id.getRecordId());
            } else {
                return false;
            }
        }

        /**
         * @see java.lang.Object#hashCode()
         */
        @Override
        public int hashCode() {
            return recordId.hashCode();
        }

        /**
         * @return the recordId
         */
        public String getRecordId() {
            return recordId;
        }

        /**
         * @param recordId the recordId to set
         */
        public void setRecordId(final String recordId) {
            this.recordId = recordId;
        }
    }

    3) Use Case
    We are fetching TestDTO records from the cache based on a LessFilter. However, the results returned from the cache differ depending on whether the query is made over property "getModificationTime" of type java.util.Date or over property "getTimeinMillis" of type Long (the milliseconds corresponding to the date). TestService.java is used for this.
    import java.io.IOException;
    import java.util.Collection;
    import java.util.Date;
    import java.util.Map;
    import java.util.Set;
    import org.apache.log4j.Logger;
    import com.ladbrokes.dtos.cache.TestDTO;
    import com.ladbrokes.dtos.cache.TestIdentifier;
    import com.cache.services.CacheService;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.LessFilter;

    /**
     * @author nkhatw
     */
    public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {

        private static final String TEST_CACHE = "testcache";
        private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
        private static final Logger LOGGER = Logger.getLogger(TestService.class);

        /**
         * Push DTO objects with a) modTime of java.util.Date type, b) timeInMillis of Long type.
         *
         * @throws IOException
         */
        public void init() throws IOException {
            for (int i = 0; i < 30; i++) {
                final TestDTO dto = new TestDTO();
                final Date modTime = new Date();
                dto.setModificationTime(modTime);
                final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
                dto.setTimeinMillis(timeInMillis);
                final TestIdentifier testId = new TestIdentifier();
                testId.setRecordId(String.valueOf(i));
                dto.setId(testId);
                final CacheService testService = new TestService();
                testService.createOrUpdate(dto, null);
                LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime + " Time in millis: "
                    + timeInMillis);
            }
        }

        /**
         * 1) Fetch data from the cache based on LessFilter with args:
         *    a) ValueExtractor: extracting the time property
         *    b) a java.util.Date value to be compared with
         * 2) Verify the extracted entry set.
         *
         * @throws IOException
         */
        public void testbyDate(final Date startDate) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
            LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
            final Filter lessFilter = new LessFilter(extractor, startDate);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }

        /**
         * 1) Fetch data from the cache based on LessFilter with args:
         *    a) ValueExtractor: extracting the "time in millis" property
         *    b) a java.lang.Long value to be compared with
         * 2) Verify the extracted entry set.
         */
        public void testbyTime(final Long time) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
            LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
            final Filter lessFilter = new LessFilter(extractor, time);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }

        @Override
        public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
            CACHE.put(testDTO.getId(), testDTO);
        }

        @Override
        public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
            // TODO Auto-generated method stub
        }

        @Override
        public <G> G read(final TestIdentifier arg0) throws IOException {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public void remove(final TestDTO arg0) throws IOException {
            // TODO Auto-generated method stub
        }
    }
    Use Case execution Results:
    "testbyTime" method returns correct results.
    However, "testbyDate" method gives random and incorrect results.

  • Does compare aggregates mode produce incorrect results?

    Has anyone encountered a problem with using compare aggregates mode with arrays?
    For example, if compare aggregates mode is selected when using the "In Range and Coerce.vi" with the inputs (upperLimit, lowerLimit, and x value) being arrays of integers, then compare aggregates does not return correct results. I've also noticed this with the greater-than and less-than comparison VIs.
    I've attached a sample which further illustrates the incorrect results.
    Attachments:
    compareAggregatesTest.vi 9 KB
    compareAggregatesTest1.vi 9 KB

    I talked to some people at NI and here's how I understand it:
    Compare aggregates simply does not do what we think it does. It is NOT the same as comparing all elements and then ANDing the results. Instead, it compares the elements in the cluster in order. This is actually identical to ANDing the results when you do an equality comparison, but it's different if you do a less-than or greater-than comparison.
    The LabVIEW help provides the example of a phone book, where "Smith, John" is greater than "Smith, Jane" and where "Smith, Jane" is also greater than "Doe, John" because Doe comes before Smith.
    This helps to explain the results of my example:
    In the first array element, the comparison fails because 10 is the first element in the cluster and it is less than 40.
    In the second array element, 40 and 40 are equal, so the decision moves to the next element (like having two "Smith"s), and since 40 is greater than 30, the comparison returns true.
    So again, the order is important!
    Try to take over the world!
    Attachments:
    Compare Aggregates.png 16 KB

  • Rownum giving Incorrect Result in 11gR2 but working ok in 10gR2

    Hi All,
    We have the following query, which works fine in 10g, but in 11g it shows an incorrect result.
    select x.*, rownum
    from (select rat.rating_agency_id
          from bus_ca_cpty_rating rat, MST_CP_RATING mst
          where rat.org_id=618
          and rat.rating_agency_id=mst.rating_agency_id
          and rat.rating_value=mst.rating_value
          and rat.heritage_system=mst.heritage_system
          order by rat.rating_date, rat.rating_time) x
    where rownum=1;
    Result without the last check (where rownum=1) in the query (in both 10g and 11g):
    RATING_AGENCY_ID     ROWNUM
    3     1
    1     2
    Result of the query in 11gR2 (11.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    1     1
    Result of the query in 10gR2 (10.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    3     1
    Request your help to resolve the issue (please tell me the bug number if it is a bug), and please let me know how 11g is processing the query.
    Edited by: 906061 on Jun 19, 2012 2:22 AM

    T.PD wrote:
    906061 wrote:
    Result Without last Check <where rownum=1> in the query (in both 10g and 11g)
    RATING_AGENCY_ID     ROWNUM
    3     1
    1     2
    Result of the query in 11gR2 (11.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    1     1
    Result of the query in 10gR2 (10.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    3     1
    Your desired result depends on the wrong idea of implicit ordering of the results. There is no such thing!
    The database does not sort returned rows in any way (unless you use ORDER BY in your query). The order of returned rows may be consistent over a long period, but if the table contents are reorganized or (as I assume) you import the data into another database, the order may change.
    To make a long story short: you need another filter condition than rownum = 1.
    bye
    TPD

    Look closely: it looks like a standard top-n query with the ORDER BY in the sub-query.
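    For what it's worth, a rewrite that does not depend on how the optimizer merges ROWNUM into the view is the analytic top-n form (a sketch against the tables from the original post; untested):
    select rating_agency_id
    from (select rat.rating_agency_id,
                 row_number() over (order by rat.rating_date, rat.rating_time) rn
          from bus_ca_cpty_rating rat, MST_CP_RATING mst
          where rat.org_id=618
          and rat.rating_agency_id=mst.rating_agency_id
          and rat.rating_value=mst.rating_value
          and rat.heritage_system=mst.heritage_system)
    where rn=1;
    Note that it still needs a tie-breaker in the ORDER BY if two rows can share the same rating_date and rating_time.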

  • Incorrect result being returned for a formula

    I'm getting an incorrect result for a simple formula. Changing the value of Rs does not affect the output correctly. I am new to LabVIEW and could use some help. VI attached.
    Attachments:
    testformula.vi 21 KB

    I'm getting an incorrect result for a simple formula. Changing the value of Rs does not affect the output correctly. I am new to LabVIEW and could use some help. VI attached.
    Input values used:
    Rp2 = -131.763
    Rs2 = 0.321
    Isc2= 8.21
    Vmp= 26.3
    Imp= 7.61
    Io2= 9.735E-8
    exp2= 8.404E+8
    Output being shown as:
    Pmax3 = 200.143
    It should be Pmax3 = 192.688
    Attachments:
    testformula.vi 22 KB

Maybe you are looking for

  • Can't install Lion (Windows XP and Parallells 6 being used)

    Hi, This is my first post here, so please forgive me if I'm not too used to the forum habits. I've been checking the discussions before, but haven't been able to find anything that matched my case... Let me explain it to you, so that hopefully somebo

  • R/3 4.6B Installation Problem

    Dear all, I have installation problem when system carry out the final step "Initializing the Workbench Organizer with RFC" after I click Continue to skip importing non-latin language. the error message is showing below INFO 2009-01-22 11:28:00     St

  • Continuous WebJob running but not processing

    Greetings all, I have an Azure WebJob that runs continuously. It detects a file uploaded to a specific folder on my website, and then transfers it to an azure storage container, which is also being listened to. When the container detects a new file,

  • Please help my desktop images, wallpapers gone.

    Please community i need your help. I restored my computer from Lion to lion, before i had Leopard and i had all my favority desktop wallpapers, for example aurora , earth horizon , earth etc..      When i installed for the first time Lion , i got eve

  • Disscussion abt web servises

    Hello every body, IF I am using axis and I have deployed a web service implemented in java using jax-rpc soap. Do I explicitly need wsdl files for java client to communicate with the java server If you can guide me what I need to do step by step it w