Incorrect result in query

Hi all,
I have loaded data from PSA into a DSO and can see all the data correctly. The DSO is part of a MultiProvider, on which I have created a query. When I run the query, only 2 columns display correctly; if I include any other key figure in the query definition, I see no output at all. Also, the report output is aggregated, unlike how the data exists in the DSO. Do I need to change any settings? Please help.
SD
Points will be awarded.

The query output will be aggregated; it is aggregated based on what you drill down on.
I think only those two key figures are coming from that DSO. Check the identification of the key figures that are not populated in the MultiProvider.
Also check whether there are any conditions in the query. The fields used in the conditions should be present in the DSO.
Hope this helps.

Similar Messages

  • Incorrect results in query

    Hi All,
I've created a calculated key figure (CKF) named 'Asset Cost', which is a sum of 6 KFs.
I am using it in my query as a 'New selection' with a restriction based on a variable on 'Posting Period'. That means if I give the posting period value '8', I get the results for that posting period only for this calculated KF, not the sum of all posting periods.
Now the issue is that once the users started querying, the result appeared doubled to them, even though the query definition remained the same, i.e. no changes were made to the CKF etc.
If I regenerate the query in RSRT, or make some redundant change to the query (e.g. take a characteristic out and put it back in), the query once again gives the correct results.
Can anyone tell me how the query results got doubled? This has happened again and again, and I have had to regenerate the queries each time. Full points to be awarded.
Note that the users are using this calculated KF in other queries, with or without restrictions on posting period.
    Thanks in advance,
    Ajay

The CKF being used in other queries doesn't matter at all.
I don't fully agree with the point about the load and the aggregates and so on: how frequently is your data target loaded? Is it a MultiProvider? Does it have aggregates?
Why don't you leave "Calculate result as" set to "Nothing Defined"?
For the cache:
what is the cache mode of your query? See RSRT / properties...
You can reset the cache of this query just by resaving its definition.
So:
- save your query
- run it and check the result: is it correct?
- if not, drill down some characteristics and find out which one makes it correct.
Please let me know if you still have the issue.
    regards,
    Olivier.

  • BEx Query Providing Incorrect Results

    I have two BEx queries that are behaving strangely.
    They are providing incorrect results and I can't figure out why.
In both cases, when I save the query in development under a different technical name, the problem disappears. I can't use this as a permanent solution because users have the queries embedded in workbooks and we have portal links.
Is there a way to fix these queries that are acting strangely?
    Query 1:
When running the query, I get the error message
"No roll storage space of length 120 available for OCCURS area"
Query 2: Calculates a wrong average.
When running the query:
Cumulative Qty Customer Balance and Quantity Customer Movement are added together.
Aggregation is set to Average, with reference characteristic Posting Period.
The average BEx calculates only considers Cumulative Quantity End Customer Balance. It should consider Quantity Customer Movement as well.
Avg formula: ((Jan End - (.5 * Jan Mov)) + (Feb End - (.5 * Feb Mov)) + (Mar End - (.5 * March Mov))) / 3
I have used the same methodology to calculate averages in other queries, and the average calculates correctly there. Those other queries are on the same InfoProvider and use the same dimensions as the queries that calculate the average incorrectly.
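For reference, the intended calculation can be written as a plain SQL sketch (the table and column names here are invented for illustration, not taken from the query):

select ( (jan_end - 0.5 * jan_mov)
       + (feb_end - 0.5 * feb_mov)
       + (mar_end - 0.5 * mar_mov) ) / 3 as avg_balance
from monthly_customer_balance;  -- hypothetical table, one row per customer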

    Hi Mti
    Thank you for your reply.
    I have checked the details as per your suggestion:
1. The KF Aggregation tab in the Workbench shows Exception Aggregation as SUM (Summation).
2. The RKF in the query is set to (Nothing Defined).
    Is there anything else that can be checked?
    Thank you

  • Query with Cost Center Hierarchy giving incorrect results

    Hi All,
I have a universe built on a BEx query on the Cost Center cubes. When I enable the hierarchy in the BEx query and build a Web Intelligence report on the universe, I get incorrect results: the levels of the hierarchy are wrong, many of the cost centers are missing, etc. I checked the universe and confirmed that all levels of the hierarchy are generated correctly. The LOVs generated for these levels are correct, and I see the complete hierarchy when using the BEx variable in the universe for filtering.
I tried the same query with the hierarchy disabled, in a different universe, and it gives correct results. Not sure what I'm missing here. Any input on this is appreciated.
    Thanks & Regards,
    Sree

Ingo, thanks for your suggestion. Of course, I did update the universe after any change in the query. I tried different query settings related to the hierarchy to make it work, but it didn't make any difference, and I consistently get incorrect results.
One thing I wanted to confirm is whether there is any known bug related to hierarchies in SP2 Fix Pack 2.7. If not, it is probably me doing something wrong, and I will look into it in more detail.
    Thanks & Regards,
    Sree

Incorrect result between maintained master data and BEx query, how can I fix it?

    Hi ALL,
I got some messages from the users that there is an incorrect result between SAP R/3 and a report on BW. I checked the monitor and saw there was a job for 0CUSTOMER_ATTRIBUTE that finished correctly, but the data had only been processed into the PSA. I immediately started a full update from the PSA into the data targets, and it finished correctly. Afterwards, when I check the content of 0CUSTOMER (right click > Maintain master data), I get the correct attribute values matching the data in SAP R/3, but when I execute a BEx query on this master data, it does not return the same attribute data.
Can somebody help, please?
    Bilal

    hi,
For any master data attributes loaded, you have to run an "Attribute Change Run" afterwards. Execute it for the master data 0CUSTOMER.
It is available in RSA1 -> Tools (top menu) -> Apply Hierarchy/Attribute Change.
    hope it helps,
    regards,
    Parth.

  • Oracle Discoverer report pulls incorrect result when scheduled.

Recently the database was migrated from 9.2.0.6 to 10.1.2 RAC, so the Discoverer EUL now resides on the new database.
Since the migration, a report that pulls correct results when run interactively pulls incorrect results when scheduled in Discoverer.
The report uses SYSDATE and aggregate functions. I ran the same report simultaneously (directly in Discoverer Desktop/Plus, and scheduled in Discoverer), but the data retrieved in the two cases does not match.
    here is the query. any help is appreciated.
    SELECT /*+ FIRST_ROWS */ A.SITE_ID as E175108,B."SYSTEM DESCRIPTION" as System_Prefix,
    B."SYSTEM PREFIX" as System_Description,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) < 0 THEN 1 ELSE TO_NUMBER(NULL) END) as Less_than_0_Days,
COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) > 121 THEN 1 ELSE TO_NUMBER(NULL) END) as "0_to_14_Days",
COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),3,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "14_to_30_Days",
COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),2,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "31_to_60_Days",
COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),1,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "61_to_90_Days",
COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 15 AND 30 THEN 1 ELSE TO_NUMBER(NULL) END) as "91_to_120_Days",
COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 0 AND 14 THEN 1 ELSE TO_NUMBER(NULL) END) as "120_Days_Plus",
    COUNT(TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE)) as Total
    FROM PSTAGE.ALL_EQUIPMENT A,
    ( SELECT A.SITE "SYSTEM PREFIX", A.DESCRIPTION "SYSTEM DESCRIPTION", A.SITE_ID, B.SITE_DESCRIPTION, A.G2B_ID
    FROM SITE_LIST A, ALL_CF_SITE_CONTROL B
    WHERE A.SITE_ID = B.SITE_ID
    ORDER BY 1, 3
    ) B
    WHERE ( (B.SITE_ID = A.SITE_ID))
    AND (A.EQUIPMENT_STATUS_CODE IN ('T','7'))
    GROUP BY A.SITE_ID,B."SYSTEM DESCRIPTION",B."SYSTEM PREFIX"
    ORDER BY B."SYSTEM DESCRIPTION" ASC ;
    Thanks!

    Hi sunil,
Rod is referring to the NLS parameters.
"Can you please let me know which NLS parameters you are referring to" - the NLS parameters in this scenario would be the date and language settings for that session. Do check out:
SELECT * FROM NLS_SESSION_PARAMETERS
"how can I check if there are any differences in the NLS parameters when the report is scheduled or run interactively" - I think you should run a trace file; I am not sure about it.
It would be SYS_CONTEXT.
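As a sketch, the relevant session settings can also be read via SYS_CONTEXT, which makes it easy to log them from both the interactive and the scheduled run and compare:

-- Compare these values between the interactive and the scheduled run.
SELECT SYS_CONTEXT('USERENV', 'NLS_DATE_FORMAT')   AS nls_date_format,
       SYS_CONTEXT('USERENV', 'NLS_TERRITORY')     AS nls_territory,
       SYS_CONTEXT('USERENV', 'NLS_DATE_LANGUAGE') AS nls_date_language
FROM DUAL;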
    Hope it helps you.
    Kranthi.

  • Subtracting two numbers in double format gives incorrect result

    Hello,
I have a table with two fields of type Number (Field Size: Double; Decimal Places: Auto). When I try to subtract one field from the other in a query, I get what look like incorrect results:
Field1              Field2              Result
2.60299223923846    2.60259423701324    3.98002225218796E-04
0.644498511499839   0.645908903902985   -1.41039240314556E-03
1.51021791504783    1.51372591514976    -3.50800010193808E-03
    When I paste the above into Excel, I get the correct results:
Field1              Field2              Result
2.60299223923846    2.60259423701324    0.00039800222521880
0.64449851149984    0.64590890390299    (0.00141039240314556)
1.51021791504783    1.51372591514976    (0.00350800010193808)
I would much appreciate any help on how to get the correct values in an Access query.
    Thank you

    Hi Vasilii,
In my opinion the results are correct; they are just presented in scientific notation. You can apply some formatting to display the results the way you want.
    See the Help on the use of Format.
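For instance, a minimal sketch in Access SQL (the table name MyTable is hypothetical):

SELECT Field1, Field2,
       Format(Field1 - Field2, "0.00000000000000000") AS Result
FROM MyTable;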
    Imb.

BI standard query giving incorrect results

    Hi all,
I am working on FI and have activated the BI Content for FI-AR, FI-GL and FI-AP.
I have loaded the data successfully, and it reconciles with R/3 at the data target level, i.e. cube 0FIAR_C05.
But when I run the standard query 0FIAR_C05_Q0001, I get incorrect results:
Cube:
customer | num of payments | payment amt
426      | 2               | 10,000
Query result:
426      | 2               | 100,000
This looks like a scaling error, but I can't understand why a standard query has this issue.
Is this a bug?
I know we can adjust the scaling factor, but this will cause other issues...
Can someone advise me on this problem?
    Thanks in advance
    CG

    Hi Praveen,
The scaling factor in the query says "From Key Figure",
and in the properties of the KF it says scaling factor 1.
But this came as standard content, and I can't see the proper reason.
Thanks for your quick reply.

  • SQL Server 2014 - Columnstore Incorrect Results

    Hello,
we are running into a problem with SQL Server 2014 and the columnstore index. We have a partitioned table with about 300 million records in it. With SQL Server 2012 this was in use without problems.
Since we upgraded to SQL Server 2014, the exact same queries on exactly the same data return incorrect results. We can only bypass the problem by either dropping the columnstore index or adding a MAXDOP = 1 query hint.
I thought this was an old bug from SQL Server 2012? We have not installed CU4 for SQL Server 2014 yet, but will it solve the problem (assuming others have faced the same problem)?
    We are running: Microsoft SQL Server 2014 Enterprise Edition - 12.0.2000.8 (X64) on a 2x6Core Machine
    Thanks in advance!
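For reference, the hint workaround mentioned above looks like this (a minimal sketch; the table and column names are invented for illustration):

-- Forcing serial execution avoids the parallel batch-mode path that
-- appears to return wrong results against the columnstore index.
SELECT SUM(amount)
FROM dbo.FactSales
WHERE sale_date >= '2014-01-01'
OPTION (MAXDOP 1);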

SQL Server 2012 only featured nonclustered columnstore indexes, which are separate structures. Have you changed to a clustered columnstore (CS) index in SQL Server 2014 (i.e. dropped your nonclustered CS index and created a clustered one)?
There are a number of fixes that reference columnstore indexes in the current CUs (CU1, CU2, CU3, CU4), but none that sound exactly like your problem.
One reported issue sounds similar and is fixed in CU1. You should review the CU documents yourself to see if any of them mention a similar problem, and then consider applying the CU. You might also try applying them to a test environment, or a temporary Azure VM for example, to see if one of them solves your problem.
If you can create a reliable repro of the problem, consider raising a Connect item, which is a Microsoft bug report.

  • LessFilter and  ReflectionExtractor API giving incorrect results

I am using Oracle Coherence version 3.7. We are storing DTO objects in the cache, with a "modificationTime" property of type java.util.Date. To fetch data from the cache, passing a java.util.Date variable as input for comparison, the LessFilter and ReflectionExtractor APIs are used. Cache.entrySet(filter) returns incorrect results.
Note: we are using the com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1) API to store the data in the cache and com.tangosol.io.pof.PofReader.readDate(int arg0) to read it back. There is no readDateTime API available?
We tested the same scenario with an updated DTO class. It now has another property of type long (to store the milliseconds). When the long is passed as input for comparison to LessFilter and ReflectionExtractor, the correct results are retrieved.
Ideally, a java.util.Date or the corresponding milliseconds passed as input should filter and return the same, logically correct results.
    Code:
    1) Test by Date: returns incorrect results
public void testbyDate(final Date startDate) throws IOException {
    final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
    LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
    final Filter lessFilter = new LessFilter(extractor, startDate);
    final Set results = CACHE.entrySet(lessFilter);
    LOGGER.debug("Fetched Records:" + results.size());
    assert results.isEmpty();
}
    2) Test by milliseconds: returns correct results
public void testbyTime(final Long time) throws IOException {
    final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
    LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
    final Filter lessFilter = new LessFilter(extractor, time);
    final Set results = CACHE.entrySet(lessFilter);
    LOGGER.debug("Fetched Records:" + results.size());
    assert results.isEmpty();
}

    Hi Harvy,
    Thanks for your reply. You validated it against a single object in cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary. But we are querying against a collection of objects in cache.
    Please have a look at below code.
*1)* We are using TestDTO.java, extending AbstractCacheDTO.java, as the value object for our cache.
    import java.io.IOException;
    import java.util.Date;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
/**
 * The Class AbstractCacheDTO.
 *
 * @param <E> the element type
 * @author apanwa
 */
public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {

    /** The Constant IDENTIFIER. */
    private static final int IDENTIFIER = 0;

    /** The Constant CREATION_TIME. */
    private static final int CREATION_TIME = 1;

    /** The Constant MODIFICATION_TIME. */
    private static final int MODIFICATION_TIME = 2;

    /** The version number of the cache DTO implementation. */
    private static final int VERSION = 11662;

    /** The id. */
    private E id;

    /** The creation time. */
    private Date creationTime = new Date();

    /** The modification time. */
    private Date modificationTime;

    /** Gets the id. @return the id */
    public E getId() {
        return id;
    }

    /** Sets the id. @param id the new id */
    public void setId(final E id) {
        this.id = id;
    }

    /** Gets the creation time. @return the creation time */
    public Date getCreationTime() {
        return creationTime;
    }

    /** Gets the modification time. @return the modification time */
    public Date getModificationTime() {
        return modificationTime;
    }

    /** Sets the modification time. @param modificationTime the new modification time */
    public void setModificationTime(final Date modificationTime) {
        this.modificationTime = modificationTime;
    }

    /** @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader) */
    @Override
    public void readExternal(final PofReader reader) throws IOException {
        id = (E) reader.readObject(IDENTIFIER);
        creationTime = reader.readDate(CREATION_TIME);
        modificationTime = reader.readDate(MODIFICATION_TIME);
    }

    /** @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter) */
    @Override
    public void writeExternal(final PofWriter writer) throws IOException {
        writer.writeObject(IDENTIFIER, id);
        writer.writeDateTime(CREATION_TIME, creationTime);
        writer.writeDateTime(MODIFICATION_TIME, modificationTime);
    }

    @Override
    public int getImplVersion() {
        return VERSION;
    }
}
import java.io.IOException;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;

/** @author nkhatw */
public class TestDTO extends AbstractCacheDTO<TestIdentifier> {

    private Long timeinMillis;
    private static final int TIME_MILLIS_ID = 3;

    @Override
    public void readExternal(final PofReader reader) throws IOException {
        super.readExternal(reader);
        timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
    }

    @Override
    public void writeExternal(final PofWriter writer) throws IOException {
        super.writeExternal(writer);
        writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
    }

    /** @return the timeinMillis */
    public Long getTimeinMillis() {
        return timeinMillis;
    }

    /** @param timeinMillis the timeinMillis to set */
    public void setTimeinMillis(final Long timeinMillis) {
        this.timeinMillis = timeinMillis;
    }
}

*2)* TestIdentifier.java as the key in the cache for storing TestDTO objects.
import java.io.IOException;
import org.apache.commons.lang.StringUtils;
import com.tangosol.io.AbstractEvolvable;
import com.tangosol.io.pof.EvolvablePortableObject;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;

/** @author nkhatw */
public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {

    private String recordId;

    /** The Constant RECORD_ID. */
    private static final int RECORD_ID = 0;

    /** The version number of the cache DTO implementation. */
    private static final int VERSION = 11660;

    @Override
    public void readExternal(final PofReader pofreader) throws IOException {
        recordId = pofreader.readString(RECORD_ID);
    }

    @Override
    public void writeExternal(final PofWriter pofwriter) throws IOException {
        pofwriter.writeString(RECORD_ID, recordId);
    }

    @Override
    public int getImplVersion() {
        return VERSION;
    }

    @Override
    public boolean equals(final Object object) {
        if (object instanceof TestIdentifier) {
            final TestIdentifier id = (TestIdentifier) object;
            return StringUtils.equals(recordId, id.getRecordId());
        } else {
            return false;
        }
    }

    /** @see java.lang.Object#hashCode() */
    @Override
    public int hashCode() {
        return recordId.hashCode();
    }

    /** @return the recordId */
    public String getRecordId() {
        return recordId;
    }

    /** @param recordId the recordId to set */
    public void setRecordId(final String recordId) {
        this.recordId = recordId;
    }
}

*3) Use Case*
We are fetching TestDTO records from the cache based on a LessFilter. However, the results returned from the cache differ depending on whether the query is made over the property "getModificationTime" of type java.util.Date or over the property "getTimeinMillis" of type Long (the milliseconds corresponding to the date). TestService.java is used for this.
import java.io.IOException;
import java.util.Collection;
import java.util.Date;
import java.util.Map;
import java.util.Set;
import org.apache.log4j.Logger;
import com.ladbrokes.dtos.cache.TestDTO;
import com.ladbrokes.dtos.cache.TestIdentifier;
import com.cache.services.CacheService;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.ValueExtractor;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.LessFilter;

/** @author nkhatw */
public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {

    private static final String TEST_CACHE = "testcache";
    private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
    private static final Logger LOGGER = Logger.getLogger(TestService.class);

    /**
     * Push DTO objects with a) modTime of java.util.Date type, b) timeInMillis of Long type.
     */
    public void init() throws IOException {
        for (int i = 0; i < 30; i++) {
            final TestDTO dto = new TestDTO();
            final Date modTime = new Date();
            dto.setModificationTime(modTime);
            final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
            dto.setTimeinMillis(timeInMillis);
            final TestIdentifier testId = new TestIdentifier();
            testId.setRecordId(String.valueOf(i));
            dto.setId(testId);
            final CacheService testService = new TestService();
            testService.createOrUpdate(dto, null);
            LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime
                + " Time in millis: " + timeInMillis);
        }
    }

    /**
     * 1) Fetch data from the cache based on a LessFilter with args:
     *    a) a ValueExtractor extracting the time property,
     *    b) a java.util.Date value to be compared with.
     * 2) Verify the extracted entry set.
     */
    public void testbyDate(final Date startDate) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
        LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
        final Filter lessFilter = new LessFilter(extractor, startDate);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }

    /**
     * 1) Fetch data from the cache based on a LessFilter with args:
     *    a) a ValueExtractor extracting the "time in millis" property,
     *    b) a java.lang.Long value to be compared with.
     * 2) Verify the extracted entry set.
     */
    public void testbyTime(final Long time) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
        LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
        final Filter lessFilter = new LessFilter(extractor, time);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }

    @Override
    public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
        CACHE.put(testDTO.getId(), testDTO);
    }

    @Override
    public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
        // TODO Auto-generated method stub
    }

    @Override
    public <G> G read(final TestIdentifier arg0) throws IOException {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void remove(final TestDTO arg0) throws IOException {
        // TODO Auto-generated method stub
    }
}
    Use Case execution Results:
    "testbyTime" method returns correct results.
    However, "testbyDate" method gives random and incorrect results.

  • Rownum giving Incorrect  Result in 11gR2 but working ok in 10gR2

    Hi All,
We have the following query, which works fine in 10g, but in 11g it shows an incorrect result.
select x.*, rownum
from (select rat.rating_agency_id
      from bus_ca_cpty_rating rat, MST_CP_RATING mst
      where rat.org_id = 618
      and rat.rating_agency_id = mst.rating_agency_id
      and rat.rating_value = mst.rating_value
      and rat.heritage_system = mst.heritage_system
      order by rat.rating_date, rat.rating_time) x
where rownum = 1;
    Result Without last Check <where rownum=1> in the query (in both 10g and 11g)
    RATING_AGENCY_ID     ROWNUM
    3     1
    1     2
    Result of the query in 11gR2 (11.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    1     1
    Result of the query in 10gR2 (10.2.0.3)
    RATING_AGENCY_ID     ROWNUM
    3     1
Requesting your help to resolve the issue (please tell me the bug number if it is a bug), and please let me know how 11g is processing the query.
    Edited by: 906061 on Jun 19, 2012 2:22 AM

T.PD wrote:
906061 wrote:
Result without the last check <where rownum=1> in the query (in both 10g and 11g)
RATING_AGENCY_ID     ROWNUM
3     1
1     2
Result of the query in 11gR2 (11.2.0.3)
RATING_AGENCY_ID     ROWNUM
1     1
Result of the query in 10gR2 (10.2.0.3)
RATING_AGENCY_ID     ROWNUM
3     1
Your desired result depends on the wrong assumption of implicit ordering of the results. There is no such thing!
The database does not sort returned rows in any way (unless you use ORDER BY in your query). The order of returned rows may be consistent over a long period, but if the table contents are reorganized or (as I assume) you import the data into another database, the order may change.
To make a long story short: you need another filter condition than rownum = 1.
bye
TPD

Look closely: it looks like a standard top-n query, with the ORDER BY in the sub-query.
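For what it's worth, the same top-1 intent can be written with ROW_NUMBER instead of ROWNUM, which does not rely on how the optimizer merges the ordered sub-query (a sketch against the same tables as the posted query):

select rating_agency_id
from (select rat.rating_agency_id,
             row_number() over (order by rat.rating_date, rat.rating_time) rn
      from bus_ca_cpty_rating rat, MST_CP_RATING mst
      where rat.org_id = 618
      and rat.rating_agency_id = mst.rating_agency_id
      and rat.rating_value = mst.rating_value
      and rat.heritage_system = mst.heritage_system)
where rn = 1;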

Formula displaying incorrect result at hierarchy node level

    Hi Experts,
I have created a query which uses a simple formula: Amount = Price * Quantity.
The query uses a hierarchy on Material.
When I drill down to the lowest level (leaf node), the formula shows the correct result.
But at all the node levels it displays an incorrect result.
Example:
Material 1: Rate = 4, Quantity = 10
Material 2: Rate = 3, Quantity = 10
In the hierarchy, both these materials are under the same parent node, say Node 1.
    The query output is
    Material                Amount
    Node 1                140   (7 * 20)
        Material 1          40   (4 * 10)
        Material 2          30   (3 * 10)    
    It should be
    Material                Amount
    Node 1                 70
        Material 1         40
        Material 2         30
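The node is showing the product of the sums, (4 + 3) * (10 + 10) = 140, instead of the sum of the leaf-level products, 4*10 + 3*10 = 70. In SQL terms, the desired node value would be computed like this (a sketch; the table and column names are invented for illustration):

select node,
       sum(rate * quantity) as amount  -- 4*10 + 3*10 = 70, not (4+3)*(10+10) = 140
from material_leaf
group by node;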
    Please suggest how can this be achieved.
    Regards
    SSS

    Hi SSS,
Have you been able to resolve this? I am also facing the same issue. If you have resolved it, please reply.
    Regards,
    Tapan

  • KKA2 incorrect results

    Hi,
Based on SAP Note 38070, we have configured new line IDs in our landscape. We have found that if a new WBS is created and simulated in KKA2, the results are correct, whereas for already existing WBS elements we get incorrect results in the KKA2 simulation. The new line IDs point to both the old and the new WBS. Is this note only for new WBS?
    Please help as this is show stopper for our business.
    Kind Regards,
    Kalyan

    Here's an example of what I'm talking about.
This first query compares a simple geometry with a second one, which is defined as almost the full geographic extent (-179, 179, -89, 89). Nearly every possible geometry will interact with this in some way. However, the result I get is 'DISJOINT'.
    SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
                'DETERMINE',
                MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(-179,   89,   -179,  -89,  179, -89,  179,  89,  -179,  89)), '0.005')
          from DUAL;

If I make the second geometry smaller so that it starts at 0, i.e. (0, 179, -89, 89), then I correctly get the result 'INSIDE'.
    SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
                'DETERMINE',
                MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(0,   89,  0,  -89,  179, -89,  179,  89,  0,  89)), '0.005')
            from DUAL;

It would be ideal if someone could confirm or deny this behaviour on a fully patched 10g or even 11g database.

  • Inconsistent SDO_RELATE results when querying 2.5D data

    Oracle 11.1.0.7 with Patch 8343061 on Windows Server 2003 32bit.
I'm getting inconsistent results from SDO_RELATE when querying 2.5D data. Some geometries that I expect to be OVERLAPBDYDISJOINT are not always returned by SDO_RELATE when using the OVERLAPBDYDISJOINT mask. It seems that the order of the tables makes a difference to the result.
    Here's a table with one 2.5D geometry and a 2D index:
    CREATE TABLE TEST1 (
    ID                NUMBER PRIMARY KEY,
    GEOMETRY     SDO_GEOMETRY);
    INSERT INTO TEST1 (id, geometry) VALUES (
    1,
    SDO_GEOMETRY(3002, 2157, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(561695.935, 834005.726, 25.865,
    561696.229, 834005.955, 25.867, 561686.278, 834015.727, 26.088, 561685.179, 834019.771, 26.226, 561680.716, 834022.389, 26.226,
    561674.434, 834025.125, 26.171, 561671.963, 834032.137, 25.667, 561670.832, 834037.185, 25.619, 561667.946, 834042.976, 25.84,
    561666.717, 834047.218, 26.171, 561664.229, 834051.781, 26.778, 561660.041, 834055.935, 26.64, 561657.514, 834061.742, 26.53,
    561658.59, 834067.116, 27.882, 561657.67, 834070.739, 28.821, 561653.028, 834073.777, 29.042, 561653.234, 834078.769, 28.379,
    561658.336, 834080.105, 29.511, 561664.582, 834079.468, 31.94, 561669.257, 834075.821, 33.707, 561672.716, 834074.456, 33.707,
    561676.875, 834077.262, 33.735, 561675.868, 834081.55, 33.707, 561673.131, 834087.641, 33.679, 561672.208, 834093.502, 33.238,
    561668.578, 834100.894, 33.735, 561666.013, 834106.399, 33.679, 561661.408, 834111.23, 33.514, 561654.854, 834117.181, 33.486,
    561651.695, 834122.292, 33.569, 561649.112, 834128.847, 33.431, 561645.982, 834134.786, 33.293, 561642.485, 834141.235, 33.072,
    561642.138, 834150.085, 33.293, 561646.072, 834159.721, 36.578, 561647.274, 834165.532, 37.02, 561646.359, 834170.867, 37.02,
    561645.42, 834175.485, 36.799, 561642.44, 834180.977, 36.826, 561638.677, 834185.419, 36.771, 561636.693, 834194.824, 37.158,
    561635.462, 834202.105, 37.241, 561631.998, 834208.745, 37.268, 561628.871, 834213.994, 37.241, 561627.554, 834220.393, 37.82,
    561625.79, 834226.697, 39.532, 561620.561, 834236.494, 39.891, 561619.265, 834249.687, 39.697, 561619.883, 834260.02, 41.326,
    561620.977, 834264.399, 43.093, 561622.557, 834270.723, 43.452, 561622.172, 834276.978, 43.452, 561621.347, 834285.541, 43.479,
    561622.214, 834292.055, 43.645, 561619.718, 834302.583, 43.755, 561616.762, 834316.47, 43.755, 561608.842, 834328.241, 43.7,
    561606.346, 834334.93, 43.7, 561605.27, 834341.929, 43.7, 561603.925, 834350.648, 43.728, 561602.462, 834358.405, 43.838,
    561599.552, 834366.629, 44.031, 561594.551, 834374.291, 43.396, 561590.644, 834383.986, 43.065, 561588.48, 834392.21, 44.942,
    561586.923, 834397.32, 46.737, 561584.608, 834402.898, 49.299, 561581.389, 834410.194, 50.077, 561580.437, 834419.49, 51.907,
    561580.438, 834427.63, 53.127, 561582.245, 834433.389, 55.791, 561586.664, 834433.397, 57.503, 561593.88, 834433.608, 57.475,
561596.305, 834439.653, 57.42, 561591.804, 834445.862, 57.309, 561589.097, 834447.689, 57.014)));
    SELECT sdo_geom.validate_geometry_with_context(geometry, 0.0005) FROM TEST1;
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'TEST1' AND column_name = 'GEOMETRY';
    INSERT INTO user_sdo_geom_metadata VALUES ('TEST1','GEOMETRY', 
         MDSYS.SDO_DIM_ARRAY(
         MDSYS.SDO_DIM_ELEMENT('X',400000,750000,0.0005),
         MDSYS.SDO_DIM_ELEMENT('Y',500000,1000000,0.0005),      
         MDSYS.SDO_DIM_ELEMENT('Z',-10000,10000,0.0005)     
    ), 2157);
    DROP INDEX TEST1_SPIND;
CREATE INDEX TEST1_SPIND ON TEST1(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX PARAMETERS ('layer_gtype=line sdo_indx_dims=2');

And here's another table with a 2D geometry and a 2D index:
    CREATE TABLE TEST2 (
    ID                NUMBER PRIMARY KEY,
    GEOMETRY     SDO_GEOMETRY);
    INSERT INTO TEST2 (id, geometry) VALUES (
    1,
    SDO_GEOMETRY(2002, 2157, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(561816.516, 834055.581, 561819.504, 834057.173,
    561817.942, 834060.818, 561810.044, 834078.997, 561805.576, 834087.634, 561801.572, 834094.299, 561798.558, 834100.467,
    561796.254, 834107.637, 561793.754, 834115.605, 561794.049, 834123.694, 561793.698, 834130.518, 561792.905, 834138.883,
    561787.867, 834145.772, 561782.544, 834150.548, 561777.707, 834156.53, 561773.945, 834161.32, 561771.061, 834166.957,
    561768.155, 834173.131, 561764.735, 834178.744, 561759.603, 834187.782, 561756.146, 834195.493, 561753.416, 834198.821,
    561754.141, 834205.691, 561756.768, 834209.681, 561757.217, 834216.701, 561753.086, 834232.46, 561744.371, 834254.589,
    561740.936, 834263.001, 561737.198, 834272.208, 561732.231, 834284.915, 561730.52, 834297.01, 561728.339, 834310.053,
    561727.825, 834328.069, 561730.461, 834342.992, 561729.808, 834367.948, 561730.216, 834396.988, 561732.273, 834419.047,
    561732.783, 834424.668, 561731.647, 834432.212, 561731.872, 834439.436, 561731.39, 834449.269, 561732.041, 834462.813,
    561733.583, 834471.926, 561733.229, 834485.049, 561730.868, 834498.462, 561726.379, 834512.59, 561725.776, 834528.932,
    561727.488, 834555.23, 561729.357, 834577.873, 561731.05, 834595.931, 561731.163, 834611.928, 561734.057, 834637.031,
    561732.67, 834636.4, 561725.401, 834633.796, 561721.039, 834632.493, 561718.777, 834632.167, 561710.437, 834632.888,
    561647.929, 834636.658, 561644.963, 834630.085, 561632.796, 834629.813, 561625.553, 834627.647, 561620.473, 834626.711,
    561608.718, 834624.94, 561599.935, 834619.684, 561596.67, 834613.843, 561594.27, 834607.774, 561592.513, 834601.752,
    561591.349, 834593.899, 561597.265, 834584.888, 561595.956, 834571.479, 561595.075, 834556.196, 561593.997, 834539.68,
    561594.316, 834528.071, 561595.261, 834516.44, 561595.538, 834504.804, 561597.227, 834497.417, 561599.3, 834490.416,
    561601.265, 834482.61, 561605.126, 834475.502, 561599.232, 834473.683, 561593.076, 834471.379, 561599.154, 834451.112,
    561589.097, 834447.689, 561591.804, 834445.862, 561596.305, 834439.653, 561593.88, 834433.608, 561582.245, 834433.389,
    561580.438, 834427.63, 561580.437, 834419.49, 561581.389, 834410.194, 561584.608, 834402.898, 561586.923, 834397.32,
    561588.48, 834392.21, 561590.644, 834383.986, 561594.551, 834374.291, 561599.552, 834366.629, 561602.462, 834358.405,
    561603.925, 834350.648, 561605.27, 834341.929, 561606.346, 834334.93, 561608.842, 834328.241, 561616.762, 834316.47,
    561619.718, 834302.583, 561622.214, 834292.055, 561621.347, 834285.541, 561622.172, 834276.978, 561622.557, 834270.723,
    561620.977, 834264.399, 561619.883, 834260.02, 561619.265, 834249.687, 561620.561, 834236.494, 561625.79, 834226.697,
    561627.554, 834220.393, 561628.871, 834213.994, 561631.998, 834208.745, 561635.462, 834202.105, 561636.693, 834194.824,
    561638.677, 834185.419, 561642.44, 834180.977, 561645.42, 834175.485, 561646.359, 834170.867, 561647.274, 834165.532,
    561646.072, 834159.721, 561642.138, 834150.085, 561642.485, 834141.235, 561645.982, 834134.786, 561649.112, 834128.847,
    561651.695, 834122.292, 561654.854, 834117.181, 561661.408, 834111.23, 561666.013, 834106.399, 561668.578, 834100.894,
    561672.208, 834093.502,561673.131, 834087.641, 561675.868, 834081.55, 561676.875, 834077.262, 561672.716, 834074.456,
    561669.257, 834075.821, 561664.582, 834079.468, 561658.336, 834080.105, 561653.234, 834078.769, 561653.028, 834073.777,
    561657.67, 834070.739, 561658.59, 834067.116, 561657.514, 834061.742, 561660.041, 834055.935, 561664.229, 834051.781,
    561666.717, 834047.218, 561667.946, 834042.976, 561670.832, 834037.185, 561671.963, 834032.137, 561674.434, 834025.125,
    561680.716, 834022.389, 561685.179, 834019.771, 561686.278, 834015.727, 561696.229, 834005.955, 561695.935, 834005.726,
    561677.805, 833994.91, 561683.163, 833985.817, 561703.01, 833949.434, 561725.891, 833961.856, 561744.35, 833971.197,
    561768.396, 833983.86, 561777.842, 833988.883, 561798.333, 833999.743, 561797.243, 834005.725, 561783.574, 834040.515,
    561798.127, 834046.391, 561807.001, 834050.509, 561816.516, 834055.581)));
    SELECT sdo_geom.validate_geometry_with_context(geometry, 0.0005) FROM TEST2;
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'TEST2' AND column_name = 'GEOMETRY';
    INSERT INTO user_sdo_geom_metadata VALUES ('TEST2','GEOMETRY', 
         MDSYS.SDO_DIM_ARRAY(
         MDSYS.SDO_DIM_ELEMENT('X',400000,750000,0.0005),
         MDSYS.SDO_DIM_ELEMENT('Y',500000,1000000,0.0005)
    ), 2157);
    DROP INDEX TEST2_SPIND;
CREATE INDEX TEST2_SPIND ON TEST2(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX PARAMETERS ('layer_gtype=line sdo_indx_dims=2');

Now if I check how these two geometries relate to each other, the answer is OVERLAPBDYDISJOINT, which makes sense when inspecting the geometries.
    SQL> SELECT
      2  sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate_1_to_2,
      3  sdo_geom.relate(t2.geometry, 'determine', t1.geometry,  0.0005) relate_2_to_1
      4  FROM test2 t2, test1 t1
      5  WHERE t1.id = t2.id;
    RELATE_1_TO_2        RELATE_2_TO_1
    OVERLAPBDYDISJOINT   OVERLAPBDYDISJOINT
1 row selected.

So, I'd expect this query to return something...
    SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
    FROM test2 t2, test1 t1
    WHERE sdo_relate(t1.geometry, t2.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
    AND t1.id = 1
AND t2.id = 1;

Nada. And this...
    SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
    FROM test1 t1, test2 t2
    WHERE sdo_relate(t2.geometry, t1.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
    AND t1.id = 1
AND t2.id = 1;

Nada.
    And this...
    SQL> SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
      2  FROM test2 t2, test1 t1
      3  WHERE sdo_relate(t2.geometry, t1.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
      4  AND t1.id = 1
      5  AND t2.id = 1;
            ID         ID RELATE
             1          1 OVERLAPBDYDISJOINT
1 row selected.

This version gives the right answer.
    Can anyone explain this?

Hi,
I think you are running into bugs 7158518 and 7710726.
Could you please request the patches for these bugs, so that they get published on Metalink if they don't exist there?
Please let us know if these fix your problem.
    Best regards
    baris

  • Query Performance and Result Set Query/Sub-Query

    Hi,
I have an InfoSet with as many as 15 joins between different ODS objects and master data. The ODS objects are quite huge (20 million and 160 million records). Queries are taking a long time even for a few days' worth of data, and we need to get at least 3 months of data in a reasonable amount of time.
Could anyone please tell me whether a sub-query or result set query has to be written against the same InfoProvider (cube, InfoSet, etc.), or whether they can be used in queries on different InfoProviders?
Please advise. Thanks in advance.
    Regards
    Anil

    Hi Bhanu,
I need some help defining the precalculated value set, as I wasn't successful...
Please suggest answers to the questions below, if possible:
1) Can I use filter criteria to restrict the value set of the characteristic when I define a query on an ODS? When I tried this, it gave me errors such as:
"System error in program CL_RSR_REQUEST and form EXECUTE_VTABLE:COB_PRO_GET_ALWAYS... Error when filling value set DELIVERY... Job cancelled after system exception ERROR_MESSAGE"
2) Can I create a value set predefined on an InfoSet? I was not successful, as InfoSets have names such as "InfosetName_F000232" and I cannot find the characteristic in the Parameter tab for the value set in the Reporting Agent.
3) How does the precalculated value set variable help if I am not using any filter criteria, and it stores the list of all values for the variable from the ODS, which consists of 20 million records?
    Thanks for your help.
    Kind Regards
    Anil
