SDO_RELATE mask EQUAL giving erroneous results

I'm working in Oracle 9i, attempting to test our list of maps for duplicates using SDO_RELATE with the EQUAL mask. We're trying to detect whether there are any changes between one version of a map for a given territory and its older/newer versions.
When I compare what I know to be equal maps with the query described above, I get a "TRUE" response as expected. When I compare a given map to the entire table of maps, I sometimes get one extra map reported as equal as well. When I pull up the supposedly equal map, it is not at all equal but merely shares one common border. It's like Missouri and Illinois being called equal. According to all the documentation this should not be the result: TRUE results should be truly EQUAL. Is there any known bug about this in Oracle 9i, or a way to make it work correctly? How equal are EQUAL results likely to be?
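For reference, a minimal sketch of the kind of comparison described, assuming a hypothetical MAPS table with an SDO_GEOMETRY column GEOM and a MAP_ID key (the post does not give the real table or column names):

SELECT b.map_id
FROM   maps a, maps b
WHERE  a.map_id = :given_map_id
AND    b.map_id <> a.map_id
AND    SDO_RELATE(b.geom, a.geom, 'mask=EQUAL querytype=WINDOW') = 'TRUE';
-- Expectation: only geometries exactly equal to the given map are returned;
-- the poster is instead also getting a neighbouring territory that merely shares a border.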

You should try going up to 9.2.0.8 (9208), as many bugs of this type are fixed in that patch set.
siva
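
To confirm the exact patch level before and after applying the patch set, a quick check against the standard data dictionary view:

SELECT banner FROM v$version;
-- expect something like "Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production" once 9208 is applied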

Similar Messages

  • Goto link query is not giving exact results

    Hi Folks
    I am having an issue with a GOTO query.
    My main query gives details of employee separation in a particular year.
    For this query I have a GOTO query.
    When I check the details of the GOTO query, it gives incorrect results.
    Your help is appreciated.
    Thanks & Regards,
    Hari Reddy

    Hi Hari,
        Check in RSBBS, whether you specified the receiver query correctly...
    Check the link:
    [http://help.sap.com/saphelp_nw04/helpdata/en/99/08629bd3e41d418530c6849df303c9/content.htm]
        Hope this helps you.
    Regards,
    Yokesh.

  • OMBRETRIEVE is giving incorrect results

    Hi my fellow OMBPLUS developers.
    I have written a Tcl script to retrieve the TARGET_LOAD_ORDERING property (a checkbox) from a number of mappings in my OWB repository. The command that returns True or False is:
    OMBRETRIEVE MAPPING '$Mapping Name' GET PROPERTIES (TARGET_LOAD_ORDERING)
    The results are fine for around 90% of the mappings, but I am finding cases where the following error is returned for some mappings:
    OMB02918: Property TARGET_LOAD_ORDERING of <Mapping Name> does not exist: MMM1034: Property TARGET_LOAD_ORDERING does not exist.
    When looking at the mappings that throw this error, I can see via OWB that they do have Target Load Order checked. Simply unchecking/rechecking it and committing the change to the repository then fixes the issue, and the OMBRETRIEVE command returns the correct result. Does anybody know why this is happening?
    Any help would be most appreciated.
    Regards
    Mitesh

    Re: OMBRETRIEVE is giving incorrect results
    Hi,
    Enable the “Use Target Load Ordering” configuration option in the mapping configuration.
    http://www.oracle.com/technetwork/developer-tools/warehouse/owb-feature-management-licensing-344706.pdf
    BR,
    IM

  • SharePoint 2013 public facing site - need to mask url in search results

    Hi, we have an SP2013 public-facing site. Can we mask the display URL in search results? We are pointing two URLs (e.g. A and B) to the same web application. Search results were crawled using the A URL, so when people search while browsing the B URL, the search results show URLs from the A site. Please let me know if anyone has a possible solution.
    Can we have more than one URL for the internet zone in AAM? I browsed blogs and most of them say it is not possible.
    Thanks,
    JB

    Please don't create multiple questions for the same issue; the one below is the duplicate thread:
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/25a31628-1a96-4d6d-a792-3493af5bdd83/unable-to-find-manage-site-feature-in-sharepoint-2013-public-facing-site?forum=sharepointgeneral
    My Blog- http://www.sharepoint-journey.com|
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful

  • Query giving wrong results

    Below is the query that's giving me wrong results... can anyone help me figure out where I am going wrong? :(
    SQL> select count(*) from t where source='LP1.1';
      COUNT(*)
            69
    SQL> select count(*) from tblspring where line_a='LP1.1';
      COUNT(*)
           233
    Now when I join these two tables, the total count exceeds the number of records I expect:
    SQL> select count(*) from t , tblspring where t.source=tblspring.line_a and t.source='LP1.1';
      COUNT(*)
         16077
    The thing is, I want to filter only those records from table t where the column named source is equal to the column named line_a from table tblspring. From the query on table t I get 69 records, so when I match against table tblspring with that condition, I want the number of records to be less than or equal to 69, not more than 69.
    Edited by: Suhail Faraaz on Mar 5, 2011 7:31 AM
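    The join multiplies rows because 'LP1.1' appears many times in tblspring.line_a: 69 matching rows in t times 233 matching rows in tblspring gives exactly 16077. A hedged sketch of a semi-join that keeps the count at or below 69, using the table and column names from the post:
    -- Count rows of t that have at least one match in tblspring,
    -- without repeating each row once per match.
    SELECT COUNT(*)
    FROM   t
    WHERE  t.source = 'LP1.1'
    AND    EXISTS (SELECT 1
                   FROM   tblspring s
                   WHERE  s.line_a = t.source);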

  • Group by Query not giving desired results

    Hi,
    I've a requirement to find minimum month based on status:
    The following query is giving error :
    SELECT
              b.app_name,
              DECODE (a.status,
              'C','Closed',
              'O','Open',
              'F','Future',
              'W','Pending',
              'N','Not Opened') decode_status
              MIN(a.period_name)
    FROM     table a,
         table b
    WHERE     a.app_id     =     b.app_id
    AND          b.app_name      =     'NAME1'
    AND          a.book_id     =     &book_id
    AND          a.status      =      'O'
    GROUP BY      b.app_name,
              DECODE (a.status,
              'C','Closed',
              'O','Open',
              'F','Future',
              'W','Pending',
              'N','Not Opened') decode_status
    for ex: in the above query if I've four records with status 'O' and period_name as 'May-12', 'Jun-12','Jul-12' ,'Aug-12' then I need to pick 'May'
    Thanks,
    Kiran

    Hi, Kiran,
    It looks like you're almost there.
    If period_name is a DATE, then MIN(a.period_name) is finding the earliest period name, such as 5 May 2012 17:03:49. If you just want to see 'May 2012', then change that to:
    TO_CHAR ( MIN (a.period_name), 'Mon YYYY' )
    If a.period_name is a VARCHAR2, then change it to a DATE. The best way to do this is permanently: there is no reason to store date information in VARCHAR2 columns. Oracle supplies DATE columns at no extra cost, and they were designed for storing date information, so use them for that.
    If you must keep your date information in a VARCHAR2 column, then use TO_DATE in the query. It will be slow, and you'll get run-time errors if any of the information is in the wrong format. That's what happens when you store date information in VARCHAR2 columns.
    I hope this answers your question.
    If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you're using.
    See the forum FAQ {message:id=9360002}
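    For completeness, a hedged sketch of the corrected query, using placeholder table names table_a and table_b (the post literally uses the reserved word "table") and assuming period_name is a DATE. Note the comma that was missing after decode_status in the posted SELECT list, and that the alias cannot be repeated inside the GROUP BY:
    SELECT   b.app_name,
             DECODE (a.status, 'C', 'Closed',
                               'O', 'Open',
                               'F', 'Future',
                               'W', 'Pending',
                               'N', 'Not Opened') AS decode_status,   -- comma was missing here
             TO_CHAR (MIN (a.period_name), 'Mon YYYY') AS min_period
    FROM     table_a a,
             table_b b
    WHERE    a.app_id   = b.app_id
    AND      b.app_name = 'NAME1'
    AND      a.book_id  = &book_id
    AND      a.status   = 'O'
    GROUP BY b.app_name,
             DECODE (a.status, 'C', 'Closed',
                               'O', 'Open',
                               'F', 'Future',
                               'W', 'Pending',
                               'N', 'Not Opened');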

  • SPEL not giving desired result

    Hi ALL,
    SPEL
    *${oa.TestPVO1.xxDisable}* on Disable Property for a button
    In CO
    OAApplicationModule am = (OAApplicationModule)pageContext.getApplicationModule(webBean);
    am.invokeMethod("hide");
    In AM
    public void hide()
    {
        OAViewObject disvo = (OAViewObject)getTestPVO1();
        if (!disvo.isPreparedForExecution())
            disvo.executeQuery();
        Row drow = disvo.createRow(); //.getCurrentRow();
        disvo.insertRow(drow);
        drow.setAttribute("xxDisable", Boolean.FALSE);
    }
    This is NOT giving the desired output to disable the button.
    Thanks,
    Sombit.
    Edited by: Sombit on Oct 1, 2009 7:33 AM

    Try assigning the SPEL value in the *VORowImpl.
    First, in order to assign a value to SPEL, you'll need to create a custom VO. Here's a sample query I've used to create my VO, pulling data about whether an attribute should be mandatory/rendered.
    SELECT min(descriptive_flex_context_code) descriptive_flex_context_code, min(Enabled) Enabled, min(END_USER_COLUMN_NAME) END_USER_COLUMN_NAME, min(required_flag) required_flag, min(application_column_name) application_column_name,
    min(xxx.LOV) LOV
    from
    (SELECT descriptive_flex_context_code,'Y' Enabled, END_USER_COLUMN_NAME, required_flag, application_column_name,
    decode(ffvv.flex_value,null,'N','Y') LOV
    FROM FND_DESCR_FLEX_COL_USAGE_VL fdfc,
    fnd_flex_values_vl ffvv
    WHERE (fdfc.APPLICATION_ID=20045)
    and (fdfc.DESCRIPTIVE_FLEXFIELD_NAME LIKE 'SPL Account Map')
    and fdfc.enabled_flag = 'Y'
    and fdfc.display_flag = 'Y'
    and fdfc.application_column_name = 'ATTRIBUTE5'
    and ffvv.flex_value_set_id(+) = fdfc.flex_value_set_id
    and rownum = 1) xxx
    Once the VO is created, I created several transient "boolean" attributes in my VO. These attributes will be assigned a value based on code I place in my *VORowImpl since this is the file which controls attribute assignments.
    But before that, in my *VOImpl, I've defined a custom method which is used to execute the VO from my AM.
    public class Ref5PVOImpl extends OAViewObjectImpl
    {
        /** This is the default constructor (do not remove). */
        public Ref5PVOImpl()
        {
        }

        public void initRef5PVO()
        {
            // setWhereClause("descriptive_flex_context_code in (select vendor_name from spl_account_map where cont_cd = :1)");
            setWhereClauseParams(null); // Always reset
            // setWhereClauseParam(0, acct);
            // System.out.println("value_selected customer>> "+acct);
            executeQuery();
        }
    }
    In my *VORowImpl, I've added custom code to my Attribute methods so that they return the proper values when invoked...
    public String getMakeRequired() {
        if (getRequiredFlag() != null) {
            if ("Y".equals(getRequiredFlag())) {
                // if (getMessageChoiceAttribute() == null)
                System.out.println("Required returned yes");
                return "yes";
            } else {
                System.out.println("Required returned no");
                return "no";
            }
        } else {
            return "no"; // poplist will be disabled - DFF is not configured
        }
    }

    public Boolean getOptionRender() {
        if (getLov() != null) {
            // In this method check if the Message Text Input is valid
            if ("N".equals(getLov())) {
                System.out.println("OptionRender returned true " + getLov());
                return Boolean.TRUE; // text input will be enabled if no LOV exists
            } else {
                System.out.println("OptionRender returned false " + getLov());
                return Boolean.FALSE; // text input will be disabled if LOV exists
            }
        } else {
            return Boolean.FALSE; // text input will be disabled - DFF is not configured
        }
    }
    Then add your SPEL directive into your field's Property inpector...
    ${oa.Ref5PVO1.MakeRequired}
    ${oa.Ref5PVO1.OptionRender}

  • % Change is not giving correct results.

    Hi,
    I am using the Ago function to calculate current month customer count, previous month customer count, current month account count and previous month account count, and in the report parameter I select Reporting Month as DEC 09. The results come out perfectly fine... but when I create % Change Previous Month Cx Count and % Change Previous Month Account Count, they do not show the correct results: I get only 0% even when the difference is large and should be around 10%.
    The code written to calculate % change is :
    CASE
      WHEN ("EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" IS NULL
            OR "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" = 0)
       AND NOT "EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count" IS NULL
       AND "EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count" <> 0
        THEN 100.0
      WHEN ("EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" IS NULL
            OR "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" = 0)
       AND ("EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count" IS NULL
            OR "EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count" = 0)
        THEN 0.0
      WHEN "EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count" IS NULL
       AND NOT "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" IS NULL
       AND "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count" <> 0
        THEN -(100.0)
      ELSE 100.0 * ("EIP BDW Analytics"."EIP Reporting FACT"."Current Month Account Count"
                    - "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count")
                 / "EIP BDW Analytics"."EIP Reporting FACT"."Previous Month Account Count"
    END
    Can some one tell me what I need to do in order to get the desired results.
    I am using a time dimension with the following hierarchy ( Year - Month - Period_ID) joined with the Reporting Table Fact which has count of cx and Count of Accts.
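    As a quick sanity check on the formula itself: with Current Month Account Count = 110 and Previous Month Account Count = 100, the ELSE branch evaluates to 100.0 * (110 - 100) / 100 = 10, so the CASE expression does yield 10% when its inputs differ. A flat 0% therefore suggests (this is an inference, not something stated in the thread) that the Ago-based Previous Month measure is coming back equal to the Current Month value, or that both are NULL/0, at the grain the report is querying, rather than the arithmetic being wrong.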

    We have two assets, one acquired on 01/04/2014 and another on 26/04/2014. Everything else is the same for both assets, such as asset class, keys, etc.
    Now here are two calculations.
    The system's calculation is OK; what should we do so that both assets are left with the same scrap value (1047), and not 1119.68 as in the last line?
    Can anyone help with how to get a value in the last field of the screenshot given below (asset value date), or is it always blank?
    I also wanted to add that in our case 'Dep to the day' is also ticked in the depreciation key.
    regards
    Sanjeev Mehndiratta

  • LessFilter and  ReflectionExtractor API giving incorrect results

    I am using Oracle Coherence version 3.7. We store DTO objects in the cache that have a "modificationTime" property/instance variable of type java.util.Date. To fetch data from the cache, passing a java.util.Date variable as the comparison input, the LessFilter and ReflectionExtractor APIs are used. Cache.entrySet(filter) returns incorrect results.
    Note: we use "com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1)" to store data in the cache and "com.tangosol.io.pof.PofReader.readDate(int arg0)" to read data from the cache. There is no readDateTime API available?
    We tested the same scenario with an updated DTO class. It now has another property of type long (to store milliseconds). When the long is passed as the comparison input to LessFilter and ReflectionExtractor, correct results are retrieved.
    Ideally, a java.util.Date or the corresponding milliseconds passed as input should filter and return the same, logically correct results.
    Code:
    1) Test by date: returns incorrect results
    public void testbyDate(final Date startDate) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
        LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
        final Filter lessFilter = new LessFilter(extractor, startDate);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }
    2) Test by milliseconds: returns correct results
    public void testbyTime(final Long time) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
        LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
        final Filter lessFilter = new LessFilter(extractor, time);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }

    Hi Harvy,
    Thanks for your reply. You validated it against a single object in cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary. But we are querying against a collection of objects in cache.
    Please have a look at below code.
    *1)* We are using TestDTO.java extending AbstractCacheDTO.java as value object for our cache.
    import java.io.IOException;
    import java.util.Date;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    * The Class AbstractCacheDTO.
    * @param <E>
    *            the element type
    * @author apanwa
    public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {
        /** The Constant IDENTIFIER. */
        private static final int IDENTIFIER = 0;
        /** The Constant CREATION_TIME. */
        private static final int CREATION_TIME = 1;
        /** The Constant MODIFICATION_TIME. */
        private static final int MODIFICATION_TIME = 2;
        /** The version number of cache DTO implementation **/
        private static final int VERSION = 11662;
        /** The id. */
        private E id;
        /** The creation time. */
        private Date creationTime = new Date();
        /** The modification time. */
        private Date modificationTime;
         * Gets the id.
         * @return the id
        public E getId() {
            return id;
         * Sets the id.
         * @param id
         *            the new id
        public void setId(final E id) {
            this.id = id;
         * Gets the creation time.
         * @return the creation time
        public Date getCreationTime() {
            return creationTime;
         * Gets the modification time.
         * @return the modification time
        public Date getModificationTime() {
            return modificationTime;
         * Sets the modification time.
         * @param modificationTime
         *            the new modification time
        public void setModificationTime(final Date modificationTime) {
            this.modificationTime = modificationTime;
         * Read external.
         * @param reader
         *            the reader
         * @throws IOException
         *             Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader)
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            id = (E) reader.readObject(IDENTIFIER);
            creationTime = reader.readDate(CREATION_TIME);
            modificationTime = reader.readDate(MODIFICATION_TIME);
         * Write external.
         * @param writer
         *            the writer
         * @throws IOException
         *             Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter)
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            writer.writeObject(IDENTIFIER, id);
            writer.writeDateTime(CREATION_TIME, creationTime);
            writer.writeDateTime(MODIFICATION_TIME, modificationTime);
        @Override
        public int getImplVersion() {
            return VERSION;
    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    * @author nkhatw
    public class TestDTO extends AbstractCacheDTO<TestIdentifier> {
        private Long timeinMillis;
        private static final int TIME_MILLIS_ID = 3;
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            super.readExternal(reader);
            timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            super.writeExternal(writer);
            writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
         * @return the timeinMillis
        public Long getTimeinMillis() {
            return timeinMillis;
         * @param timeinMillis
         *            the timeinMillis to set
        public void setTimeinMillis(final Long timeinMillis) {
            this.timeinMillis = timeinMillis;
    }*2)* TestIdentifier.java as key in cache for storing TestDTO objects.
    import java.io.IOException;
    import org.apache.commons.lang.StringUtils;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    * @author nkhatw
    public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {
        private String recordId;
        /** The Constant recordId. */
        private static final int RECORD_ID = 0;
        /** The version number of cache DTO implementation *. */
        private static final int VERSION = 11660;
        @Override
        public void readExternal(final PofReader pofreader) throws IOException {
            recordId = pofreader.readString(RECORD_ID);
        @Override
        public void writeExternal(final PofWriter pofwriter) throws IOException {
            pofwriter.writeString(RECORD_ID, recordId);
        @Override
        public int getImplVersion() {
            return VERSION;
        @Override
        public boolean equals(final Object object) {
            if (object instanceof TestIdentifier) {
                final TestIdentifier id = (TestIdentifier) object;
                return StringUtils.equals(recordId, id.getRecordId());
            } else {
                return false;
         * @see java.lang.Object#hashCode()
        @Override
        public int hashCode() {
            return recordId.hashCode();
         * @return the recordId
        public String getRecordId() {
            return recordId;
         * @param recordId
         *            the recordId to set
        public void setRecordId(final String recordId) {
            this.recordId = recordId;
    }*3) Use Case*
    We are fetching TestDTO records from cache based on LessFilter. However, results returned from cache differs if query is made over property "getModificationTime" of type java.util.Date or over property "getTimeinMillis" of type Long(milliseconds corresponding to date). TestService.java is used for the same.
    import java.io.IOException;
    import java.util.Collection;
    import java.util.Date;
    import java.util.Map;
    import java.util.Set;
    import org.apache.log4j.Logger;
    import com.ladbrokes.dtos.cache.TestDTO;
    import com.ladbrokes.dtos.cache.TestIdentifier;
    import com.cache.services.CacheService;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.LessFilter;
    * @author nkhatw
    public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {
        private static final String TEST_CACHE = "testcache";
        private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
        private static final Logger LOGGER = Logger.getLogger(TestService.class);
         * Push DTO objects with a) modTime of java.util.Date type b) timeInMillis of Long type
         * @throws IOException
        public void init() throws IOException {
            for (int i = 0; i < 30; i++) {
                final TestDTO dto = new TestDTO();
                final Date modTime = new Date();
                dto.setModificationTime(modTime);
                final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
                dto.setTimeinMillis(timeInMillis);
                final TestIdentifier testId = new TestIdentifier();
                testId.setRecordId(String.valueOf(i));
                dto.setId(testId);
                final CacheService testService = new TestService();
                testService.createOrUpdate(dto, null);
                LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime + " Time in millis: "
                    + timeInMillis);
         * 1) Fetch Data from cache based on LessFilter with args:
         * a) ValueExtractor: extracting time property
         * b) java.util.Date value to be compared with
         * 2) Verify extracted entryset
         * @throws IOException
        public void testbyDate(final Date startDate) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
            LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
            final Filter lessFilter = new LessFilter(extractor, startDate);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
         * 1) Fetch Data from cache based on LessFilter with args:
         * a) ValueExtractor: extracting "time in millis  property"
         * b) java.Long value to be compared with
         * 2) Verify extracted entryset
        public void testbyTime(final Long time) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
            LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
            final Filter lessFilter = new LessFilter(extractor, time);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        @Override
        public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
            CACHE.put(testDTO.getId(), testDTO);
        @Override
        public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
            // YTODO Auto-generated method stub
        @Override
        public <G>G read(final TestIdentifier arg0) throws IOException {
            // YTODO Auto-generated method stub
            return null;
        @Override
        public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
            // YTODO Auto-generated method stub
            return null;
        @Override
        public void remove(final TestDTO arg0) throws IOException {
            // YTODO Auto-generated method stub
    Use Case execution Results:
    "testbyTime" method returns correct results.
    However, "testbyDate" method gives random and incorrect results.

  • Query giving no result as well as no error

    hi...good evening all...
    I have a database server on a machine. The clients connected to it reported that they cannot execute any queries against the schema in that database. I tried and got the same result.
    There was no error, but the status was EXECUTING. A simple select count(1) from a table kept executing for a very long time without producing any output.
    I bounced the database and after that it is working fine.
    My question is: may I get any idea about why the database behaved like that?
    please help.

    Hi,
    What is your DB version?
    Secondly, if you had checked the session waits while the problem was occurring, you might have got some results/information about what the session was doing.
    - Pavan Kumar N
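    For reference, a minimal sketch of the kind of check suggested, assuming the DBA can query the V$ views while the hang is occurring (:hung_sid is a placeholder for the stuck session's SID):
    -- What is the hanging session currently waiting on?
    SELECT sid, event, state, wait_time, seconds_in_wait
    FROM   v$session_wait
    WHERE  sid = :hung_sid;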

  • Opening Illustrator 8 eps is giving wrong results

    I have an eps (in Illustrator 8 format) where I manually changed the encoding to include east-european characters.
    When I open this eps in the latest Illustrator version, I don't see those characters.
    The attempts I made are:
    in a TE block
    39/quotesingle 96/grave 130/quotesinglbase/florin/quotedblbase/ellipsis
    /dagger/daggerdbl/circumflex/perthousand/Scaron/guilsinglleft/OE 145/quoteleft
    /quoteright/quotedblleft/quotedblright/bullet/endash/emdash/tilde/trademark
    /scaron/guilsinglright/oe/dotlessi 159/Ydieresis /space 164/currency 166/brokenbar
    168/dieresis/copyright/ordfeminine 172/logicalnot/hyphen/registered/macron/ring
    /plusminus/twosuperior/threesuperior/acute/mu 183/periodcentered/cedilla
    /onesuperior/ordmasculine 188/onequarter/onehalf/threequarters 192/Agrave
    /Aacute/Acircumflex/Atilde/Adieresis/Aring/AE/Ccedilla/Egrave/Eacute
    /Ecircumflex/Edieresis/Igrave/Iacute/Icircumflex/Idieresis/Eth/Ntilde
    /Ograve/Oacute/Ocircumflex/Otilde/Odieresis/multiply/Oslash/Ugrave
    /Uacute/Ucircumflex/Udieresis/Yacute/Thorn/germandbls/agrave/aacute
    /acircumflex/atilde/adieresis/aring/ae/ccedilla/egrave/eacute/ecircumflex
    /edieresis/igrave/iacute/icircumflex/idieresis/eth/ntilde/ograve/oacute
    /ocircumflex/otilde/odieresis/divide/oslash/ugrave/uacute/ucircumflex
    /udieresis/yacute/thorn/ydieresis
    TE
    and further down: (128 80 \277) Tx
    I tried to replace the Agrave with Lslash, but Illustrator seems to ignore that part when opening the eps, and still displays a À
    Also tried with the TZ block to reencode the font:
    %AI3_BeginEncoding: _Helvetica-Condensed-Bold Helvetica-Condensed-Bold
    [ 191/Lslash /_Helvetica-Condensed-Bold/Helvetica-Condensed-Bold 0 0 1 TZ
    %AI3_EndEncoding AdobeType
    and further down: (128 80 \277) Tx
    That change is also ignored by Illustrator.
    The character is properly displayed when I open the eps and type/paste it in Illustrator, so I know the font supports it.
    Am I doing something wrong here?
    An alternative route could be to use the Illustrator SDK as a platform for writing eps files, as I assume the SDK properly supports Unicode characters.
    I did some research on that, but found that the SDK is only for making Illustrator plug-ins and does not support standalone applications, so it's not suitable for automated eps generation.
    Background:
    I've created software which generates eps files with cartographic maps in them for publishing, printing or other purposes.
    Recently, we got a new customer based in eastern Europe which wants their special, non-ISO-Latin characters printed correctly.
    Hope someone can give me any advice on how to proceed...
    Greetings,
    Paul

    I just realized that ORDER BY CASE x.rest isn't giving me 's','m','t','w','th','f','sh',
    but substring(Rest, 1, charindex('~', Rest+'~') - 1) is giving me 's','m','t','w','th','f','sh',
    so I switched the ORDER BY to be
    ORDER BY CASE substring(Rest, 1, charindex('~', Rest+'~') - 1) WHEN 'S' THEN 1 WHEN 'M' THEN 2 WHEN 'T' THEN 3 WHEN 'W' THEN 4
    WHEN 'TH' THEN 5 WHEN 'F' THEN 6 WHEN 'SH' THEN 7 ELSE 8 END, x.FirstWord
    That got the desired results.
    Debra has a question

  • Query giving wrong result

    Hi
    Could anybody tell me why this query is giving the wrong result? The column 'Spend for Selected Period' shows far too much (exactly 11 times too much!) when totalling the 37 invoices involved.
    SELECT T0.[CardCode], T0.[CardName],T0.[MailCity] AS 'Town', T2.[SlpName] AS 'Rep',
    SUM(T1.DocTotal - T1.VatSum)  AS 'Spend for Selected Period', MAX(T3.CreateDate) AS 'Last Visit Date'
    FROM OCRD T0  left JOIN OINV T1 ON T0.CardCode = T1.CardCode LEFT JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode LEFT JOIN OSCL T3 ON T0.CardCode = T3.Customer
    where  T0.[CardCode] = 'wyk027/34' AND T1.DocDate BETWEEN '20090101' AND '20091231'
    GROUP BY T0.[CardCode], T0.[CardName],T0.[MailCity], T2.[SlpName]
    Thanks
    Steve

    Hi ,
    It seems like each invoice row is duplicated once per matching service call in OSCL, which would explain a sum that is exactly 11 times too high. Moving MAX(T3.CreateDate) into a scalar subquery takes OSCL out of the join.
    Try this one:
    SELECT T0.CardCode, T0.CardName,T0.MailCity AS 'Town', T2.SlpName AS 'Rep',
    SUM(T1.DocTotal - T1.VatSum) AS 'Spend for Selected Period', (select MAX(T3.CreateDate) from OSCL T3 where T0.CardCode = T3.Customer) AS 'Last Visit Date'
    FROM OCRD T0 inner JOIN OINV T1 ON T0.CardCode = T1.CardCode inner JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode
    where T0.CardCode = '10001' AND T1.DocDate BETWEEN '20090101' AND '20091231'
    GROUP BY T0.CardCode, T0.CardName,T0.MailCity, T2.SlpName
    Best regards,
    Maya

  • Aggregation giving unstable results

    Hi all,
    I am facing a problem with aggregation in OWB (not just in the tool, but in the Oracle database as well).
    When I aggregate a huge amount of data, the aggregation result is not correct. But when I aggregate only the subset of data where the mismatch occurs, it aggregates properly.
    I get the same result if I write the query directly in the database as well. The mismatch does not happen for the same subset of data every time; it varies.
    For smaller amounts of data, the aggregation happens properly.
    If anyone encountered this type of problem Kindly help
    Regards,
    Arjun Jkoshi

    OK, I have worked on Oracle for a good few years, with data worth millions and billions, and never had any problems.
    The first thing I would suggest is: re-visit your DB design.
    If your DB is going to have records in the millions or crores, then it might be worth setting a precision and scale, i.e.
    if your column is specified as 1 unit = 1,000,000,
    then 3,500,000 = 3.5, so your DB should store the value as 3.5. This not only saves space but is also very easy for calculation purposes.
    This is how enterprises normally work.
    Secondly, even if you use the whole number, I have never come across a scenario where Oracle does not aggregate correctly; this can only happen if you have a join, a Cartesian product in your query, or data inconsistency.
    If you are still sure there is no problem with your query, then raise an SR on Metalink, because this could be a serious issue if it is genuinely a problem, which I reckon it is not.
    Edited by: Darthvader-647181 on Dec 15, 2008 4:43 AM

  • Same query giving different results

    Hi
    I'm surprised to see this behaviour from Oracle. I have two different sessions for the same schema on the same server. In both sessions the same query returns different results. The query involves some calculations, like sums and divisions, on a number field.
    I imported this data from another server using the export/import utility that comes with the 9i server. Before the export everything was going fine. Is there some problem with this utility?
    I'm using Developer 6i as the front end for my client/server application. The behaviour of my application is very surprising: once it shows the correct data, and if I close the screen and reopen it, it shows wrong data.
    I'm really stuck with this abnormal behaviour. Please tell me the possibilities and corrective actions for these conditions.
    Regards
    Asad.

    There is nothing uncommitted in either session, but different results are still returned.
    I am sending you the exact query and the result returned in both sessions.
    Session 1:
    SQL> rollback;
    Rollback complete.
    SQL> SELECT CC.CREDIT_HRS,GP.GRADE_PTS
    2 FROM GRADE G, COURSE_CODE CC, GRADE_POLICY GP
    3 WHERE G.COURSE_CDE=CC.COURSE_CDE
    4 AND G.SELECTION_ID=45 AND G.GRADE_TYP=GP.GRADE_TYP
    5 AND G.TERM_PROG_ID=17 AND GP.TERM_ID=14
    6 /
    CREDIT_HRS GRADE_PTS
    3 4
    4 3.33
    4 3.33
    3 4
    3 4
    3 4
    3 4
    7 rows selected.
    SQL>
    SESSION 2:
    SQL> rollback;
    Rollback complete.
    SQL> SELECT CC.CREDIT_HRS,GP.GRADE_PTS
    2 FROM GRADE G, COURSE_CODE CC, GRADE_POLICY GP
    3 WHERE G.COURSE_CDE=CC.COURSE_CDE
    4 AND G.SELECTION_ID=45 AND G.GRADE_TYP=GP.GRADE_TYP
    5 AND G.TERM_PROG_ID=17 AND GP.TERM_ID=14
    6 /
    CREDIT_HRS GRADE_PTS
    3 4
    4 3.33
    3 4
    3 4
    3 4
    3 4
    6 rows selected.
    SQL>
    You can see that in session 1 seven rows are returned, while in session 2 six rows are returned. I issued a rollback before the query to be sure that the data in both sessions is the same.
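    One standard diagnostic in this situation, offered as a hedged suggestion rather than something from the thread: when the same query returns different row counts in different sessions with no uncommitted changes, check whether the indexes on the joined tables still agree with the table data, since a corrupt index can return different rows depending on which access path the optimizer picks.
    -- Table names taken from the query above; run during a quiet period.
    ANALYZE TABLE grade        VALIDATE STRUCTURE CASCADE;
    ANALYZE TABLE course_code  VALIDATE STRUCTURE CASCADE;
    ANALYZE TABLE grade_policy VALIDATE STRUCTURE CASCADE;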

  • Mac Pro not recognizing bootable DVDs and Disk Utility giving odd results.

    My issue is complex, but I'll try my best to explain it as I can.
    One has been resolved it seems, but I am including it so that the whole issue can be seen in context.
    Friday, May 16, 2014
    Last night, after rebooting my Mac Pro from my Bootcamp partition (I'm using Windows 7 Professional if that information is helpful) I received a kernel panic upon Mac OS X booting (Mac OS X 10.6.8).
    My first action was to launch Disk Utility to verify the Mac HD, verify was stopped by disk utility citing that I should insert my Mac OS X install DVD and repair the disk.
    I tried booting from my Snow Leopard install DVD. The grey Apple logo appeared with the gear spinning below it and remained for roughly ten to fifteen seconds (a little long) — then the Apple logo changed to a grey prohibitory sign (circle with a diagonal line through it), while the spinning gear remained. — I tried booting from the Snow Leopard DVD a few times, same result.
    Following this I tried booting using Disk Warrior, same result again.
    I tried booting both the Snow Leopard and Disk Warrior DVDs in both my upper and lower optical drives, with no change.
    I decided to leave the issue and call Apple Support in the morning
    Saturday, May 17, 2014
    This morning I woke my Mac Pro from sleep, opened Disk Utility and tried verifying the hard drive to see if it was temporary — same result, 'please insert the Mac OS X install DVD and repair the drive.' — I also tried booting the install DVD again with no result.
    I then booted my Mac into Safe Mode to check my hardware.
    Upon opening Disk Utility and verifying my Mac HD, the result was "Macintosh HD appears to be OK". I ran the test again to see if this was an anomaly, but Disk Utility returned another pass.
    I performed a normal restart of my system; although Finder was a little slower to load than normal, the system booted correctly. — I then launched Disk Utility and verified the Mac HD; it returned another pass.
    This confused me. After a call to Apple Support, the tech explained that 'sometimes a Safe Mode boot will fix problems because it disables all non-essential processes when booting.' — That makes sense to me.
    However.
    The issue of my Mac not recognizing bootable DVDs remains — it reads disks correctly, it just will not boot them.
    Other steps I have taken to try and resolve this remaining issue.
    1. a PRAM reset. — No change.
    2. an SMC reset. — No change.
    3. Removal of newly installed RAM. — I have tried both running the old and new RAM separately, no change using either combination.
    4. Running bootable DVDs in different optical bays. — No change from both bays.
    a. My upper drive is an MCE Blu-Ray/DVD combo drive, but I have booted from this drive before. (Less than six months old.)
    b. My lower drive is LG DVD-RAM combo drive, I have also booted from this drive before. (Over one year old, replacement for an Apple optical drive.)
    None of the above steps have helped, I still cannot boot from my optical drives — I always receive a prohibitory sign shortly after the Apple logo.
    Other system information:
    Mac OS X 10.6.8
    8GB of RAM (4 x 2GB)
    500GB Western Digital Caviar Blue HDD (less than a year old, boot drive for Mac OS X 10.6.8)
    1TB Samsung HDD (for data storage and also containing Bootcamp partition)
    2TB HDD (cannot remember manufacturer, contains Time Machine backups, as well as data storage)
    ATI Radeon HD 5770 with 1GB of VRAM
    Sorry for the long post but I really need to be able to boot from my optical drives.

    Update: found an old Mountain Lion installation USB flash drive I created a while back.
    I am able to boot from the ML USB flash drive, but am still unable to boot from either my Snow Leopard install DVD or my Disk Warrior DVD. — I have verified both DVDs and they pass verification tests.
    This leads me to believe that the problem resides either in the Mac, or in BOTH  the Snow Leopard and Disk Warrior DVDs.
    Since I am able to boot from a USB drive, I will look into the possibility of creating a bootable Snow Leopard drive, and perhaps also a Disk Warrior drive — at least until I can resolve the 'not able to boot from DVD' issue.
