SSAS source query: simple query with never-ending refresh time

Hello...
A very simple query based on an SSAS cube does not return an answer. :-(
I started a trace to see the exact MDX statement transmitted to SSAS and ran it in a query editor; the result is... immediate!
Is there any clue to explain such behavior? The cube metadata definition?
Any help is welcome.
===========
Version: 2.21.3974.242

Hello,
Find below the capture of the trace; I cannot see any references to rowsets or cellsets...
Thank you for your help.
select {[Measures].[Fact Count]} on 0,
nonempty(
  crossjoin(
    [Invoice Attributes].[Scenario].[Scenario].allmembers,
    [Period Date].[Month].[Month].allmembers,
    [Period Date].[Quarter].[Quarter].allmembers,
    [Period Date].[Year].[Year].allmembers,
    [Product Line Techno].[Product Line Code].[Product Line Code].allmembers
  ),
  {[Measures].[Fact Count]}
) properties member_caption, member_unique_name on 1
from (
  select (
    [Invoice Attributes].[Scenario].&[ACTUALS],
    [Period Date].[Month].&[20150101],
    [Product Line Techno].[Product Line Code].&[EB]
  ) on 0
  from [Revenue Detailed]
)
cell properties value
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
          <Catalog>OLAP12</Catalog>
          <SspropInitAppName>Microsoft SQL Server Management Studio - Query</SspropInitAppName>
          <LocaleIdentifier>1036</LocaleIdentifier>
          <ClientProcessID>5992</ClientProcessID>
          <Format>Native</Format>
          <AxisFormat>TupleFormat</AxisFormat>
          <Content>SchemaData</Content>
          <Timeout>0</Timeout>
          <DbpropMsmdActivityID>284bd120-34f1-4fac-83cb-e8cf443261ad</DbpropMsmdActivityID>
          <DbpropMsmdRequestID>f11764e9-b092-41ea-aae6-a91763960eae</DbpropMsmdRequestID>
        </PropertyList>

Similar Messages

  • Never ending "unknown time remaining"  message after submit

    Has anyone reached a satisfactory resolution on this issue w/ Compressor? What do Apple's techs have to say about it? It's not like Apple to force their users to deal with major bugs and not chime in on the situation. This is more like the sort of garbage that I always hear Windows users complaining about.

    Ok, no resolution from within Compressor exists as far as I can see. Here is what I suggest you do...
    Compressor is the biggest pain in the butt in the Suite. I don't see it as a conflict that can be solved within the FCP Suite; it exists between the OS and the program. So having said that, I will say this: every time I have an issue with Compressor (which is quite often), I throw my hands up, grab my OS disk, and do an Archive and Install. This resets the OS AND the FCP Suite. Of course you have to do every freaking update again, but it resets everything from the ground up and, badaboom, it's like a new install of the FCP Suite. You will have to put your user license back on too. You will have to tell your OS disk to do a custom install, meaning you only need essential programs installed, not languages, etc. The whole process will take about an hour and a half, depending on your net speed for the updates...
    But sometimes "I got no fight as long as any man does what he's told...."

  • SQL Server Multiple JOINS with Table Value Function - query never ends

    I have a query with 4 joins using a table-valued function to get the data, and when I execute it the query never ends.
    Issue Details 
    - Table value function 
        CREATE FUNCTION [dbo].[GetIndicator]
        (
          @indicator varchar(50),
          @refDate datetime
        )
        RETURNS TABLE
        AS
        RETURN
        SELECT
          T1.Id, T1.ColINT_1, T1.ColNVARCHAR_1 COLLATE DATABASE_DEFAULT AS ColNVARCHAR_1,
          T1.ColNVARCHAR_2, T1.ColSMALLDATETIME_1, T1.ColDECIMAL_1
        FROM TABLE2 T2
        JOIN TABLE3 T3
          ON T2.COLFKT3 = T3.Id
          AND T3.ReferenceDate = @refDate
          AND T3.State != 'Deleted'
        JOIN TABLE4 T4
          ON T2.COLFKT4 = T4.Id AND T4.Name = @indicator
        JOIN TABLE1 T1
          ON T2.COLFKT1 = T1.Id
    - Query
        DECLARE @RefDate datetime
    SET @RefDate = '30 April 2014 23:59:59'
    SELECT DISTINCT OTHERTABLE.Id As Id
    FROM 
    GetIndicator('ID#1_0#INDICATOR_X',@RefDate) AS OTHERTABLE
    JOIN GetIndicator('ID#1_0#INDICATOR_Y',@RefDate) AS YTABLE  
    ON OTHERTABLE.SomeId=YTABLE.SomeId
    AND OTHERTABLE.DateOfEntry=YTABLE.DateOfEntry
    JOIN GetIndicator('ID#1_0#INDICATOR_Z',@RefDate) AS ZTABLE
    ON OTHERTABLE.SomeId=ZTABLE.SomeId
    AND OTHERTABLE.DateOfEntry=ZTABLE.DateOfEntry
    JOIN GetIndicator('ID#1_0#INDICATOR_W',@RefDate) AS WTABLE 
    ON OTHERTABLE.SomeId=WTABLE.SomeId
    AND OTHERTABLE.DateOfEntry=WTABLE.DateOfEntry
    JOIN GetIndicator('ID#1_0#INDICATOR_A',@RefDate) AS ATABLE  
    ON OTHERTABLE.SomeId=ATABLE.SomeId
    AND OTHERTABLE.DateOfEntry=ATABLE.DateOfEntry
    Other details:
    - SQL server version: 2008 R2
    - If I execute the table function code outside the query, with the same arguments, the execution time is less than 1s.
    - Each table function call returns between 250 and 500 rows.

    Hi,
    Calling a function is generally costly, and joining against a function 5 times is definitely not efficient.
    1. You can populate the results for all parameters into a CTE, table variable, or temporary table and join against that (instead of the function) for the different parameters (see the sketch after this reply).
    2. It looks like you want to fetch the IDs falling under the different indicators for the same @RefDate. You can try something like this:
    WITH CTE
    AS
    (
    SELECT
        T1.Id, T1.ColINT_1, T1.ColNVARCHAR_1 COLLATE DATABASE_DEFAULT AS ColNVARCHAR_1,
        T1.ColNVARCHAR_2, T1.ColSMALLDATETIME_1, T1.ColDECIMAL_1, T4.Name
    FROM TABLE2 T2
    JOIN TABLE3 T3
        ON T2.COLFKT3 = T3.Id
        AND T3.ReferenceDate = @RefDate
        AND T3.State != 'Deleted'
    JOIN TABLE4 T4
        ON T2.COLFKT4 = T4.Id
    JOIN TABLE1 T1
        ON T2.COLFKT1 = T1.Id
    )
    SELECT * FROM CTE
    WHERE Name IN ('ID#1_0#INDICATOR_X', 'ID#1_0#INDICATOR_Y', 'ID#1_0#INDICATOR_Z', 'ID#1_0#INDICATOR_W', 'ID#1_0#INDICATOR_A');
    Or you can simplify it further depending on your requirement.
    Regards,
    Brindha.
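    A minimal sketch of option 1 using a temporary table (table and column names are the placeholders from the post; SomeId and DateOfEntry are assumed to be among the columns GetIndicator returns, as in the original outer query, and the data types are guesses):

        -- Call GetIndicator once per indicator, cache the rows in a temp table,
        -- then run the multi-way match against the cached rows instead of the function.
        DECLARE @RefDate datetime;
        SET @RefDate = '30 April 2014 23:59:59';

        CREATE TABLE #ind
        (
            Indicator   varchar(50),
            Id          int,
            SomeId      int,
            DateOfEntry datetime
        );

        INSERT INTO #ind (Indicator, Id, SomeId, DateOfEntry)
        SELECT 'ID#1_0#INDICATOR_X', Id, SomeId, DateOfEntry FROM GetIndicator('ID#1_0#INDICATOR_X', @RefDate);
        INSERT INTO #ind (Indicator, Id, SomeId, DateOfEntry)
        SELECT 'ID#1_0#INDICATOR_Y', Id, SomeId, DateOfEntry FROM GetIndicator('ID#1_0#INDICATOR_Y', @RefDate);
        -- ...repeat for INDICATOR_Z, INDICATOR_W and INDICATOR_A...

        SELECT DISTINCT x.Id
        FROM #ind x
        JOIN #ind y
          ON y.SomeId = x.SomeId AND y.DateOfEntry = x.DateOfEntry AND y.Indicator = 'ID#1_0#INDICATOR_Y'
        -- ...one extra self-join per remaining indicator, mirroring the original query...
        WHERE x.Indicator = 'ID#1_0#INDICATOR_X';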

  • 11.2.2.4.0 - Problem with temporary space in simple query

    ttVersion
    TimesTen Release 11.2.2.4.0 (64 bit Linux/x86_64) (timesten:53396) 2012-09-24T08:28:05Z
    Instance admin: root
    Instance home directory: /opt/TimesTen/timesten
    World accessible
    Daemon home directory: /var/TimesTen/timesten
    I get "TT0802: Database temporary space exhausted" error in simple query with small data amount; Timesten try to allocate *40000312* bytes
    describe adm.peer
    Table ADM.PEER:
    Name Null Type
    PEER_ID NOT NULL TT_SMALLINT
    CLUSTER_ID NOT NULL TT_TINYINT
    DIALECT NOT NULL TT_INTEGER
    HOST NOT NULL TT_VARCHAR(256 BYTE)
    REALM NOT NULL TT_VARCHAR(256 BYTE)
    ADDRESS TT_VARCHAR(256 BYTE)
    PORT NOT NULL TT_INTEGER
    PROTOCOL NOT NULL TT_INTEGER
    AUTO_CONNECT NOT NULL TT_TINYINT
    ENABLED NOT NULL TT_TINYINT
    PRIORITY NOT NULL TT_TINYINT
    MANDATORY NOT NULL TT_TINYINT
    TSTAMP BINARY(8)
    1 rows selected
    describe adm.session
    Table ADM.SESSION:
    Name Null Type
    SESSION_ID NOT NULL TT_VARCHAR(64 BYTE) inline
    OBJ_ID NOT NULL TT_BIGINT
    PR_OBJ_ID NOT NULL TT_BIGINT
    SUBSCRIBER_ID NOT NULL TT_VARCHAR(32 BYTE) inline
    IP NOT NULL TT_VARCHAR(15 BYTE) inline
    IPV6_PREFIX TT_VARCHAR(39 BYTE) inline
    IPV6_PREFIX_LEN NOT NULL TT_TINYINT
    CREATE_TIME NOT NULL TT_TIMESTAMP
    UPDATE_TIME NOT NULL TT_TIMESTAMP
    RULES_SET_ID NOT NULL TT_BIGINT
    PEER_ID NOT NULL TT_SMALLINT
    MY_PEER_ID NOT NULL TT_SMALLINT
    PROFILE_HASHC NOT NULL TT_BIGINT
    FLAGS NOT NULL TT_INTEGER
    QOS_POLICY_NAME NOT NULL TT_VARCHAR(32 BYTE) inline
    BSID NOT NULL TT_BIGINT
    CONGESTION_FLAG NOT NULL TT_TINYINT
    SERVICE_CATEGORY_ID TT_VARCHAR(32 BYTE) inline
    EVENT_CAUSE NOT NULL TT_TINYINT
    EVENT_TIME TT_TIMESTAMP
    TSTAMP BINARY(8)
    1 rows selected
    select * from adm.peer;
    PEER_ID CLUSTER_ID DIALECT HOST REALM ADDRESS PORT PROTOCOL AUTO_CONNECT ENABLED PRIORITY MANDATORY TSTAMP
    21 2 0 ddf1.server.com diameter.realm ddf1.server.com 3868 6 1 1 0 1 (null)
    22 2 0 ddf2.server.com diameter.realm ddf2.server.com 3868 6 1 1 1 1 (null)
    101 233 0 peer_101 testik.com peer_101.testik.com 3886 0 0 1 101 0 (null)
    102 233 0 peer_102 testik.com peer_102.testik.com 3886 0 0 1 102 0 (null)
    1 1 0 vr-t500.testik.com diameter.realm vr-t500.testik.com 3868 6 1 1 0 1 (null)
    5 rows selected
    select * from adm.session;
    SESSION_ID OBJ_ID PR_OBJ_ID SUBSCRIBER_ID IP IPV6_PREFIX IPV6_PREFIX_LEN CREATE_TIME UPDATE_TIME RULES_SET_ID PEER_ID MY_PEER_ID PROFILE_HASHC FLAGS QOS_POLICY_NAME BSID CONGESTION_FLAG SERVICE_CATEGORY_ID EVENT_CAUSE EVENT_TIME TSTAMP
    TEST_SESSION 13300000000020027 0 TEST_SUBSCRIBER 94.25.209.27 0 2012-10-18 12:56:07.155381000 2012-10-18 12:56:07.155381000 1 101 1 0 0 0 0 DEFAULT 0 (null) (null)
    TEST_SESSION2 13300000000020028 13300000000020027 TEST_SUBSCRIBER 94.25.209.27 0 2012-10-18 12:56:07.155687000 2012-10-18 12:56:07.155687000 1 102 1 0 4 0 0 DEFAULT 0 (null) (null)
    2 rows selected
    This query fails:
    SELECT p.address, count(*) as session_count from session s, peer p where p.peer_id = s.peer_id group by p.address;
    TT0802: Database temporary space exhausted
    dssize
    PERM_ALLOCATED_SIZE:     307200.0
    PERM_IN_USE_SIZE:     61763.0
    PERM_IN_USE_HIGH_WATER:     69393.0
    TEMP_ALLOCATED_SIZE:     37888.0
    TEMP_IN_USE_SIZE:     13494.0
    TEMP_IN_USE_HIGH_WATER:     21307.0
    This is additional error info when this code is executed from C code:
    [TimesTen][TimesTen 11.2.2.4.0 ODBC Driver][TimesTen]TT0802: Database temporary space exhausted -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"
    ODBC Error/Warning = S1000, Additional Error/Warning = 802
    [TimesTen][TimesTen 11.2.2.4.0 ODBC Driver][TimesTen]TT6221: Temporary data partition free space insufficient to allocate *40000312* bytes of memory -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"
    ODBC Error/Warning = S1000, Additional Error/Warning = 6221
    Edited by: Vladimir Romanov on 18.10.2012 13:13
    Edited by: Vladimir Romanov on 18.10.2012 13:51

    This may well be
    Bug 14634954 - SELECT WITH GROUP BY REQUESTS LARGE TEMP MEMORY GETS TT0802 / TT6221
    The bug is fixed in 11.2.2.4.1, which is hopefully due before the end of October. Can you run your test on 11.2.1 as well? The problem should not reproduce there, as it is specific to 11.2.2.
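    As a side note (not from the reply): the failing allocation of ~40 MB is larger than the ~37 MB TEMP_ALLOCATED_SIZE reported by dssize, so until the fixed release is available, checking and, if needed, raising the TempSize first-connection attribute is a plausible interim workaround. A minimal check from ttIsql:

        -- Lists the attributes of the connected database, including TempSize
        -- (the temporary data partition size in MB). TempSize itself is changed
        -- in the DSN definition (sys.odbc.ini) and takes effect after the
        -- database is unloaded from and reloaded into memory.
        call ttConfiguration;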

  • Query SQL datastore with XML where clause source

    Hope I am in the right place. I'm new to BusinessObjects Data Services Designer... I have created XML schemas and added one to the page as an XML source in. I mapped a test XML file and all is well there. I have added a query that grabs the XML.
    I then need to query the MS SQL datastore and use the data from the XML query as the where clause. How is this done? Or do I put a query on the datastore for all the data in a table and then do another query filtering one with the other? That seems like it would be rather heavy and low performance. The results will then be sent back out as XML (schema and test file already set up as an XML out).
    Thanks!

    Thanks for the tips.
    I'm trying to implement this option, using your ViewDefHelper.
    I'm running into a problem though. After I create my dynamic View Object using a ViewDef, I need to create some view links.
    So I get the AttributeDefs of the columns (source and destination) from the method findAttributeDef (which is the whole point about performance in my post). This method returns the correct AttributeDef, but when I create the view link with the method createViewLinkBetweenViewObjects(java.lang.String vlName,
    java.lang.String accessorName,
    ViewObject master,
    AttributeDef[] srcAttrs,
    ViewObject detail,
    AttributeDef[] destAttrs,
    java.lang.String assocClause)
    My destination query is generating the where clause as:
    null = ?
    Any ideas what I'm doing wrong?
    Thanks again.
    John.

  • Simple query with like return wrong result

    Hi,
    I run a simple query with LIKE.
    If I use a bind parameter I get wrong results.
    If I use the query without a parameter the results are OK.
    My script:
    ALTER SESSION SET NLS_SORT=BINARY_CI;
    ALTER SESSION SET NLS_COMP=LINGUISTIC;
    -- drop table abcd;
    create table abcd (col1 varchar2(10));
    INSERT INTO ABCD VALUES ('122222');
    insert into abcd values ('111222');
    SELECT * FROM ABCD WHERE COL1 LIKE :1; -- wrong result with value 12%
    COL1
    122222
    *111222*
    select * from abcd where col1 like '12%'; -- result ok
    COL1
    122222
    I use Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    and query run in Oracle SQL Developer 3.1.07.

    Hi,
    welcome to the forum.
    When you put some code please enclose it between two lines starting with {noformat}{noformat}
    i.e.:
    {noformat}{noformat}
    SELECT ...
    {noformat}{noformat}
    You should specify exactly how you run your code.
    If I run this statement in SQL*Plus:
    SQL> ALTER SESSION SET NLS_SORT=BINARY_CI;
    Session altered.
    SQL> ALTER SESSION SET NLS_COMP=LINGUISTIC;
    Session altered.
    SQL>
    SQL> -- drop table abcd;
    SQL> create table abcd (col1 varchar2(10));
    Table created.
    SQL>
    SQL> INSERT INTO ABCD VALUES ('122222');
    1 row created.
    SQL> insert into abcd values ('111222');
    1 row created.
    SQL>
    SQL> SELECT * FROM ABCD WHERE COL1 LIKE :1;
    SP2-0552: Bind variable "1" not declared.
    SQL>
    I got this error, so I wonder how you set the value 12%.
    Please specify exactly how you run your test as we cannot reproduce your problem.
    Regards.
    Al
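    For completeness, a minimal way to supply the bind value in SQL*Plus (a sketch; the bind is renamed :p1 because SQL*Plus cannot declare a variable literally named 1):

        VARIABLE p1 VARCHAR2(10)
        EXEC :p1 := '12%'

        ALTER SESSION SET NLS_SORT=BINARY_CI;
        ALTER SESSION SET NLS_COMP=LINGUISTIC;

        -- Compare the bound and the literal versions of the same predicate
        SELECT * FROM abcd WHERE col1 LIKE :p1;
        SELECT * FROM abcd WHERE col1 LIKE '12%';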

  • Trouble with a simple Query

    Hello, I'll start by saying that I am a noob. Anyways, I am trying to do what I thought would be a simple query to get records that are greater than or equal to the current date: this is my query...
    <cfquery name="getUpcoming" datasource="events">
    SELECT title, eventDate FROM event WHERE eventDate >= #Now()# ORDER BY eventDate ASC
    </cfquery>
    It works, sort of: I do get records that are greater than the current date, but any records that are equal to it do not show up.
    I am assuming that it is looking at the time as well, or I am doing it completely wrong, I don't know. Any help would be greatly appreciated.

    I didn't use the cfqueryparam as suggested, is there something dangerous about doing it this way?
    Nothing dangerous, no.  Just "less than ideal" (in a sloppy / lazy sort of way).  As I suggested, one should not hard-code dynamic values into the SQL string; one should pass them as parameters.  It's just "the way it should be done".
    When the DB receives your SQL string (with the dynamic values hard-coded), the DB engine needs to compile the SQL to make an execution plan before executing the query.  Any change to the SQL string requires recompilation.  However, if you pass your values as parameters, then the SQL does not need to be recompiled.
    It's the same sort of thing as not using global variables unless one has to, despite the fact they're "easier", or duplicating code instead of refactoring code.  One should try to write decent code.
    Adam
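    A minimal sketch of the parameterised version Adam describes (an illustration, not from the thread; cf_sql_date is used on the assumption that, per the original question, the time portion should be ignored):

        <cfquery name="getUpcoming" datasource="events">
            SELECT title, eventDate
            FROM event
            WHERE eventDate >= <cfqueryparam value="#Now()#" cfsqltype="cf_sql_date">
            <!--- cfqueryparam binds the value instead of hard-coding it into the SQL string --->
            ORDER BY eventDate ASC
        </cfquery>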

  • Help with a simple query

    Hi ,
    I'm running the following simple query in SQL*Plus on Oracle 9i. But this query stopped running after 30 minutes, and SQL*Plus died at the same time. I have no idea about this. Could somebody tell me how I can solve this problem? Thank you very much for your help.
    Select Distinct PERSADDRUSE. ADDRUSECD as "Application", PERS.PERSNBR as "Account",
    (PERS.FIRSTNAME || ' '|| PERS.MDLINIT ||' ' || PERS.LASTNAME ) as "Name1",' 'as "Name2",' 'as "Name3",
    AL1.TEXT as "Address1",AL2.TEXT as "Address2",AL3.TEXT as "Address3",
    (ADDR.CITYNAME ||' ' || ' '||ADDR.STATECD ||' '||ADDR.ZIPCD||' '|| ADDR.ZIPSUF) as "CityStateZip"
    From PERSADDRUSE
    Join PERS
    ON PERS.PERSNBR = PERSADDRUSE.PERSNBR
    --AND  PERS.ADDDATE = '12-JAN-2005'
    AND PERSADDRUSE.ADDRUSECD = 'PRI'
    join ADDR
    ON PERSADDRUSE.ADDRNBR = ADDR.ADDRNBR
    left JOIN ADDRLINE AL1
    ON ADDR.ADDRNBR = AL1.ADDRNBR
    AND AL1.LINENBR = 1
    left JOIN ADDRLINE AL2
    ON ADDR.ADDRNBR = AL2.ADDRNBR
    AND AL2.LINENBR = 2
    left JOIN ADDRLINE AL3
    ON ADDR.ADDRNBR = AL3.ADDRNBR
    AND AL3.LINENBR = 3;

    Thanks for the reply. I have another query running for 45m and it seems fine. The following is the explain plan I printed out. I'm new to PL/SQL. Could you guys give me some other ideas?
    DBMS_XPLAN.DISPLAY() (PLAN_TABLE_OUTPUT, recoverable fragment):
    | .. | TABLE ACCESS FULL | PERSADDRUSE |  5726 | 68712 | 183 |
    |  8 | TABLE ACCESS FULL | PERS        |  161K | 2839K | 431 |
    |  9 | TABLE ACCESS FULL | ADDR        |  239K | 5145K | 298 |
    | 10 | TABLE ACCESS FULL | ADDRLINE    | 82087 | 1683K | 240 |
    | 11 | TABLE ACCESS FULL | ADDRLINE    | 82087 | 1683K | 240 |
    | 12 | TABLE ACCESS FULL | ADDRLINE    | 82087 | 1683K | 240 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version

  • Simple Query working on 10G and not working on 11gR2 after upgrade

    Hi Folks,
    This is the first time I am posting a question in this forum.
    I have a small issue which is preventing the UAT sign-off.
    A simple query works fine on 10.2.0.4, and after the upgrade to 11.2.0.1 it errors out.
    10.2.0.4:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    COUNT(*)
    1
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001;
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    ATTRIBUTE1
    00001
    11.2.0.1:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='1';
    no rows selected
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    ATTRIBUTE1
    00001
    ++++++++++++++++++++++++++++++++++++++++++++++
    SQL > desc APPS.HZ_PARTIES
    Name Type
    ======== ======
    ATTRIBUTE1 VARCHAR2(150)
    ++++++++++++++++++++++++++++++++++++++++++++++
    Changes:
    Recently I upgraded the DB from 10.2.0.4 to 11.2.0.1.
    Questions:
    1. If the type of that column is VARCHAR2, why does it work in 10.2.0.4 and not in 11.2.0.1?
    2. After the upgrade I analyzed the tables with "analyze table" statements for all AP, AR, GL, HR, BEN, APPS schemas -- does running analyze table have an impact here?
    Please provide answers to the above two questions, or a reference to a document would also be enough to understand. Based on the answer the client will sign off today.
    Thanks,
    P Kumar

    WhiteHat wrote:
    The issue has already been identified: in Oracle versions prior to 11, there was an implicit conversion of numbers to characters. Your database has a character field which you are attempting to compare to a number.
    I.e. the string '000001' is not in any way equivalent to the number 1, but Oracle 10 converts '000001' to a number because you are asking it to compare against the number you have provided.
    Version 11 doesn't do this anymore (and rightly so).
    The issue is with the bad code design. You can either use characters in the predicate (where field = 'parameter') or do a conversion of the field prior to comparing (where to_number(field) = parameter).
    I would suggest that you fix your code and don't assume that '000001' = 1.

    I don't think that the above is completely correct, and a simple demonstration will show why. First, a simple table on Oracle Database 10.2.0.4:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    A select from the above table, relying on implicit data type conversion:
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    Technically, the second row should not have been returned as an exact match. Why was it returned? Let's take a look at the actual execution plan:
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    Notice that the VARCHAR2 column was converted to a NUMBER, so if there was any data in that column that could not be converted to a number (or NULL), we should receive an error (unless the bad rows are already removed by another predicate in the WHERE clause). For example:
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    Now the same test on Oracle Database 11.1.0.7:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    As you can see, exactly the same actual execution plan, and the same end result.
    The OP needs to determine whether non-numeric data now exists in the column. Was the database character set possibly changed during/after the upgrade?
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Simple query optimization

    Hi,
    I'm using Oracle 10g r2.
    I have this simple query that seems to take too much time to execute :
    DECLARE
         nb_mesures INTEGER;
         min_day DATE;
         max_day DATE;
    BEGIN
         SELECT
              COUNT(meas_id),
              MIN(meas_day),
              MAX(meas_day)
         INTO
              nb_mesures,
              min_day,
              max_day
         FROM
              geodetic_measurements gm
              INNER JOIN
              operation_measurements om
              ON gm.meas_id = om.ogm_meas_id
         WHERE ogm_op_id = 0;
         htp.p(nb_mesures||' measurements from '||min_day||' to '||max_day);
         END;
    - Tables (about 11,000 records for the "Operations" table, and 800,000 for the two others):
    "Operation_measurements" is the table who makes the link between the 2 others (get the 2 keys).
    SQL> DESCRIBE OPERATIONS
    Nom                  NULL     Type
    OP_ID                NOT NULL NUMBER(7)
    OP_PARENT_OP_ID               NUMBER(7)
    OP_RESPONSIBLE       NOT NULL VARCHAR2(10)
    OP_DESCRIPT                   VARCHAR2(80)
    OP_VEDA_NAME         NOT NULL VARCHAR2(10)
    OP_BEGIN             NOT NULL DATE
    OP_END                        DATE
    OP_INSERT_DATE                DATE
    OP_LAST_UPDATE                DATE
    OP_INSERT_BY                  VARCHAR2(50)
    OP_UPDATE_BY                  VARCHAR2(50)
    SQL> DESCRIBE OPERATION_MEASUREMENTS
    Nom                  NULL     Type
    OGM_MEAS_ID          NOT NULL NUMBER(7)
    OGM_OP_ID            NOT NULL NUMBER(6)
    OGM_INSERT_DATE               DATE
    OGM_LAST_UPDATE               DATE
    OGM_INSERT_BY                 VARCHAR2(50)
    OGM_UPDATE_BY                 VARCHAR2(50)
    SQL> DESCRIBE GEODETIC_MEASUREMENTS
    Nom                  NULL     Type
    MEAS_ID              NOT NULL NUMBER(7)
    MEAS_TYPE            NOT NULL VARCHAR2(2)
    MEAS_TEAM            NOT NULL VARCHAR2(10)
    MEAS_DAY             NOT NULL DATE
    MEAS_OBJ_ID          NOT NULL NUMBER(6)
    MEAS_STATUS                   VARCHAR2(1)
    MEAS_COMMENT                  VARCHAR2(150)
    MEAS_DIRECTION                VARCHAR2(1)
    MEAS_DIST_MODE                VARCHAR2(2)
    MEAS_SPAT_ID         NOT NULL NUMBER(7)
    MEAS_INST_ID                  NUMBER(7)
    MEAS_DECALAGE                 NUMBER(8,5)
    MEAS_INST_HEIGHT              NUMBER(8,5)
    MEAS_READING         NOT NULL NUMBER(11,5)
    MEAS_CORRECT_READING          NUMBER(11,5)
    MEAS_HUMID_TEMP               NUMBER(4,1)
    MEAS_DRY_TEMP                 NUMBER(4,1)
    MEAS_PRESSURE                 NUMBER(4)
    MEAS_HUMIDITY                 NUMBER(2)
    MEAS_CONSTANT                 NUMBER(8,5)
    MEAS_ROLE                     VARCHAR2(1)
    MEAS_INSERT_DATE              DATE
    MEAS_LAST_UPDATE              DATE
    MEAS_INSERT_BY                VARCHAR2(50)
    MEAS_UPDATE_BY                VARCHAR2(50)
    MEAS_TILT_MODE                VARCHAR2(4000)
    - Explain plan (I'm not familiar with explain plans...):
    | Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    PLAN_TABLE_OUTPUT
    |   0 | SELECT STATEMENT              |                        |     1 |    19 |   256  (10)| 00:00:02 |
    |   1 |  SORT AGGREGATE               |                        |     1 |    19 |            |          |
    |   2 |   NESTED LOOPS                |                        |    75 |  1425 |   256  (10)| 00:00:02 |
    |*  3 |    TABLE ACCESS FULL          | OPERATION_MEASUREMENTS |    75 |   600 |    90  (27)| 00:00:01 |
    |   4 |    TABLE ACCESS BY INDEX ROWID| GEODETIC_MEASUREMENTS  |     1 |    11 |     3   (0)| 00:00:01 |
    |*  5 |     INDEX UNIQUE SCAN         | MEAS_PK_2              |     1 |       |     2  (50)| 00:00:01 |
    --------------------------------------------------------------------------------------------------------
    How can I optimize this query?
    Thanks.
    Yann.

    Looks like you are missing an FK-index on the middle table, for the FK going to OPERATIONS.
    Currently this:
    WHERE ogm_op_id = 0;
    is computed via a full table scan followed by a filter operation. Assuming OP_ID is rather selective, an index on OGM_OP_ID could do the trick here.
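    A minimal sketch of the suggested FK index (the index name is made up for illustration):

        -- Lets the filter on OGM_OP_ID use an index range scan instead of a full table scan
        CREATE INDEX om_ogm_op_id_ix ON operation_measurements (ogm_op_id);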

  • Simple query takes time to run

    Hi,
    I have a simple query which takes about 20 mins to run... here is the TKPROF for it:
      SELECT
        SY2.QBAC0,
        sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
      FROM
        JDE.F5542SY2  SY2,
        JDE.F42119  SALES_ORDER,
        JDE.F0116  SHIP_TO,
        JDE.F5542SY1  SY1,
       JDE.F4101  PRODUCT_INFO
    WHERE
        ( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN  )
        AND  ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD  )
        AND  ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM  )
        AND  ( SY2.QBSHAN=SALES_ORDER.SDSHAN  )
        AND  ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ')  )
        AND  ( PRODUCT_INFO.IMSRP1 Not In ('   ','000','689')  )
        AND  ( SALES_ORDER.SDDCTO IN  ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR')  )
        AND  (
        ( SY1.QACTR=SHIP_TO.ALCTR  )
    AND  ( PRODUCT_INFO.IMSRP1=SY1.QASRP1  )  )
      GROUP BY
      SY2.QBAC0
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       12     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 62 
    Rows     Row Source Operation
        131  SORT GROUP BY
    3535506   HASH JOIN 
    4026100    HASH JOIN 
        922     TABLE ACCESS FULL OBJ#(187309)
    3454198     HASH JOIN 
      80065      INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
    3489670      HASH JOIN 
      65192       INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
    3489936       PARTITION RANGE ALL PARTITION: 1 9
    3489936        TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
      97152    TABLE ACCESS FULL OBJ#(187308)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       13     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Kindly suggest how to resolve this...
    OS is windows and its 9i DB...
    Thanks

    > ... you want to get rid of the IN statements.
    They prevent Oracle from using the index.
    SQL> create table mytable (id,num,description)
      2  as
      3   select level
      4        , case level
      5          when 0 then 0
      6          when 1 then 1
      7          else 2
      8          end
      9        , 'description ' || to_char(level)
    10     from dual
    11  connect by level <= 10000
    12  /
    Table created.
    SQL> create index i1 on mytable(num)
      2  /
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'mytable')
    PL/SQL procedure successfully completed.
    SQL> set autotrace on explain
    SQL> select id
      2       , num
      3       , description
      4    from mytable
      5   where num in (0,1)
      6  /
                                        ID                                    NUM DESCRIPTION
                                         1                                      1 description 1
    1 row selected.
    Execution Plan
    Plan hash value: 2172953059
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |  5001 |   112K|     2   (0)| 00:00:01 |
    |   1 |  INLIST ITERATOR             |         |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| MYTABLE |  5001 |   112K|     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN | I1      |  5001 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("NUM"=0 OR "NUM"=1)
    Regards,
    Rob.

  • Simple query ... syntax error ?

    What is wrong with this simple sql statement ?
    SELECT Caseid
    FROM (SELECT DISTINCT Caseid, userid FROM Atts)
    I get this error
    Server: Msg 170, Level 15, State 1, Line 2
    Line 2: Incorrect syntax near ')'.

    In this case, it could go either way. Because the derived table is performing the select distinct anyway, the execution plans would be similar, but you'd see one extra step for selecting all rows from the derived table.
    Notice that the derived table has distinct caseid, userid, but the main query only selects caseid. If the extra IO required to transmit the unwanted userid column is more expensive than the extra step required to select just the caseid column out of the derived table, then the derived table is the better choice.
    In any case, the simpler query is usually the better option. Unless he actually wants a duplicate caseid for each userid, but not the userid itself, plain ol' SELECT DISTINCT makes more sense.
    "bregent" <[email protected]> wrote in
    message
    news:ejaad1$k2u$[email protected]..
    > >On the contrary, subqueries in the FROM clause are
    quite useful and can
    > >in
    > >the right situations perform much better than
    alternative queries.
    >
    > Ah, as usual you are correct. It looks like he forgot to
    add a DT alias
    > name
    > at the end of the statement which is causing the error.
    But in any case, a
    > simple select distinct would yield the same results and
    performance,
    > correct?
    >
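    For reference, a minimal sketch of the fix both replies point to, i.e. giving the derived table an alias (the alias name dt is arbitrary):

        SELECT Caseid
        FROM (SELECT DISTINCT Caseid, userid FROM Atts) AS dt
        -- or, as suggested, simply:
        SELECT DISTINCT Caseid FROM Atts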

  • Simple Query returns no result

    We have a problem with a simple query on an "old" table in our database. The table has the following structure:
    CREATE TABLE <table_name>
    (
        ROLE_ID INTEGER NOT NULL,
        ROLE_NAME VARCHAR (99) ascii NOT NULL,
        OBJECTDATA LONG BYTE,
        UNIQUE (ROLE_NAME)
    )
    The table contains two rows and the following queries give these results:
    select role_id, role_name from <schema>.<table_name>
    --> 2 rows
    select role_id, role_name from <schema>.<table_name>
    order by role_id
    --> 2 rows
    select role_id, role_name from <schema>.<table_name>
    order by role_name
    --> 0 rows  ?? confusion
    When we create a "new" table with the same structure and insert the same content into this new table, the queries work correctly.
    What happened to our "old" table so that these simple queries don't work anymore?
    (Database Kernel    7.6.05   Build 009-123-191-997)
    thx
    gerri
    Edited by: Gerfried on Jul 17, 2009 11:42 AM

    Ok, Gerfried sent me the dump file and this is what was in it:
    INV ROOT/LEAF 15857  perm  entries : 0        [block 0]
         bottom  : 81         filevers: dummy     convvers: 9
                                                  writecnt: 1
    00001      nodepage.pno: 15857             nodepage.pt : data
    00006      nodepage.pt2: inv               nodepage.chk: checksumData
    00008      nodepage.mde: empty
    08181      nd_checksum : 61937             nodepge2.pno: 15857
    08189      nodepge2.pt : data              nodepge2.pt2: inv
    08191      nodepge2.chk: checksumData
    08192      nodepge2.mde: empty
    00009      nd_bottom   : 81                nd_rec_cnt  : 0
    00017      nd_level    : 0
    00019      nd_filestate: empty
    00020      nd_sorted   : false             nd_root     : 15857/F13D0000
    00025      nd_right    : nil_pno           nd_left     : nil_pno
    00033      nd_last     : nil_pno           nd_conv_vers: 9
    00045      nd_str_vers : nil_pno           nd_file_vers: dummy
    00052      ndPageVersio: 0                 nd_inv_usage: 0
    00057      nd_leaf_cnt : 1                 nd_treeleavs: nil
    00065      nd_trans_id : nil               ndInvRoot   : nil_pno
    00077      nd_write_cnt: 1
    END OF FILE
    Obviously the reason for not delivering any data for the query is: this index is empty.
    See "nd_filestate: empty" !
    For some reason (which I really don't know, sorry about that) this index has not been maintained for a very long time.
    "nd_conv_vers: 9" tells us that it was the 9th savepoint that wrote down this page to the disks and that it had not been touched since then.
    To view the current converter version you may just use the db_restartinfo command in dbmcli.
    If you were a SAP customer the next thing I'd check would be with which database version these indexes had been created, what the internal file state is etc. - just to figure out the root cause.
    As such an analysis is basically not possible via this forum all I can propose is to look for those indexes (containing 0 entries although the table has more rows) and rebuild them.
    This statement should do the trick:
    select i.owner, i.indexname, i.tablename, if.*
    from files if join files tf  on if.primaryfileid=tf.fileid
                   join indexes i on if.fileid=i.fileid
    where if.entrycount =0
    and if.type ='INDEX'
    and tf.entrycount >0
    best regards,
    Lars

  • Que about Simple Query Dump.....

    Hiiii ABAPers....
    Herewith I come again with a question, and my question is:
    "While debugging a simple query with a where clause I am getting a dump... but when I set a breakpoint 2 or 3 statements later it executes fine without any dump."
    It has shocked me... and I am sitting with my hand on my head wondering how this is possible.
    Can anyone help me put my hand down from my head???
    Warm regards,
    Nirav Parekh

    Hi Nirav,
    I think you are placing the breakpoint between SELECT and ENDSELECT.
    If you place a breakpoint between SELECT and ENDSELECT, a database commit is triggered there while the cursor of the loop is still open; that may be the reason.
    That is why programmers are mostly advised not to place breakpoints between SELECT and ENDSELECT.
    In your case, just place the breakpoint after the ENDSELECT or before the SELECT.
    Regards, Rk
    Message was edited by:
            Rk Pasupuleti
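    To illustrate Rk's point, a minimal ABAP sketch (the standard SFLIGHT demo table is used purely as an example):

        DATA ls_flight TYPE sflight.

        SELECT * FROM sflight INTO ls_flight WHERE carrid = 'LH'.
          " A breakpoint here stops the program while the database cursor of the
          " SELECT ... ENDSELECT loop is still open; the debugger's implicit
          " database commit can invalidate that cursor and cause a dump.
        ENDSELECT.

        BREAK-POINT.  " Safer: break after ENDSELECT (or before the SELECT).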

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, and MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and we re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason: my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but as a Runtime Installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then ran as expected (fast).
    Once again,
    Thanks
