SSAS using datasource at query time

I have a set of reports that run against the cube.  The cube has proactive caching enabled.  If the data source within the SSAS database has invalid credentials, queries against the cube will not run.  Note: SSAS is set up to run as Network Service
rather than a domain account (a temporary situation).
My understanding of MOLAP is that the data is stored within the cube and no access to the source (a SQL Server data mart) is needed for users to run queries against it.  That is proving not to be true, leaving me feeling like I don't know
what is going on at all.  After 7 years using SSAS, I'm finding out that user queries don't actually run against the cube alone; they require that SSAS be able to connect to its source data to satisfy the query?
I really don't understand this, but I have tested it repeatedly: when I set the data source to invalid credentials, report queries either fail or just return zeros.  Earlier we were getting 08001 "can't connect to server" messages; now I
just get all zeros on the reports when the data source credentials are invalid.
Just to make this 100% clear, I am talking about the SSAS data source within the SSAS database, not the SSRS data source that points to the cube.  Has anyone ever seen this before?  To make it more confusing, it now seems to work in Excel regardless,
but SSRS still needs the valid data source.
Thanks,
Ken

Hi Ken,
The data source credentials of the SSAS cube that you specified in SSMS are used to retrieve data from the relational database, for example when cube data needs to be updated or when an MDX query has to reach back to the relational DB. The Analysis Services querying architecture
provides several components that work together to efficiently retrieve and evaluate data:
When we query a cube, the query processor breaks the query into subcube requests for the storage engine. For each subcube request, the storage engine first attempts to retrieve data from the storage engine cache. If no data is available in the cache, it
attempts to retrieve data from an aggregation. If no aggregation is present, it must retrieve the data from the measure group's partition (fact) data.
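This is consistent with the symptoms you describe: with proactive caching enabled, SSAS can temporarily answer queries in ROLAP mode against the relational source while the MOLAP cache is invalidated or being rebuilt, so invalid source credentials can surface either as 08001 connection errors or as all-zero results. As a quick check that does not touch the relational source, you can ask the server when the cube's data was last updated (a minimal sketch using the MDSCHEMA_CUBES DMV, which also appears later in this thread; 'YourCubeName' is a placeholder):
SELECT CUBE_NAME, LAST_DATA_UPDATE
FROM $System.MDSCHEMA_CUBES
WHERE CUBE_NAME = 'YourCubeName'
If LAST_DATA_UPDATE changes around the time the reports return zeros, the cache was being rebuilt at query time.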
In your case, I recommend using SQL Server Profiler to capture both SSAS and Database Engine events for further investigation. If the SSRS reports do in fact pull data from the relational database, the trace will show it in detail. For more information regarding SQL Server Profiler, please see:
http://technet.microsoft.com/en-us/library/ms181091.aspx
Regards,
Elvis Long
TechNet Community Support

Similar Messages

  • Create SSRS report using DMV for querying SSAS cube.

    I am trying to create an SSRS report to find cube/dimension status (when the cube/dimension was last processed and whether it failed or succeeded). For example, I have the following DMV query:
    SELECT CUBE_NAME, LAST_DATA_UPDATE FROM $System.MDSCHEMA_CUBES
    When I execute this query in an MDX query window it returns results, but when I try to create a dataset from it in Report Server, it fails with:
    Error: Please verify that the query is an MDX one and not DMX. (Microsoft.AnalysisServices.Controls)
    Can we use DMV queries to create an SSRS report, and what should the data source be?
    Thank You.
    Praveen

    Hi Praveen,
    Glad to hear that the issue has been solved. Thank you for sharing the useful information.
    Regards,
    Charlie Liao
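    The actual fix is not recorded in this excerpt. A common workaround for this error (stated here as an assumption, not something confirmed by the thread) is to bypass the MDX query designer: define the SSRS data source as a generic OLE DB connection using the Analysis Services OLE DB provider, which passes the DMV text through without the MDX/DMX validation. A minimal sketch with placeholder server and database names:
    -- SSRS data source type: OLE DB
    -- Connection string: Provider=MSOLAP;Data Source=localhost;Initial Catalog=YourSsasDb
    -- Dataset query, entered as text:
    SELECT CUBE_NAME, LAST_DATA_UPDATE
    FROM $System.MDSCHEMA_CUBES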

  • Using crystal reports query (.qry) as datasource in crystal 9

    I have a problem using a Crystal Reports query (.qry) as a data source in Crystal Reports 9. When I use a report within my application, I cannot change the database or server property of the query.
    For example, during report design I use one ODBC connection for the query, but I want to change it at runtime.
    Changing any option besides username or password results in an external exception. I am using Builder XE as the application environment.
    I tried switching to SQL Commands, which works, but I lose all the fields on the report when changing from a query to a command.
    Is there a way to make the query work, or to change the query to a SQL Command without losing all the fields (and putting them back on the report manually)?

    Hello,
    Unfortunately, .QRY files are no longer supported as of CR 9. They were replaced with the Command Object, which does basically the same thing, except that you write the query yourself.
    The problem is that because a Command Object can contain anything, CR has no way of mapping the query to the database fields within the report, so it auto-deletes all of the fields.
    The only way is to create new reports: open the original report in one window, create your new report, copy the SQL into the Command window, and then copy and paste the report objects from the old report's window into the new report window.
    There is no migration wizard for this. I have heard of others who used the RAS server or the RDC to get the objects from one report and add them to the new report. Whether it is worth the time writing that app or just rebuilding all of your reports depends on how many reports you have.
    Thank you
    Don

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answer, but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue the first time I open an SSAS cube connection in Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, around 4 GB available), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube databases: 3 get a full cube refresh and 1 an incremental refresh. After the daily refresh, it takes 10-odd minutes to open a cube on an end user's system; from then on it opens in about 10 seconds. After the refresh, on server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
    Currently the cumulative size of all 4 cube databases is more than 9 GB in production, each cube database contains 4 individual cubes on average, and the largest cube database is 3.5 GB. So the question is: how does Excel work with an SSAS cube after the
    daily refresh?
    Does Excel build a cache of the schema and data each time the cube is refreshed, and in doing so need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client would take
    significant time depending on the bandwidth of the network connection.
    Does it depend on client system RAM in any way? Today the biggest cube database is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system RAM is 8 GB, the available or free RAM is only around 4 GB. What will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries, filling in the name of the cube you're connecting to, and then post back the row count returned by each (for example, by copying the results into Excel and counting the rows)?
    I want to see if this is an issue I've run across before, involving thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • Function used to find Query Execution Time

    Hi All!
    Could you please let me know the function name used in finding 'Query Execution Time'?
    Thanks and Regards,
    Vikas

    I'm not quite sure what you mean...
    In SQL*Plus: SET TIMING ON
    This will display a timing message after the execution of a query. Is this what you mean?
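    If you need an actual function you can call from PL/SQL rather than a SQL*Plus setting, DBMS_UTILITY.GET_TIME is commonly used; it returns elapsed time in hundredths of a second. A minimal sketch (the timed statement is just an illustrative placeholder; SET SERVEROUTPUT ON in SQL*Plus to see the output):
    DECLARE
      t0 PLS_INTEGER;
      n  NUMBER;
    BEGIN
      t0 := DBMS_UTILITY.GET_TIME;      -- start time, in 1/100ths of a second
      SELECT COUNT(*) INTO n FROM emp;  -- the statement whose elapsed time we want
      DBMS_OUTPUT.PUT_LINE('Elapsed: ' || (DBMS_UTILITY.GET_TIME - t0) / 100 || ' s');
    END;
    /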
    cheers,
    Anthony

  • Use select & update(query) at the same time

    I'm sorry to trouble you.
    I have been racking my brains for a solution to this problem, and I cannot find the bug in my source.
    I have made every effort and I am exhausted.
    I want to use a SELECT and an UPDATE query at the same time.
    The source compiles, but it cannot access the DB.
    Please help me, and sorry for my bad English.
    Here is a link to my source:
    http://www.netian.com/~111nice/bangnew.java

    Can you clearly explain what the problem is and what error you are getting?

  • How to make an index be used in a query execution

    Hi,
    I have the queries below, and the ename column has an index. To my knowledge, queries 1 and 2 will not use the index, so I tried the third statement, but it does not use the index either. Finally I tried the fourth query, and even that one does not use the index. How do I make this query use my index? Do I need to create a function-based index for this? Is that the only remaining option?
    1. select * from emp where ename !='BH' ;
    2. select * from emp where ename <> 'BH';
    3. select * from emp where ename not in ('BH');
    4. select * from emp where ename < 'BH' or ename > 'BH';
    Regards,
    007
    Edited by: 007 on Jun 6, 2013 7:56 AM
    Edited by: 007 on Jun 6, 2013 8:06 AM
    Edited by: 007 on Jun 6, 2013 8:06 AM
    Edited by: 007 on Jun 6, 2013 8:06 AM
    Edited by: 007 on Jun 6, 2013 8:12 AM

    Sorry 007, I really thought you were posting a trick question as on the OCP tests.
    Anyway, as Justin mentioned, if you have an index on ename, it may be used for a comparison predicate on the ename value.
    Whether it is used depends on several other things: statistics, how many rows are in the table, use of an index hint, etc.
    Rather than asking the group, why not just turn on autotrace and run the query for the different scenarios?
    The output will show you whether it used the index, the number of rows returned, blocks read, and so on.
    SQL> create table emp (ename  varchar2(40));
    Table created.
    SQL> insert into emp select username from sys.dba_users;
    25 rows created.
    SQL> commit;
    Commit complete.
    SQL> set autotrace on
    SQL> select * from emp where ename != 'SYSTEM';
    Execution Plan
    Plan hash value: 2951343571
    | Id  | Operation        | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |           |    24 |   528 |     1   (0)| 00:00:01 |
    |*  1 |  INDEX FULL SCAN | ENAME_IDX |    24 |   528 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("ENAME"<>'SYSTEM')
    As you can see, the above used an index, even though there were only 25 rows in the table.
    You can test each of your scenarios, one by one, including use of a hint.
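    For instance, to force or forbid the index explicitly (hint syntax is standard; the table and index names are the ones from the example above):
    SQL> select /*+ index(emp ename_idx) */ * from emp where ename != 'SYSTEM';
    SQL> select /*+ full(emp) */ * from emp where ename != 'SYSTEM';
    Comparing the autotrace output of the two runs shows which access path the optimizer considers cheaper.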

  • Cursors are not closed when using Ref Cursor Query in a report  ORA-01000

    Dear Experts,
    Oracle Database 11g,
    Developer Suite 10.1.2.0.2,
    Application Server 10.1.2.0.2,
    Windows XP platform.
    For a long time I have been hitting ORA-01000.
    I have a two-group report (master and detail) using a ref cursor query. When this report runs, it opens several cursors for the detail query (there should be only one); the number of cursors equals the number of master records.
    Moreover, after the report finishes, these cursors are not closed. They accumulate each time I run the report, until the maximum number of open cursors is exceeded and I get ORA-01000.
    I increased the database's open_cursors parameter to an unreasonably large value, 30000, but of course it is still exceeded eventually because the cursors keep accumulating within the session.
    I found that the problem goes away if I use a single master ref cursor query with a break group; it is also solved if I use SQL queries instead of ref cursor queries for the master and detail. But for other reasons I can use neither a break group nor SQL queries; I have to use ref cursor queries.
    Is this an Oracle bug, and how can I work around it?
    Thanks
    Edited by: Mostafa Abolaynain on May 6, 2012 9:58 AM

    Thank you Inol for your answer. However, a ref cursor gives me the flexibility to control the query; for example, see the following function:
    function QR_1RefCurDS return DEF_CURSORS.JOURHEAD_REFCUR is
      temp_JOURHEAD DEF_CURSORS.JOURHEAD_refcur;
      v_from_date   DATE;
      v_to_date     DATE;
      v_serial_type NUMBER;
    begin
      SELECT SERIAL_TYPE INTO v_serial_type
      FROM ACC_VOUCHER_TYPES
      WHERE voucher_type = 'J'
        AND IDENT_NO = :IDENT
        AND COMP_NO = TO_NUMBER(:COMPANY_NO);
      IF :no_date = 1 THEN
        IF v_serial_type = 1 THEN
          open temp_JOURHEAD for
            select VOCH_NO, VOCH_DATE
            from JOURHEAD
            where COMP_NO = TO_NUMBER(:COMPANY_NO)
              and IDENT = :IDENT
              and ((TO_NUMBER(VOCH_NO) = :FROM_NO and :FROM_NO is not null and :TO_NO is null)
                or (TO_NUMBER(VOCH_NO) between :FROM_NO and :TO_NO and :FROM_NO is not null and :TO_NO is not null)
                or (TO_NUMBER(VOCH_NO) <= :TO_NO and :FROM_NO is null and :TO_NO is not null)
                or (:FROM_NO is null and :TO_NO is null))
            order by TO_NUMBER(VOCH_NO);
        ELSE
          open temp_JOURHEAD for
            select VOCH_NO, VOCH_DATE
            from JOURHEAD
            where COMP_NO = TO_NUMBER(:COMPANY_NO)
              and IDENT = :IDENT
              and ((VOCH_NO = :FROM_NO and :FROM_NO is not null and :TO_NO is null)
                or (VOCH_NO between :FROM_NO and :TO_NO and :FROM_NO is not null and :TO_NO is not null)
                or (VOCH_NO <= :TO_NO and :FROM_NO is null and :TO_NO is not null)
                or (:FROM_NO is null and :TO_NO is null))
            order by VOCH_NO;
        END IF;
      ELSE
        v_from_date := TO_DATE(:from_date);
        v_to_date   := TO_DATE(:to_date);
        IF v_serial_type = 1 THEN
          open temp_JOURHEAD for
            select VOCH_NO, VOCH_DATE
            from JOURHEAD
            where COMP_NO = TO_NUMBER(:COMPANY_NO)
              and IDENT = :IDENT
              and ((voch_date between v_from_date and v_to_date and :from_date is not null and :to_date is not null)
                or (voch_date <= v_to_date and :from_date is null and :to_date is not null)
                or (voch_date = v_from_date and :from_date is not null and :to_date is null)
                or (:from_date is null and :to_date is null))
            order by VOCH_DATE, TO_NUMBER(VOCH_NO);
        ELSE
          open temp_JOURHEAD for
            select VOCH_NO, VOCH_DATE
            from JOURHEAD
            where COMP_NO = TO_NUMBER(:COMPANY_NO)
              and IDENT = :IDENT
              and ((voch_date between v_from_date and v_to_date and :from_date is not null and :to_date is not null)
                or (voch_date <= v_to_date and :from_date is null and :to_date is not null)
                or (voch_date = v_from_date and :from_date is not null and :to_date is null)
                or (:from_date is null and :to_date is null))
            order by VOCH_DATE, VOCH_NO;
        END IF;
      END IF;
      return temp_JOURHEAD;
    end;

  • Cannot use Flashback Versions Query in Oracle 10g

    When I try to use a Flashback Versions Query for a table in my 10.1.0.4 database, I receive the following error message:
    500 Internal Server Error
    java.lang.RuntimeException: options is null
         at oracle.sysman.emSDK.jsp.ListBean.applyAttributes(ListBean.java:70)
         at oracle.sysman.emSDK.jsp.ShuttleBean.render(ShuttleBean.java:41)
         at oracle.cabo.ui.BaseUINode.render(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderContent(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.render(Unknown Source)
         at oracle.cabo.ui.laf.xhtml.XhtmlLafRenderer.render(Unknown Source)
         at oracle.cabo.ui.BaseUINode.render(Unknown Source)
         at oracle.cabo.ui.BaseUINode.render(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderChild(Unknown Source)
         at oracle.cabo.ui.laf.xhtml.RowLayoutRenderer.renderChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderContent(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.render(Unknown Source)
         at oracle.cabo.ui.laf.xhtml.XhtmlLafRenderer.render(Unknown Source)
         at oracle.cabo.ui.BaseUINode.render(Unknown Source)
         at oracle.cabo.ui.BaseUINode.render(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderIndexedChild(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.renderContent(Unknown Source)
         at oracle.cabo.ui.BaseRenderer.render(Unknown Source)
    .... and so on
    Can any of you help me?

    At what stage did you get this error? Had you already selected the type of Flashback Versions Query you want, e.g. specifying the type of point in time (row evaluation, timestamp, or SCN), or did it happen as soon as you selected it from the Actions menu?
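    If the Enterprise Manager page keeps failing, note that you can run a Flashback Versions Query directly in SQL rather than through the UI. A minimal sketch (the table, columns, and one-hour window are placeholders):
    SELECT versions_startscn, versions_endscn, versions_operation,
           empno, sal
    FROM   emp
           VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
                        AND SYSTIMESTAMP
    WHERE  empno = 7369;
    The VERSIONS pseudocolumns show when each row version existed and which DML operation produced it.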

  • From SharePoint Content Database, Using SQL-Server Query how to fetch the 'Document GUID' based on 'Content Type'

    I want to get all the documents based on content type using a SQL Server query. I know that querying the content database directly, without the API, is not advisable, but I still want to do this through a SQL Server query. Can someone assist?

    You're right, it's not advisable: it may result in corruption of your databases and might impact performance and stability. But assuming you're happy with that risk, it is possible.
    Before you go down that route, have you considered something safer like PowerShell? I've seen a script that does exactly what you describe, and it would take far less time to do it through PowerShell than through SQL.

  • List of infoobjects used in a Query

    Hello All,
    Is there a way to get the list of InfoObjects used in a query? I know I can note them down manually by opening the query in the designer, but I would have to do that for more than 50 queries, and every query uses a good number of InfoObjects.
    I can't even find this in the Metadata Repository. If there is a transaction or some other place that gives me the list of all InfoObjects used in a query, I could simply copy and paste the list into my Excel report instead of noting down every InfoObject name manually.
    Thanks in advance for the help; it could save me some precious time.
    Regards
    Sonal

    These tables may be useful: RSZCOMPDIR, RSZELTXREF, RSZELTDIR,
    RSZCOMPIC, RSZELTPRIO, RSZELTPROP, RSZELTATTR, RSZELTTXT,
    RSZRANGE, RSZCALC, RSZCEL, RSZGLOBV.
    Code:
    report zio_query.
    tables: rszelttxt,
            rszeltdir,
            rszeltxref,
            rszrange,
            rszselect.
    data: begin of it_result occurs 0,
            iobjnm like rszselect-iobjnm,
          end of it_result,
          lv_iobjnm like rszselect-iobjnm.
    select-options:
      s_query for rszeltdir-mapname.
    start-of-selection.
    " Loop over the selected queries (active versions only)
    select * from rszeltdir where mapname in s_query
                              and objvers = 'A'.
      select single * from rszelttxt where eltuid = rszeltdir-eltuid
                                       and objvers = 'A' and langu = 'N'.
      write: / rszeltdir-mapname, rszelttxt-txtlg.
      refresh it_result.
      select * from rszeltxref where seltuid = rszeltdir-eltuid
                                 and objvers = 'A'.
        " Characteristics used in the query element
        select iobjnm into lv_iobjnm
          from rszselect where eltuid = rszeltxref-teltuid
                           and objvers = 'A'
                           and iobjnm <> '1KYFNM'.
          it_result-iobjnm = lv_iobjnm.
          append it_result.
        endselect.
        " Key figures (referenced via the 1KYFNM pseudo-InfoObject in RSZRANGE)
        select low into lv_iobjnm
          from rszrange where eltuid = rszeltxref-teltuid
                          and objvers = 'A'
                          and iobjnm = '1KYFNM'.
          it_result-iobjnm = lv_iobjnm.
          append it_result.
        endselect.
      endselect.
      sort it_result by iobjnm.
      " The original compared MAPNAME too, but it_result contains only IOBJNM
      delete adjacent duplicates from it_result comparing iobjnm.
      loop at it_result.
        write: / it_result-iobjnm.
      endloop.
      uline.
    endselect.

  • Where do I specify process chain and query time statistics to be loaded?

    I am on BI 7.0. I can see that the BI Statistics technical content has been installed on my system, because when I run the RSDDSTAT transaction I see cubes such as 0TCT_C01, 0TCT_C02, 0TCT_C03, 0TCT_MC01 and 0TCT_VC01 under InfoProviders.
    I also see process chains installed on my system, such as 0TCT_C2_INIT_P01 and 0TCT_C2_DELTA_P01, and various RSDDSTAT* tables being populated.
    My questions are:
    1. How does data get populated into 0TCT_C01, 0TCT_C02, etc.? Is it by scheduling the technical content process chains, or are there other means?
    2. Where does one specify what kind of statistics are copied from the RSDDSTAT* tables? My IT lead tells me that process chain statistics are not being collected, and I also think query times are not being populated in the 0TCT tables. Where can I specify what should be loaded into these cubes?
    3. Does the ST03N transaction display the data from question 2?
    Thanks a lot.

    Hi,
    1. How does data get populated into 0TCT_C01, 0TCT_C02, etc.?
    You can find the DataSource in RSA1. For example, 0TCT_C01 is updated from 0TCT_DS01. You need to schedule process chain 0TCT_C0_DELTA_P01 for query statistics and 0TCT_C2_DELTA_P01 for data load statistics on a regular basis. As these are delta chains, you must first run the initialization chains once before scheduling the deltas; the initialization chains are 0TCT_C0_INIT_P01 and 0TCT_C2_INIT_P01.
    2. Where does one specify what kind of statistics are copied from the RSDDSTAT* tables?
    As far as I know, the statistics data are first stored in the RSDDSTAT* tables; for example, query data are stored in RSDDSTAT_OLAP. The data are then loaded into the corresponding cubes when you execute the InfoPackages.
    You can refer to this link and search for "Recording BI Statistics" on the page:
    http://help.sap.com/saphelp_nw70/helpdata/en/44/3521c7bae848a1e10000000a114a6b/content.htm
    3. Does the ST03N transaction display the data from question 2?
    Yes. If the BI Statistics content is not activated, you will not be able to view statistics data in ST03N.
    Let us know if you have other questions.
    Regards,
    Frank

  • How can I use a datasource in TopLink's map file

    In the TopLink map:
    <toplink:login xsi:type="toplink:database-login">
       <toplink:platform-class>oracle.toplink.platform.database.oracle.Oracle10Platform</toplink:platform-class>
       <toplink:user-name>test</toplink:user-name>
       <toplink:password>C23487CFA591952D44310804F3D591CB</toplink:password>
       <toplink:sequencing>
          <toplink:default-sequence xsi:type="toplink:native-sequence">
             <toplink:preallocation-size>1</toplink:preallocation-size>
          </toplink:default-sequence>
       </toplink:sequencing>
       <toplink:driver-class>oracle.jdbc.driver.OracleDriver</toplink:driver-class>
       <toplink:connection-url>jdbc:oracle:thin:@192.168.0.1:1521:testdb</toplink:connection-url>
       <toplink:bind-all-parameters>true</toplink:bind-all-parameters>
    </toplink:login>
    How can I modify it to use a datasource?

    The login information stored in the map file is intended for direct connections and design-time logins.
    For your runtime login information I would use the sessions configuration (sessions.xml). It provides complete deployment configuration:
       <session>
          <name>my-session</name>
          <!-- This references the ORM map XML file -->
          <project-xml>META-INF/employee-basic.xml</project-xml>
          <session-type>
             <server-session/>
          </session-type>
          <login>
             <datasource>jdbc/TopLinkDS</datasource>
             <uses-external-connection-pool>true</uses-external-connection-pool>
             <uses-external-transaction-controller>true</uses-external-transaction-controller>
          </login>
          <external-transaction-controller-class>oracle.toplink.essentials.transaction.oc4j.Oc4jTransactionController</external-transaction-controller-class>
          <enable-logging>true</enable-logging>
          <logging-options>
             <log-exceptions>true</log-exceptions>
             <print-thread>false</print-thread>
             <print-date>false</print-date>
          </logging-options>
       </session>
    Doug

  • Index is not being used for this query

    I have this query and it doesn't use an index. Can you offer a suggestion, please?
    SELECT /*+ ORDERED USE_HASH(IC_GSMRELATION) USE_HASH(IC_UTRANCELL) USE_HASH(IC_SECTOR) USE_HASH(bt) */
    /* cp */
    bt.value value,
    bt.tstamp tstamp,
    ic_GsmRelation.instance_id instance_id
    FROM
    xr_scenario_tmp IC_GSMRELATION,
    xr_scenario_tmp IC_UTRANCELL,
    xr_scenario_tmp IC_SECTOR,
    rg_busyhour_tmp bt
    WHERE
    bt.instance_id != -1
    AND (IC_GSMRELATION.entity_id = 133)
    AND (IC_GSMRELATION.parentinstance_id = ic_UtranCell.instance_id)
    AND (IC_UTRANCELL.entity_id = 254)
    AND (IC_UTRANCELL.parentinstance_id = ic_Sector.instance_id)
    AND (IC_SECTOR.entity_id = 227)
    AND (IC_SECTOR.parentinstance_id = bt.instance_id);
    table xr_scenario_tmp:
      entity_id          number
      instance_id        number
      parentinstance_id  number
      localkey           varchar
    indexes:
      1. (entity_id, instance_id)
      2. (entity_id, parentinstance_id)
    table rg_busyhour_tmp:
      instance_id  not null  number
      tstamp       not null  date
      rank         not null  number
      value                  float
    index:
      (instance_id, tstamp, rank)
    thanks
    thanks

    user5797895 wrote:
    Thanks for the update.
    1. I don't understand where to put {}. Do you mean in the forum page, like below?
    Use the {code} tag. Read the [FAQ|http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ?t=anon] for more information. It's the link in the top right corner.
    >
    2. AROUND 8000 IN DEV MACHINE. BUT 1.5M IN PRODUCTION
    It's a more or less useless exercise if you have that vast difference between the two systems. You need to test this thoroughly using a similar amount of data.
    3.
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    You need to re-create your PLAN_TABLE. That's the reason why important information is missing from your plans. It's the so called "Predicate Information" section below the execution plan and it requires the correct version of the plan table. Drop your current plan table and re-run in SQL*Plus on the server:
    @?/rdbms/admin/utlxplan
    to re-create the plan table.
    Dynamic sampling doesn't alter the plan in any way no matter what sampling level I choose.
    When I added Cardinality it switched from 1 full table scan and 2 index read
    Can you post the statements with the hints included, or at least the first line with the hints used for the different attempts?
    # WITH dbms_stats.gather_table_stats, without cardinality it uses indexes all the time.
    How did you call DBMS_STATS.GATHER_TABLE_STATS, i.e. which parameter values were you using?
    # After deleting the table stats performance improved back
    All these different attempts are not really helpful if you don't say which of them was more effective than the other ones. That's why I'm asking for the "Predicate Information" section so that this information can be used to determine which of your tables might benefit from an indexed access path and which don't.
    As already mentioned several times, if you use SQL tracing as described in one of the links provided, you could see which operation produces how many rows. This would allow you to determine whether it is efficient or not.
    But given that you're doing all this with your test data it doesn't say much about the performance in your production environment.
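    A lightweight alternative to full SQL tracing for seeing actual rows per operation (a sketch; the statement is a placeholder) is the GATHER_PLAN_STATISTICS hint combined with DBMS_XPLAN.DISPLAY_CURSOR:
    SQL> select /*+ gather_plan_statistics */ count(*) from emp;
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    The A-Rows column in the resulting plan shows how many rows each operation actually produced, which you can compare against the optimizer's E-Rows estimates.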
    4. whether GTT created with "ON COMMIT PRESERVE ROWS"?
    YES - BUT DIFFERENT SESSIONS HAVE DIFFERENT NUMBERS OF ROWS
    The question is, whether the number of rows differs significantly, if yes, then you shouldn't use the DBMS_STATS approach
    5. neigher (48 sec. / 25 sec. run time) are sufficient, then what is the expected?
    ACTUALLY I AM DOING IT ON A DEVELOPMENT MACHINE. IN PRODUCTION THE NUMBER OF ROWS IS DIFFERENT. LAST TIME WHEN WE RELEASED THE PATCH WITH THIS CODE, THE PERFORMANCE WAS BAD.
    See 2., you need to have a suitable test environment. It's a more or less useless exercise if you only have a fraction of the actual amount of data.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Maximum number of characteristics and key figures used in a query

    Hi Experts,
    Any thoughts on the maximum number of characteristics and key figures that can be used in a query? I know a large number impacts query performance and makes the query run for a very long time.
    Thanks,
    Kumar.

    Hi Kumar,
    Welcome to SDN.
    I had not really thought about it till now; sharing my thoughts:
    1. In a query you can use all the characteristics and key figures (and more) that you have in your cube.
    2. A query is supposed to serve a purpose, delivering analytics based on some KPIs, so in practice it will have at most the number of characteristics and key figures a human mind can make sense of.
    3. Query performance is actually divided into three areas:
    a. database time
    b. OLAP time
    c. front-end time
    If your query is not performing well, first look at which area is taking the most time and optimize that; there are various ways to optimize each area.
    Hope it helps.
    VC
