Query performance on ODS

Hi Friends,
Could anyone tell me how to improve query performance on an ODS? I understand aggregates are used to improve query performance on an InfoCube.
Thanks,
Hari

Hello Hari,
I think you are in the wrong section of the forums. Please visit the Business Information Warehouse section and you should be able to get good help there.
https://www.sdn.sap.com/sdn/developerareas/bi.sdn?node=linkDnode4
Kind regards,
Shravan

Similar Messages

  • Query Performance on ODS

    Hi all,
            I have an ODS with around 23,000 records. The ODS has around 20 fields (3 of them key fields), and I built a query on it. The user only enters a cost center hierarchy. The report does not have any calculations; it is a direct view of the fields. Still, the report takes a long time to run if I select the 2nd or 3rd level of the hierarchy as the starting point. If I select the 5th or 6th level of the hierarchy, the query output is fast. With just 23,000 records in the ODS, I thought query performance should be fast for any level of the hierarchy.
    I even created an index on cost center, but there was no improvement in performance. Is there any way to achieve faster query performance on the ODS?
    Thanks,
    Prabhu.

    The technical content cubes (if installed) give more generic statistics on query runtime.
    The RSRT option mentioned by Sudheer is a more targeted approach.
    The same RSRT can also be helpful for the following (from help.sap.com):
    Definition
    The read mode determines how the OLAP processor gets data during navigation. You can set the mode in Customizing for an InfoProvider and in the Query Monitor for a query.
    Use
    The following types are supported:
    1. Query to be read when you navigate or expand hierarchies (H)
    The amount of data transferred from the database to the OLAP processor is the smallest in this mode. However, it has the highest number of read processes.
    In the following mode Query to read data during navigation, the data for the fully expanded hierarchy is requested for a hierarchy drilldown. In the Query to be read when you navigate or expand hierarchies mode, the data across the hierarchy is aggregated and transferred to the OLAP processor on the hierarchy level that is the lowest in the start list. When expanding a hierarchy node, the children of this node are then read.
    You can improve the performance of queries with large presentation hierarchies by creating aggregates on a middle hierarchy level that is greater than or equal to the hierarchy start level.
    2. Query to read data during navigation (X)
    The OLAP processor only requests data that is needed for each navigational status of the query in the Business Explorer. The data that is needed is read for each step in the navigation.
    In contrast to the Query to be read when you navigate or expand hierarchies mode, presentation hierarchies are always imported completely on a leaf level here.
    The OLAP processor can read data from the main memory when the nodes are expanded.
    When accessing the database, the best aggregate table is used and, if possible, data is aggregated in the database.
    3. Query to read all data at once (A)
    There is only one read process in this mode. When you execute the query in the Business Explorer, all data in the main memory area of the OLAP processor that is needed for all possible navigational steps of this query is read. During navigation, all new navigational states are aggregated and calculated from the data from the main memory.
    The read mode Query to be read when you navigate or expand hierarchies significantly improves performance in almost all cases compared to the other two modes. The reason for this is that only the data the user wants to see is requested in this mode.
    Compared to the Query to be read when you navigate or expand hierarchies mode, the setting Query to read data during navigation only affects performance for queries with presentation hierarchies.
    Unlike the other two modes, the setting Query to Read All Data At Once also has an effect on performance for queries with free characteristics. The OLAP processor aggregates on the corresponding query view. For this reason, the aggregation concept, that is, working with pre-aggregated data, is least supported in the Query to read all data at once mode.
    We recommend you choose the mode Query to be read when you navigate or expand hierarchies.
    Only choose a different read mode in exceptional circumstances. The read mode Query to read all data at once may be of use in the following cases:
    - The InfoProvider does not support selection. The OLAP processor reads significantly more data than the query needs anyway.
    - A user exit is active in a query. This prevents data from already being aggregated in the database.
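    To make the difference concrete, here is a small conceptual sketch in plain Python (not SAP code; the data set and node layout are invented for illustration) of how the three read modes trade the number of read processes against the amount of data transferred:

      # Toy "database": records carrying a hierarchy level and a node id.
      database = [{"node": n, "level": lvl, "amount": 10}
                  for lvl in range(1, 5) for n in range(lvl * 3)]

      def db_read(predicate):
          """One read process: return only the rows the predicate selects."""
          return [r for r in database if predicate(r)]

      # Mode A - read all data at once: a single read buffers everything, and
      # all later navigation is served from the OLAP processor's main memory.
      mode_a_buffer = db_read(lambda r: True)

      # Mode X - read data during navigation: one read per navigation step,
      # fetching only the rows the current query view needs.
      step_1 = db_read(lambda r: r["level"] == 1)
      step_2 = db_read(lambda r: r["level"] == 2)

      # Mode H - read when navigating or expanding hierarchies: like X, but a
      # hierarchy drilldown only fetches the children of the expanded node.
      expand_first_node = db_read(lambda r: r["level"] == 2 and r["node"] < 3)

      print(len(mode_a_buffer), len(step_1) + len(step_2), len(expand_first_node))

    Mode A does the fewest reads but moves the most data; mode H does the most reads but each one is as small as possible, which is why it is the recommended default.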

  • Query based on ODS does not display on the EP using LDOC

    Hello everybody,
            I have a query based on an ODS. It displays fine in the workbook, and also on the web using LDOC. But when I execute it as an iView on the EP, it shows nothing. I am puzzled. Can anyone help me?
    Thanks, Thomas.

    Hi Thomas,
    If you are able to see records on the web but not in the Portal, it will be something to do with your portal settings.
    Try checking other queries on the portal as well.
    Hope it helps.
    Regards
    Vikash

  • Query on transactional ODS

    Hi All,
    I want to write a query on a MultiProvider built on two transactional ODSes.
    The first transactional ODS will have :
    1. Material
    2. Avg rating
    The second transactional ODS will have:
    1. Material
    2. Vendor
    3. Avg rating
    Now, I want to build an MP on these two ODSes where the report should have:
    1. Material
    2. Vendor
    3. Rating
    4. Rating as a % of Avg rating
    How can I do it?

    Hi Ravi,
    I made a mistake in the specification.
    The first transactional ODS has:
    1. material
    2. Average rate
    The second ODS actually has:
    1. material
    2. vendor
    3. rate
    Finally, what I want to calculate on the MP is:
    vendor, material, rate, and rate as a % of the average rate.
    Please suggest.
    Regards,
    Sharmishtha
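    Based on the corrected specification, here is a minimal sketch in plain Python of the target calculation (assuming the MultiProvider simply makes both ODS record sets available and the lookup is done by material; the field names are illustrative):

      # ODS 1: material and average rate.
      ods1 = [
          {"material": "M1", "avg_rate": 4.0},
          {"material": "M2", "avg_rate": 8.0},
      ]
      # ODS 2: material, vendor and rate.
      ods2 = [
          {"material": "M1", "vendor": "V1", "rate": 5.0},
          {"material": "M1", "vendor": "V2", "rate": 3.0},
          {"material": "M2", "vendor": "V1", "rate": 10.0},
      ]

      avg_by_material = {row["material"]: row["avg_rate"] for row in ods1}

      # Report rows: vendor, material, rate, and rate as a % of the average rate.
      report = [
          {
              "vendor": row["vendor"],
              "material": row["material"],
              "rate": row["rate"],
              "rate_pct_of_avg": 100.0 * row["rate"] / avg_by_material[row["material"]],
          }
          for row in ods2
      ]

      for line in report:
          print(line)

    In BEx terms this would typically end up as a calculated key figure (rate / average rate * 100); the sketch only shows the arithmetic, not the modelling.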

  • APD Query populating Transactional ODS

    Hi,
       1) How do we find out if a transactional ODS is populated by an APD query? Where would this information be mentioned?
       2) If we don't know the name of the APD query, but we know the transactional ODS, can we find out the APD query name?
       3) Is an APD query created like a normal query?
       4) If we know the name of the APD query, where do we go to check it?

    Hi,
    You can check the APD using transaction RSANWB. Open your particular APD and double-click on it; you will see all the information about where it picks up data and where the data is populated.
    Check this:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/96939c07-0901-0010-bf94-ac8b347dd541
    and sap help
    http://help.sap.com/saphelp_nw2004s/helpdata/en/39/e45e42ae1fdc54e10000000a155106/frameset.htm
    and service.sap.com/bi
    https://websmp206.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000585703&
    The SAP course that covers APD is BW380 - SAP Business Intelligence: Analysis Processes
    Check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/49/7e960481916448b20134d471d36a6b/frameset.htm
    Reg
    Pra

  • Query based on ODS showing connection time out

    Hi,
    I have a query based on an ODS. The selection criterion for the query is Country, which is a navigational attribute of the ODS. When I execute the query for a country with a smaller volume of data, the query runs fine. But when I choose a country with a larger number of records, the query sometimes shows a 500 connection timeout error and sometimes runs fine for the same country.
    I tried executing the report with country as an input variable: first I executed it for a country with a smaller volume of data and it worked fine as usual; then I added another country with a larger volume of data to the filter criteria and the old problem cropped up.
    Could you please suggest a solution?
    Appreciate your response.
    Thanks
    Ashok

    Hi,
    It is not recommended to have a query on a DSO that returns a huge amount of data as output.
    For small amounts it should not be a problem.
    Shift your queries to a MultiCube, as recommended by SAP, and build the MultiCube over the cube that loads from the DSO.
    SAP recommends creating queries only on MultiProviders.
    If that is not possible, you can try creating indexes on the DSO on the characteristic fields that are used in filters and global selections in the query. This can improve the query performance.
    But this will create a performance issue while loading data into the DSO.
    So you need to make a trade-off.
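    As a rough illustration of that trade-off, here is a plain-Python sketch (not database internals; the table and field names are invented): a secondary index makes filtered reads cheap, but every row loaded into the DSO must also maintain the index.

      active_table = []      # the DSO active table, as a plain list of rows
      country_index = {}     # secondary index: country -> list of row positions

      def load_row(row):
          """Loading gets a little slower: every insert also updates the index."""
          active_table.append(row)
          country_index.setdefault(row["country"], []).append(len(active_table) - 1)

      def query_full_scan(country):
          """Without an index, every query scans the whole active table."""
          return [r for r in active_table if r["country"] == country]

      def query_indexed(country):
          """With the index, only the matching rows are touched."""
          return [active_table[i] for i in country_index.get(country, [])]

      for i in range(100000):
          load_row({"doc": i, "country": "FR" if i % 100 == 0 else "DE", "amount": 1.0})

      assert query_full_scan("FR") == query_indexed("FR")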
    Thanks
    Ajeet

  • Quantity getting added up in query result from ODS 0SD_O06

    Dear Friends,
    I have loaded 2LIS_13_VDKON into ODS 0SD_O06. Now, the ODS has, let's say, 4 rows for one invoice, and each row carries a corresponding invoice quantity (0INV_QTY). Let's say the value of 0INV_QTY is 1 for a particular invoice.
    so it will look like this:
    Invoice || 0INV_QTY || ... (many other columns)
    12345 ||  1
    12345 ||  1
    12345 ||  1
    12345 ||  1
    When I run the query, the result is one row for this invoice 12345, but the quantity comes up as 4 instead of 1 because it is getting aggregated.
    What can I do so that it comes out as 1?
    I tried exception aggregation on 0INV_QTY, aggregating on BILL_NUM (i.e. invoice number), but got the same result.
    Please suggest.
    Thanks.
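    For illustration, here is a minimal plain-Python sketch of why the default summation shows 4, and how an exception-aggregation style "one value per invoice" calculation would show 1 (assuming, as in your example, that every row of the invoice repeats the same quantity):

      rows = [
          {"invoice": "12345", "inv_qty": 1},
          {"invoice": "12345", "inv_qty": 1},
          {"invoice": "12345", "inv_qty": 1},
          {"invoice": "12345", "inv_qty": 1},
      ]

      # Default key figure behaviour: summation over all rows in the drilldown.
      total = sum(r["inv_qty"] for r in rows)          # -> 4, what the query shows

      # Exception-aggregation style: first collapse to one value per reference
      # characteristic (the invoice), e.g. the maximum, then sum the results.
      per_invoice = {}
      for r in rows:
          per_invoice[r["invoice"]] = max(per_invoice.get(r["invoice"], 0), r["inv_qty"])
      desired = sum(per_invoice.values())              # -> 1, one value per invoice

      print(total, desired)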

    Hello,
    I am drilling down into it.
    If the data looks like this, what should the result be as per your logic?
    12345 || 1
    12345 || 1
    12345 || 3
    12345 || 2
    Waiting for your feedback.
    regards
    nitin bhatia

  • Query on Cube jumps to Query on ODS; Query on ODS takes a long time

    Hi All,
    Performance issue: a query on a cube jumps to a query on an ODS.
    The query on the ODS takes a long time (jump query).
    Specific to the ODS query: when I check the query on the ODS individually, it also takes a long time.
    The ODS actually contains quite a lot of data. Indexes are already maintained.
    I have also checked the RSRT Execute SQL and Debug options. The indexes are maintained perfectly.
    The order of objects in the ODS indexes matches the order of objects in the SQL statement in RSRT. In spite of that, it takes a long time.
    I have checked it both ways, as a jump query as well as individually.
    My question is: when the query jumps from the cube to the query on the ODS, how do I check the performance, and how is the query executed in the background when switching over to the second query? Moreover, a calculated key figure is used for jumping to the target query.
    How can the ODS query time be optimized, or its performance improved, when jumping from the query on the cube?
    can any body help?
    Rgds,
    C.V.

    What I understand is that you need to optimise the query jumping time. But this will be very small compared to the time taken by the query on the ODS.
    Ideally you should not be building a BEx query on the ODS, as this takes a long time. What you can do is try executing the BEx query on the ODS directly to find out where the issue lies. If that query itself takes a long time, there is not much you can do here.

  • Performance issue with query when generated from an ODS

    I am running a query built on an ODS. The runtime is very high. How do I improve the performance of the query?

    Hi Baruah,
    Steps:
    1. Build a secondary index.
    2. Divide the data into 2 ODSes (historical and present data), then build a MultiProvider and build the query on the MultiProvider (see the sketch below).
    3. Build the indexing at the table level (ODS table level).
    We cannot make performance much faster for ODSes, especially with huge data volumes.
    The above are just a few of the options.
    Hope that helps.
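    To illustrate point 2, here is a conceptual plain-Python sketch (invented data, not BW internals) of splitting the data into a historical and a current ODS and letting a MultiProvider-like layer skip any partition that a date restriction cannot hit:

      historical_ods = [{"period": "2006.01", "amount": 100.0}]
      current_ods    = [{"period": "2008.01", "amount": 250.0}]

      # Each partition declares which periods it can contain.
      partitions = [
          (historical_ods, lambda p: p < "2007.01"),
          (current_ods,    lambda p: p >= "2007.01"),
      ]

      def multiprovider_query(period_from, period_to):
          result = []
          for rows, covers in partitions:
              # Prune: skip a partition that cannot contain the requested range.
              if not (covers(period_from) or covers(period_to)):
                  continue
              result += [r for r in rows if period_from <= r["period"] <= period_to]
          return result

      print(multiprovider_query("2008.01", "2008.12"))   # touches only current_ods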
    Regards,
    Ravi Kanth

  • Not able to find the new ODS created in Query Designer

    Hi,
    I am trying to create a new query on a new ODS in Query Designer, but I am not able to find the ODS. The ODS does exist in the system when we check, however. Please advise what is going wrong.

    Hello Prakash,
    I know the solution.
    Go to RRMX -> Queries -> select New Query -> then click on InfoArea -> select your new ODS -> and click OK.
    You will be able to see the characteristics and key figures.
    Drag and drop them and click on execute.
    Reward if it is helpful.
    Thanks,
    Sonu

  • How to find out the queries generated for a particular ODS?

    Hi all,
    There are at least 10 queries created for a particular ODS, zyyyyy. I would like to know which queries were created for this ODS.
    I have used the where-used list of the ODS, but it did not display the queries created for that ODS.
    I have looked at the BEx queries for that ODS but cannot find any query created on it.
    Is there any way to find out which queries were created for a particular ODS?
    Thanks,
    haritha

    Hello Haritha,
    Please follow the path.
    Go to RSA1 -> Metadata Repository -> DataStore Objects (ODS) -> find your DSO (ODS) there and click on it. You will get all the details (queries, objects such as characteristics and key figures, update rules, etc.) related to that particular DSO (ODS). The same applies to a cube as well.
    Please assign points.

  • Hints on this model: Deciding on Cube/ODS & types of dimensions

    Hi,
    We have 4 ODSes which are fed data from 4 different flat files:
    ODS1 (loaded daily), key/data fields: field1, field2, field3, field4
    ODS2 (loaded twice per week), key/data fields: field1, field2, field3, field4
    ODS3 (loaded daily), key/data fields: field1, field2, field3, field4, field5
    ODS4 (loaded twice per month), key/data fields: field1, field2, field3, field4, field5, field6
    This is a shipping, delivery, finance reporting scenario.
    1. What considerations need to be made with respect to the differences in the key fields as the data is pushed up in the data flow?
    2. If the goal is to summarize the data in the cube so that the details are kept in monthly buckets, can you provide me with some steps for implementing it?
    3. What if, for performance reasons, we want to keep the summarized data in monthly buckets, but we need the capability to drill down to the details in the ODSes?
    Can you provide me with some steps for implementing that?
    Thanks

    I picked ODS4 as the main ODS because its key acts as the key for all the other ODSes.
    Example.
    ODS1: Key1, Key2, Key3, Key4, data1,data2,data3
    ODS2: Key1, Key2, Key3, Key4, Key5, data3,data4,data5,data6
    ODS3: Key1, Key2, Key3, Key4, Key5, Key6, data6, data7, data8.
    Main ODS (Consolidated ODS)
    ODS3: Key1, Key2, Key3, Key4, Key5, Key6, data6, data7, data8.
    In your update rules, only map the fields you need from each ODS; the other fields should be set to "no update" if the field is populated from another ODS. That is what I mean by respective fields from each ODS.
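    A minimal plain-Python sketch of that mapping idea (the keys and field names are invented for illustration): each source ODS only overwrites the fields it owns, and everything else behaves like "no update".

      consolidated = {}   # consolidated ODS: key -> record

      def update_from_source(key, fields):
          """Update rule per source: only the mapped fields are overwritten."""
          record = consolidated.setdefault(key, {})
          record.update(fields)        # fields not supplied here stay untouched

      update_from_source(("K1", "K2"), {"data1": 10, "data2": 20})   # from ODS1
      update_from_source(("K1", "K2"), {"data3": 30, "data4": 40})   # from ODS2

      print(consolidated)   # one meaningful record built up from both sources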
    Yes, all the loads should come into the ODS and get updated to the cube. I am not sure if your extractor supports delta. You need to think about the reporting requirements: should you report on the daily load even if the bi-weekly, weekly and bi-monthly loads have been loaded? One meaningful record will be created in the consolidated (main) ODS, and depending on how granular your reporting requirement is, you could put the keys of the ODS into your cube. If you have all the keys in your cube, then your cube is very granular and there is no need to do a jump query to the ODS.
    For the dimensions, just keep in mind that you should create as many dimensions as you can (max 13) instead of creating one big dimension. Put related characteristics in the same dimension where there is a one-to-one relationship.
    Please don't forget to say thank you by assigning points.
    Thanks.
    Wond

  • Query running for a long time

    Hi All,
    When I run the query in the Analyzer, it takes a long time. The query is built on a DSO.
    Can anyone give me inputs on why the query is taking so much time?
    Thanks in Advance
    Reddy

    Hi,
    Follow this thread to find out how to improve query performance on an ODS:
    ODS Query Performance  
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Hope this helps.
    Thanks,
    JituK

  • Query Performance and Result Set Query/Sub-Query

    Hi,
    I have an InfoSet with as many as 15 joins with different ODSes and master data. The ODSes are quite huge (20 million and 160 million records). It is taking a lot of time even for a few days of data, and I need to get at least 3 months of data in a reasonable amount of time.
    Could anyone please tell me whether a sub-query or result set query has to be written against the same InfoProvider (cube, InfoSet, etc.), or whether they can be used in queries on different InfoProviders?
    Please suggest. Thanks in advance.
    Regards
    Anil

    Hi Bhanu,
    I needed some help defining the precalculated value set, as I wasn't successful.
    Please suggest answers, if possible, to the questions below.
    1) Can I use filter criteria to restrict the value set for the characteristic when I define a query on an ODS? When I tried this it gave me errors as below:
    "System error in program CL_RSR_REQUEST and form EXECUTE_VTABLE:COB_PRO_GET_ALWAYS. Error when filling value set DELIVERY.
    Job cancelled after system exception ERROR_MESSAGE"
    2) Can I create a value set predefined on an InfoSet? I was not successful, as InfoSets have names such as "InfosetName_F000232" and I cannot find the characteristic in the Parameter tab for the value set in the Reporting Agent.
    3) How does the precalculated value set variable help if I am not using any filter criteria and it stores the list of all values for the variable from the ODS, which consists of 20 million records?
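    For what it's worth, here is a conceptual plain-Python sketch of the idea behind a precalculated value set (invented data; not the Reporting Agent API): the expensive selection is computed once, and the stored set of values is then reused as a cheap filter by later queries instead of repeating the big join or scan each time.

      # Expensive step, run once (e.g. overnight): derive the value set.
      big_ods = [{"delivery": f"D{i}", "late": i % 7 == 0} for i in range(1000)]
      precalculated_value_set = {r["delivery"] for r in big_ods if r["late"]}

      # Later queries just reuse the stored set as a restriction.
      def query(rows):
          return [r for r in rows if r["delivery"] in precalculated_value_set]

      print(len(precalculated_value_set), len(query(big_ods)))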
    Thanks for your help.
    Kind Regards
    Anil

  • Problem in Query change in APD

    Dear All,
    I have created an APD using one query, but according to business requirements I have modified the query and the transactional ODS. Now I am facing a problem adding the same query with the new changes to my existing APD.
    Is there any way to add the same or a new query to an existing APD?
    please help.
    Thanks & Regards,
    Omkar

    Try this:
    T-code: RSANWB
    1. Select the APD from the application server.
    2. Create.
    3. Application -> General.
    4. Continue.
    5. Select the data source (the one you want to transform).
    6. Select the data target and drag & drop it onto the work area.
    7. Select your target area: overwrite complete content, filter icon.
    8. Double-click on the transformation icon.
    9. Make the link between source query, filter and target.
    10. Double-click on the filter.
    11. Field selection for range values: All fields -> Transfer the fields.
    12. Double-click on the transformation icon: field assignment -> select the automatic assignment icon -> Save -> Continue -> Save.
