Query Performance with Unit Conversion

Hi Experts,
My customer has asked me to improve the runtime of some queries.
I found a problem in one query related to unit conversion: I ran the workbook statistics and saw that the time is concentrated in the data conversion step.
I'm querying the whole year, and it takes around 20 minutes to return a result, which is far too long. The only expensive part of the query is the unit conversion.
How can I improve the performance? What is the checklist in this case?
Thanks for your help.
Jose

Hi Jose,
You might not be able to reduce the unit conversion time itself, so first apply the general query performance improvement techniques, e.g. caching the query results.
There is one thing that can help, though: if end users only ever run the report in a single unit, e.g. the user always executes the report in USD while the source currency differs from USD, you can create another data source and perform the conversion at data load time. In the new data source all data is then available in the required currency, no conversion happens at runtime, and query performance improves drastically (a rough sketch follows below).
The above solution is not feasible, however, if there are many currencies and the report needs to be run in multiple currencies frequently.
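As an illustration only, here is a minimal plain-SQL sketch of the load-time precalculation idea (the sales and fx_rates tables and all column names are hypothetical; in BW the equivalent logic would live in a transformation routine):
-- Materialize the converted amounts once, during load, so the query
-- layer never converts at runtime.
CREATE TABLE sales_usd AS
SELECT s.doc_id,
       s.calday,
       s.amount * fx.rate_to_usd AS amount_usd  -- converted during load
FROM   sales s
JOIN   fx_rates fx
  ON   fx.from_currency = s.currency
 AND   fx.valid_on      = s.calday;
-- Queries against sales_usd read amount_usd directly, with no runtime conversion.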
Regards,
Durgesh.

Similar Messages

  • Query Performance with Exception aggregation

    Hello,
    My query key figures use exception aggregation at the order-line level, as per the requirement.
    Currently the cube holds 5M records; when we run the query, it runs for more than 30 minutes.
    We can't remove the exception aggregation.
    The cube is already modeled correctly, and we don't want to use the cache.
    Can anybody please advise whether there is a better approach to improve query performance with exception aggregation?
    Thanks

    Hi,
    We had the same problem and raised an OSS ticket. The reply pointed us to note 1257455, which covers all the ways of improving performance in such cases. I guess there is nothing else to do but to precalculate this exception-aggregated formula in the data model, via transformations or ABAP (a rough sketch of the idea follows below).
    By the way, the cache cannot help you in this case, since exception aggregation is calculated after cache retrieval.
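    As an illustration only (table and column names are hypothetical), an exception aggregation such as "maximum per order line, then summed" can be precalculated at load time roughly like this:
    -- Precalculate the exception aggregation ("MAX per order line") once,
    -- at load time, instead of at every query execution.
    CREATE TABLE order_line_agg AS
    SELECT order_no,
           order_line,
           MAX(quantity) AS max_qty  -- the exception aggregation
    FROM order_items
    GROUP BY order_no, order_line;
    -- The query can now use plain SUM(max_qty) with standard aggregation.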
    Hope this helps,
    Sunil

  • Disappointing query performance with object-relational storage

    Hello,
    after some frustrating days trying to improve query performance on an xmltype table, I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, and tried the examples from the forum thread "Setting Attribute SQLInline to false for Out-of-Line Storage" (with the same problems), and I still have no clue. I have prepared a stripped-down example of my schema which shows the same problems as the real one. I'm using 10.2.0.4.0:
    SQL> select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    You can find the script at http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error). The results are at http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if query rewrite does not work even with this simple schema, how can Oracle XML DB be feasible for more complex structures?). I must have made a mistake somewhere; hopefully someone can spot it.
    Thanks in advance.
    --Swen
    Edited by: user636644 on Aug 30, 2008 3:55 PM
    Edited by: user636644 on Aug 30, 2008 4:12 PM

    Marc,
    thanks, I did not know that it is possible to use "varray store as table" for the reference tables. I have tried your example. I can create the nested table, the scope and the indexes, but I get a different result: a full table scan on t_element, whereas with the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table.
    I have tried the same with an xmltype table (drop table t_element; create table t_element of xmltype ...) with the same result. My script ... is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query:
    select rt.object_value
    from t_element rt
    where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
    Execution Plan
    Plan hash value: 4104484998
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
    | 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 3 | FILTER | | | | | |
    | 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
    | 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("NESTED_TABLE_ID"=:B1)
    3 - filter( EXISTS (SELECT /*+ ???)
    8 - access("NESTED_TABLE_ID"=:B1)
    9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
    SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-insta
    nce" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties
    /><read-contents/></privilege>'))=1)
    10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
    11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
    filter("SYS_XDBBODY$"='99' AND "NAME"='an27')
    Edited by: user636644 on Sep 1, 2008 9:56 PM

  • How can we improve query performance without indexes?

    Hello Experts,
    I have a problem with a table (calc) which contains 3 crore (30 million) records and has no index; a view (View_A) has been created on that table.
    The problem shows up when I use the view in the query below:
    SELECT COUNT(*)
    FROM Table_A
      INNER JOIN Table_B ON (Table_A.a = Table_B.b)
      LEFT OUTER JOIN View_A ON (Table_A.a = View_A.a)
      LEFT OUTER JOIN View_B ON (Table_A.a = View_B.a)
    In the above query View_A is causing the problem; View_A is created on the calc table. One more thing: when I execute a select statement on the view alone, it runs fine.
    Without View_A the query fetches data fine, and the table's statistics are up to date. When I look at the cost plan, the scan alone accounts for 90% of the cost.
    Can any help me please?.
    Thank you all.
    Regards,
    Jason.

    Jason,
    Not sure what you are trying to do, but outer joins are bad for performance, so try to avoid them.
    You also say that you have a view on a calc table. What are you calculating? Are you perhaps using user-defined functions?
    Regards,
    Nico

  • Query performance with filters

    Hi there,
    I've noticed that when I run a query in Answers, if the query has a filter which is not in the displayed columns, the query runs very slowly. However, if I run the same query WITH the filtering column in the displayed columns, the query will return the results almost immediately.
    Take the example of a sales report. If I run a query of [Store Number] vs. [Sales Amount] and ctrl-click filter it with the [Region] dimension column equal to 'North America', the query will take about 5 to 10 minutes to run. However, if I include the [Region] column in the displayed columns (i.e. [Region], [Store Number] vs. [Sales Amount]) or in the "Excluded" columns in Answers, then the query will take less than a minute to run.
    I am using Oracle BI to connect to a MS Analysis Services cube by the way.
    Any ideas or suggestions on how to improve the performance? I don't want to include the filtering columns in the select query because when users use the dashboard filters, they just want to filter the results by different dimension values instead of seeing them in the report.
    Thanks.

    Thanks.
    However, when I run a similar query in the backend (MS Analysis Services), the performance is very good. Only when I run the query through Oracle BI does the performance suffer. I know that it has something to do with the way Oracle BI constructs the query it sends to Analysis Services.
    The main thing about my issue is that in Answers, queries with the filtering columns in both the select and where clauses run much faster than queries with the filtering columns ONLY in the where clause. Why is that, and how can I speed it up?

  • Poor query performance with BETWEEN

    I'm using Oracle Reports 6i.
    I needed to add Date range parameters (Starting and Ending dates) to a report. I used lexicals in the Where condition to handle the logic.
    If no dates given,
    Start_LEX := '/**/' and
    End_LEX := '/**/'
    If Start_date given,
    Start_LEX := 'AND t1.date >= :Start_date'
    If End_date given,
    End_LEX := 'AND t1.date <= :End_date'
    When I run the report with no dates or only one of the dates, it finishes in 3 to 8 seconds.
    But when I supply both dates, it takes > 5 minutes.
    So I did the following
    If both dates are given and Start_date = End date,
    Start_LEX := 'AND t1.date = :Start_date'
    End_LEX := '/**/'
    This got the response back to the 3 - 8 second range.
    I then tried this
    if both dates are given and Start_date != End date,
    Start_LEX := 'AND t1.date BETWEEN :Start_date AND :End_date'
    End_LEX := '/**/'
    This didn't help. The response was still in the 5+ minutes range.
    If I run the query outside of Oracle Reports, in PL/SQL Developer or SQLplus, it returns the same data in 3 - 8 seconds in all cases.
    Does anyone know what is going on in Oracle Reports when a date is compared with two values either separately or with a BETWEEN? Why does the query take approx. 60 times as long to execute?

    Hi,
    Observe the access plan first, both when using BETWEEN and when using <= and >=.
    Try applying NVL logic when forming the lexical parameters, e.g. as sketched below.
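    As an illustration of the NVL approach (the table and date column are hypothetical; the post only shows the alias t1), a single static predicate replaces the lexicals, so the query text never changes:
    -- One static WHERE clause instead of lexical parameters: each bound date
    -- is optional, because NVL collapses a NULL bind to the column itself.
    SELECT t1.*
    FROM orders t1  -- hypothetical table
    WHERE t1.order_date >= NVL(:Start_date, t1.order_date)
    AND t1.order_date <= NVL(:End_date, t1.order_date);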
    Adinath Kamode

  • Query Performance with and without cache

    Hi Experts
    I have a query that takes 50 seconds to execute without any caching or precalculation.
    Once I have run the query in the Portal, any subsequent execution takes about 8 seconds.
    I assumed that this was to do with the cache, so I went into RSRT and deleted the main memory cache and the BLOB cache, where my queries seemed to be.
    I ran the query again and it took 8 seconds.
    Does the query cache somewhere else, maybe on the portal or in the user's local cache? Does anyone have any idea why the reports are still fast even though the cache has been deleted?
    Forum points always awarded for helpful answers!!
    Many thanks!
    Dave

    Hi,
    Cached data automatically becomes invalid whenever data in the InfoCube is loaded or purged, and whenever a query is changed or regenerated. Once cached data becomes invalid, the system reverts to the fact table or associated aggregate to pull data for the query. You can see the cache settings for all queries in your system by using transaction SE16 to view table RSRREPDIR; the CACHEMODE field shows the setting of each individual query, and the numbers in this field correspond to the cache mode settings above (a plain-SQL peek at this table is sketched below).
    To set the cache mode on the InfoCube, follow the path Business Information Warehouse Implementation Guide (IMG) > Reporting-Relevant Settings > General Reporting Settings > Global Cache Settings, or use transaction SPRO. Setting the cache mode at the InfoCube level establishes a default for each query created from that specific InfoCube.
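    If you have direct database access, the same check can be sketched in plain SQL (normally you would simply browse RSRREPDIR in SE16; COMPID as the query's technical name is an assumption here, CACHEMODE is the field mentioned above):
    -- Sketch: list the cache mode per query straight from RSRREPDIR.
    SELECT compid,    -- query technical name (assumed field name)
           cachemode  -- cache mode setting for the query
    FROM rsrrepdir
    WHERE compid LIKE 'Z%';  -- hypothetical naming convention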

  • Query Performance with/without PK Index

    Hi!
    Please take a look at these queries and tell me why their performances are so extremely different!
    1.
    #(Many NULL values in this column)
    SELECT DISTINCT column_name
    FROM table_name
    WHERE primary_key_index_name IN (...long list...)
    AND column_name IS NOT NULL;
    --> 1 row, 120 msec
    2.
    #(Only the order of the predicates altered:)
    SELECT DISTINCT column_name
    FROM table_name
    WHERE column_name IS NOT NULL
    AND primary_key_index_name IN (...long list...);
    --> 1 row, 2 sec (nearly 20 times slower!)
    3.
    #(No NOT NULL predicate)
    SELECT DISTINCT column_name
    FROM table_name
    WHERE primary_key_index_name IN (...long list...);
    --> 1 row, 2 sec, just like no. 2!
    Can anyone explain?
    TIA! Dominic

    As mentioned, you really should create explain plans for all 3 queries (e.g. as sketched below). It could be that the first query loaded all the blocks into the buffer cache, so when you ran the 2nd query, the data it needed was already in memory.
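    A quick sketch of how to generate such a plan in SQL*Plus (using the first query; the short IN list is a stand-in for the long one):
    -- Generate and display the execution plan for query 1.
    EXPLAIN PLAN FOR
    SELECT DISTINCT column_name
    FROM table_name
    WHERE primary_key_index_name IN (1, 2, 3)  -- stand-in for the long list
    AND column_name IS NOT NULL;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);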

  • Query performance with %

    Hi,
    I have a system running Oracle 11gR2 on 64-bit Windows, with Oracle Text.
    90% of the data is in Hebrew and the rest is in English.
    We get great performance when running a query with % at the end of the word, like word%.
    But when we issue a search with % at the beginning of the word, like %word, the performance is extremely bad.
    I have created a test case:
    -- CREATE TABLE
    CREATE TABLE news (pkey NUMBER, lang VARCHAR2(2), short_content CLOB);
    -- INSERT DATA (&1 is a SQL*Plus substitution variable holding the content)
    INSERT INTO news VALUES (myseq.NEXTVAL, 'iw', '&1');
    -- The next step is to configure the base lexers
    BEGIN
      -- Hebrew
      ctx_ddl.create_preference('hebrew_lexer', 'basic_lexer');
      -- English
      ctx_ddl.create_preference('english_lexer', 'basic_lexer');
      ctx_ddl.set_attribute('english_lexer', 'index_themes', 'yes');
      ctx_ddl.set_attribute('english_lexer', 'theme_language', 'english');
    END;
    /
    -- CREATE THE MULTI_LEXER
    -- Create the multi-lexer preference, make the Hebrew lexer the default
    -- using CTX_DDL.ADD_SUB_LEXER, and add English as a sub-lexer:
    BEGIN
      ctx_ddl.create_preference('global_lexer', 'multi_lexer');
      ctx_ddl.add_sub_lexer('global_lexer', 'default', 'hebrew_lexer');
      ctx_ddl.add_sub_lexer('global_lexer', 'english', 'english_lexer', 'eng');
    END;
    /
    -- CREATE THE WORDLIST
    BEGIN
      -- exec ctx_ddl.drop_preference('my_wordlist');
      ctx_ddl.create_preference('my_wordlist', 'basic_wordlist');
      ctx_ddl.set_attribute('my_wordlist', 'stemmer', 'auto');
      ctx_ddl.set_attribute('my_wordlist', 'SUBSTRING_INDEX', 'YES');
    END;
    /
    -- CREATE THE INDEX
    -- drop index search_news;
    CREATE INDEX search_news ON news (short_content)
    INDEXTYPE IS ctxsys.context
    PARAMETERS ('lexer     global_lexer
                 language column lang
                 wordlist  my_wordlist');
    Still the performance is bad.
    I know I am missing something here.
    I'd appreciate any help.

    That's expected. Internally Oracle Text has a list of words (the $I table) on which there is an index (the $X index).
    If you use a leading wildcard, then the $X index cannot be used and Oracle Text has to do a full-table scan of the $I table.
    If you MUST have leading wildcards, you should use the SUBSTRING_INDEX wordlist preference when creating the index. That creates an extra ($P) table which allows Oracle Text to resolve leading wildcards without resorting to a full table scan.
    Be warned that your index creation will take considerably longer and use a lot more space with this option in place. Many customers prefer to disallow leading wildcards from their search interface.
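    Since the test case above already sets SUBSTRING_INDEX to YES, it is worth verifying that the $P table was actually created for the index (a quick sketch; DR$SEARCH_NEWS$P follows Oracle Text's DR$<index_name>$P naming for the SEARCH_NEWS index above):
    -- Verify that the substring ($P) table exists for index SEARCH_NEWS.
    SELECT table_name
    FROM user_tables
    WHERE table_name = 'DR$SEARCH_NEWS$P';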

  • Query performance due to Unit Conversion

    Hello,
    I have the requirement to improve the performance of some queries with unit conversion. Could someone point me to how-to guides or tips to solve this issue?
    Thanks in advance
    KmerSoft

    Hi
    Thanks for the reply. After a lot more googling, it turns out this is a general Oracle problem and is not solely related to use of the GEOMETRY column. It seems that sometimes the Oracle optimiser makes an arbitrary decision to do a bitmap conversion, and no amount of hints will get it to change its mind!
    One person reported a similarly negative change after table statistics collection had run.
    Why changing the columns being retrieved should change the execution path, I do not know.
    We have a numeric primary key which is always set to a positive value. When I added "AND primary_key_column > 0" (a pretty pointless clause), the optimiser changed the way it works and we got it running fast again; see the sketch below.
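    As an illustration of the workaround (the table, geometry column and filter are hypothetical):
    -- The logically redundant predicate on the positive numeric primary key
    -- nudged the optimiser away from the bitmap-conversion plan.
    SELECT t.pk_id, t.geometry
    FROM my_spatial_table t
    WHERE t.region = 'NORTH'  -- the real filter
    AND t.pk_id > 0;          -- "pretty pointless", but it changes the plan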
    Chris

  • Report performance with Hierarchies

    Hi
    How can we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the volume of data is very big.
    Thanks
    P G

    Hi,
    Check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than one that receives its result set from database access.
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° when you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical: the compressed data can no longer be deleted from the InfoCube using its request IDs, so, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° By using partitioning you can split up the whole dataset for an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube; a generic sketch follows below.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
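    As a generic illustration only (plain Oracle SQL with hypothetical names; BW generates its own partitioning, typically on 0CALMONTH or 0FISCPER), range partitioning of a fact table might look like this:
    -- A fact table partitioned by calendar month: reporting on a month range
    -- touches only the matching partitions, and old data can be dropped
    -- partition by partition.
    CREATE TABLE sales_fact (
      calmonth NUMBER(6),
      material VARCHAR2(18),
      amount   NUMBER
    )
    PARTITION BY RANGE (calmonth) (
      PARTITION p2008h1 VALUES LESS THAN (200807),
      PARTITION p2008h2 VALUES LESS THAN (200901),
      PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );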
    Hope it helps!
    Thank you,
    dst

  • Unit Conversion in Query Designer

    Hi All,
    In one of my reports, I have to show the gross weight in a column against the material, which I am reading from the material master.
    The weight is stored in two different units, KG and G.
    But I have to show the weight in KG in the end.
    Does anybody have an idea how to perform unit conversion at query level, i.e. in the Query Designer?
    Thanks.

    Rakesh, that is exactly what I am doing: reading the weight value from the material master through a formula variable using a replacement path.
    When I create a key figure like this, there is a 'Conversion' tab, in which the 'Unit conversion' settings are found.
    There are 2 options there:
    1. Conversion type - this is a drop-down, but with no values
    2. Target Unit - where I selected KG.
    But the output remains the same, i.e. in KG.
    Am I missing something?

  • Problem with unit in Query

    Hello Experts,
    We have encountered a problem with units in our query.
    Scenario:
    We use units like PC and SET in the query.
    In table T006 those units have the value 0 in the field ANDEC, so the values are rounded up or down in the query.
    Our problem is that some queries need the rounding up and down, but others don't.
    Where and what should we change to solve the problem?
    thanks & Best regards

    Hi,
    If you want to control the rounding during unit conversion, you can do it from SPRO.
    In the BW system, go to SPRO -> SAP NetWeaver -> General Settings -> Check Units of Measurement -> Units of Measurement -> choose the unit corresponding to 'KM' -> click the 'Details' button in the toolbar.
    Look at the field 'Decimal Pl rounding'. Change this to 3, or whatever value you choose, and see if it works.
    In the path above I have taken KM as an example; you need to select the required unit.
    Let us know if this solves your problem. I am not sure whether this can be done for individual queries; as mentioned above, it is controlled per unit of measure.
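    If you have database access, a quick way to see the current settings is to look at T006 directly (a sketch; MSEHI as the unit-of-measure key is an assumption here, ANDEC is the field mentioned in the question):
    -- Check the decimal-places/rounding setting for the units in question.
    SELECT msehi,  -- unit of measure key (assumed field name)
           andec   -- number of decimal places used for rounding
    FROM t006
    WHERE msehi IN ('PC', 'SET');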
    Regards,
    Rk.

  • Unit Conversion in InfoSet Query

    Hello Experts
    I have created a query on an InfoSet in which I want to show the quantity field with the unit conversion factor, but I am not able to do so.
    It gives an error message while saving the query:
    0MATERIAL is not a valid characteristic for InfoProvider ZINFOSET
    Diagnosis
    Customer enhancement RSR00002 or the implementation of BAdI RSR_OLAP_BADI delivers 0MATERIAL as the characteristic to be calculated. This is, however, not a valid key figure for InfoProvider ZIS_BILL.
    System Response
    The information from 0MATERIAL is ignored.
    Procedure
    Check the exit.
    Procedure for System Administration
    Please help me out
    Thanks
    Neha

    Hi,
    You can try the unit conversion planning function: create a planning function and add it behind a button.
    You can refer to the link below:
    http://help.sap.com/saphelp_nw70/helpdata/en/44/21643cedf8648ee10000000a1553f7/content.htm
    I am not sure whether it will solve your problem, as it also depends on the source and target key figures used.

  • Unit Conversion in the Query

    Hello Experts.
    I have a situation where the units of quantity need to be converted at query level. For example, if the value of the key figure is stored in litres in the cube, then at query execution time this value should be converted to gallons.
    I cannot use the following options in this case:
    1) The solution for an alternate UOM using the corresponding How-to guide, as the conversion from litres to gallons is not maintained at the material master level in my case.
    2) The unit conversion in the Conversion tab, because gallons won't be the only target unit in my report; it will depend on the type of material selected by the user.
    Please let me know about the options in this case. Any sort of help is appreciated and points will be duly rewarded.
    Thanks,
    Rishi

    Hi Prasad,
    For either of those approaches to work, there has to be a single target unit, which is not the case in my situation. I want only specific material types to be displayed in gallons; if the user selects a different material type, then the target unit will be different.
    Are there any other possible solutions for this one?
    Thanks,
    Rishi
