Performance-wise: CMOD or dividing the queries?

Dear Friends,
I have a performance related question.
We need to develop a report in which the sum of the top 5 materials by delivery quantity has to be displayed month-wise.
E.g. if in January materials M1, M2, M5, M6, M7 are the top 5 by delivery quantity, show the sum of their delivery quantities.
If in February materials M2, M3, M5, M8, M9 are the top 5 by delivery quantity, show the sum of their delivery quantities,
and so on through December.
I thought of using a Top N condition on delivery quantity in the BEx query (please correct me if I am wrong).
Problem: Top N will only work for one month in a single query. It cannot work across months (a different top 5 of materials per month).
Solution Proposed:
1. Make 12 different queries, one per month (each using Top N), and put the results together in one workbook.
2. Create 12 CMOD variables; the CMOD code picks up the top 5 materials for each month and fills the variable for the respective month. The respective columns are then restricted using these CMOD variables.
I would like your valuable suggestions on which of the above methods performs better (also considering maintenance and similar factors), and the reason why.
The approximate volume is 100,000 (1 lakh) records per year.
Kindly give your suggestions.
Thanks,
Guru

Hi
Interesting situation!
I think writing 12 queries is not a good idea, for the simple reason that you would have to run one query at a time (each with a different input), wouldn't you?
Creating variables for each of the months is the better idea. You can generate the values for the other 11 months from a single user input. Moreover, the user can enter any period: if, for example, the input is 11.2010, the customer exit can derive the values for the previous 11 months; likewise, if the user changes the input (say, to 08.2010), the system can easily calculate the previous 11 months, giving you the desired output.
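To make that concrete, here is a minimal, untested sketch of such a customer exit in include ZXRSRU01 (called from EXIT_SAPLRRS0_001). The variable names ZCALMON_M01 (an exit variable on 0CALMONTH) and ZCALMON_IN (the user-entry variable) are assumptions for illustration; the same pattern would be repeated for the other months, each variable then restricting its own column in the query:

    DATA: l_s_range  TYPE rsr_s_rangesid,     "row for E_T_RANGE
          l_s_var    TYPE rrs0_s_var_range,   "row of I_T_VAR_RANGE
          l_year(4)  TYPE n,
          l_month(2) TYPE n.

    CASE i_vnam.
      WHEN 'ZCALMON_M01'.              "derived variable: input month - 1
        IF i_step = 2.                 "step 2 = after user entry
          READ TABLE i_t_var_range INTO l_s_var
               WITH KEY vnam = 'ZCALMON_IN'.
          IF sy-subrc = 0.
            l_year  = l_s_var-low(4).  "0CALMONTH is YYYYMM
            l_month = l_s_var-low+4(2).
            IF l_month = '01'.         "wrap around the year boundary
              l_month = '12'.
              l_year  = l_year - 1.
            ELSE.
              l_month = l_month - 1.
            ENDIF.
            CLEAR l_s_range.
            l_s_range-sign = 'I'.
            l_s_range-opt  = 'EQ'.
            CONCATENATE l_year l_month INTO l_s_range-low.
            APPEND l_s_range TO e_t_range.
          ENDIF.
        ENDIF.
    ENDCASE.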
Cheers
Umesh

Similar Messages

  • Difference between temp table and CTE performance-wise?

    Hi Techies,
    Can anyone explain CTE vs. temp table performance-wise? Which is the better object to use when implementing DML operations?
    Thanks in advance.
    Regards
    Cham bee

    Welcome to the world of performance tuning in SQL Server! The standard answer to this kind of question is:
    It depends.
    A CTE is a logical construct which specifies the logical computation order for the query. The optimizer is free to recast the computation order in such a way that the intermediate result from the CTE never exists during the calculation. Take for instance this
    query:
    WITH aggr AS (
        SELECT account_no, SUM(amt) AS amt
        FROM   transactions
        GROUP  BY account_no
    )
    SELECT account_no, amt
    FROM   aggr
    WHERE  account_no BETWEEN 199 AND 399
    Transactions is a big table, but there is an index on account_no. In this example, the optimizer will use that index and only compute the total amount for the accounts in the range. If you were to make a temp table of the CTE, SQL Server would have no choice
    but to scan the entire table.
    But there are also situations when it is better to use a temp table. This is often a good strategy when the CTE appears multiple times in the query. The optimizer is not able to pick a plan where the CTE is computed once, so it may compute the CTE multiple times.
    (To muddle the waters further, the optimizers in some competing products have this capability.)
    Even if the CTE is only referred to once, it may help to materialise the CTE: the temp table has statistics, and those statistics may help the optimizer to compute a better plan for the rest of the query.
    For the case you have at hand, it's a little difficult to tell, because it is not clear to me whether the conditions are the same for points 1, 2 and 3 or whether they are different. But the second one, removing duplicates, can be quite difficult with a temp table,
    yet is fairly simple using a CTE with row_number().
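    For reference, the row_number() de-duplication pattern mentioned above usually looks something like this (the duplicate key and the trans_date tie-breaker column are invented for illustration):
    WITH numbered AS (
        SELECT *,
               row_number() OVER (PARTITION BY account_no, amt
                                  ORDER BY trans_date) AS rn
        FROM   transactions
    )
    -- deleting through the CTE removes all but the first row per group
    DELETE FROM numbered
    WHERE  rn > 1;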
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Does the DIMINFO affect query performance?

    Hello all,
    Can a well-defined USER_SDO_GEOM_METADATA.DIMINFO improve query performance?
    For all the tables in my system, the view USER_SDO_GEOM_METADATA looks like this:
    DIMINFO
    X; -2147483648; 2147483648; 5E-5
    Y; -2147483648; 2147483648; 5E-5
    Z; -2147483648; 2147483648; 5E-5
    Thank you all

    Hi,
    I think you'll have more luck in the Spatial forum:
    Spatial
    Regards
    Peter

  • BO4 - "The action cannot be performed" (WIS 30650) when trying to open some WebI docs

    We get the message "The action cannot be performed (WIS 30650)" when trying to open some WebI documents in BO 4.0.
    The documents were created brand new in BI 4.0.
    Some documents open fine.
    We get the message even when logged in as Administrator.
    Windows 2008 Server, Oracle 11.2.0.1.
    Any idea what causes this?
    Many Thanks

    Hi,
    Thanks for the response.
    It is a distributed environment: one server hosts the web application server (Tomcat), one server hosts the BOE components.
    We contacted BO support, did a right-click on the document / Modify, and got the error message you referred to.
    However, we also get this message when we right-click Modify on the existing Web Intelligence sample reports, or even try to open the ones which ship with the product and have never been modified.
    We checked that the file referred to exists in the Input FRS, and it does.
    We are logged in as Administrator, so it is not permission-related.
    This seems like a huge bug to me, that we can't even open the WebI sample reports which ship with the product.
    The servers look fine. I'm the only one using the system and can create and save reports fine; the problem is only opening or modifying them later, or opening some of the sample ones.
    Our FRS is on a networked location and uses a UNC path; I'm assuming this has no impact.
    Many Thanks

  • Aperture Conversion - My wife is converting from iPhoto to Aperture due to a large library (33,000 photos, 109 GB). Performance-wise, is it better to convert to an Aperture library and leave it on the 250 GB internal drive, or convert and store it externally?


    You are welcome.
    convert and store externally?
    What versions of iPhoto and Aperture is your wife using? With both iPhoto 9.3 or later and Aperture 3.3 or later, she could simply open her iPhoto library in Aperture and be done, since these versions use a unified library format.
    Aperture 3.3: Using a unified photo library with iPhoto and Aperture

  • Performance-wise data representation

    Hello there,
    I would like to start a discussion regarding a performance-wise choice of implementation.
    From a performance point of view, which of the following scenarios would be the better choice?
    1. Run a process on each and every record of a huge table, for example one with a million records.
    2. Or maintain another table, which might have about 800 fields(!), representing a map keyed on the ID values of the main huge table, and which must be filled based on the values in the main table as a pre-processing step.
    Then, instead of processing the huge table, simply query the map table, which can be indexed on the needed values.
    Which is the bigger cause of bad performance: a process run over many records, or the many records produced by such pre-processing?
    many thanks

    Thank you Billy for replying,
    Billy Verreynne wrote:
    Nor is performance something that one looks at after the design is done and the code written. Performance is a primary factor that needs to be considered with the h/w bought, the s/w installed and configured, the design, and every single line of code written.
    Yes, currently I am in the design phase, so I am trying to understand the major performance principles that might affect the software when dealing with huge amounts of data, whether pre-processing would be better, and similar implementation issues.
    Here is the case, logically:
    The process I mentioned in the post corresponds to a procedure that must be applied to data from a table and returns a certain value, calculated on the fly.
    Some of that processing may not be needed, so in order to avoid huge unnecessary operations I need to perform some kind of predicating or indexing based on certain values.
    What is the best practice for such scenarios, performance-wise?
    Thanks

  • We have many mapping options - which one is good performance-wise?


    Hi
    Different mapping techniques are available in XI: message mapping, XSLT mapping, Java mapping and ABAP mapping.
    • The Integration Repository includes a graphical mapping editor. It includes built-in functions for value transformations and queue and context handling. There is an interface for writing user-defined functions (Java) as well.
    • XSLT mappings can be imported into the Integration Repository; Java methods can be called from within the XSLT stylesheet. Advantages of this mapping are: open standard, portable, extensible via Java user-defined functions.
    • If the transformation is very complex, it may be easiest to leverage the power of Java for the mapping.
    • ABAP mapping programs can also be written to transform the message structures.
    Message Mapping
    SAP XI provides a graphical mapping tool that generates a Java mapping program to be called at runtime.
    • Graphically define mapping rules between source and target message types.
    • The queue-based model allows for handling of extremely large documents.
    • Drag-and-drop.
    • Generates internal Java code.
    • Built-in and user-defined functions (in Java).
    • Integrated testing tool.
    • N:M mapping is possible.
    Java Mapping
    Java mapping is usually preferred when the target structure is relatively complex and the transformation cannot be accomplished by simple graphical mapping.
    For example, consider a simple File->IDoc scenario where the source file is a simple XML file, whereas the target is an IDoc with more than one hierarchy level, e.g. FINSTA01. Content conversion in XI can only create a single-level hierarchy, so in this scenario a Java mapping comes in handy.
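    As a rough skeleton (the class name is invented and the body is only a pass-through placeholder, not a real mapping), a classic XI Java mapping implements com.sap.aii.mapping.api.StreamTransformation:
    import com.sap.aii.mapping.api.StreamTransformation;
    import com.sap.aii.mapping.api.StreamTransformationException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;

    public class SketchMapping implements StreamTransformation {
        private Map param;

        public void setParameter(Map param) {
            this.param = param;   // runtime parameters handed over by XI
        }

        public void execute(InputStream in, OutputStream out)
                throws StreamTransformationException {
            try {
                // Placeholder: copy the source payload through unchanged.
                // A real mapping would parse 'in' (e.g. with DOM) and write
                // the target IDoc XML structure to 'out'.
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (java.io.IOException e) {
                throw new StreamTransformationException("copy failed: " + e.getMessage());
            }
        }
    }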
    See these:
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10dd67dd-a42b-2a10-2785-91c40ee56c0b
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-i
    /people/thorsten.nordholmsbirk/blog/2006/08/10/using-jaxp-to-both-parse-and-emit-xml-in-xi-java-mapping-programs
    When to use Java mapping
    1) Java mappings are used when graphical mapping cannot help you.
    Advantages of Java mapping
    1) You can use Java APIs and classes in it.
    2) A file lookup or a DB lookup is possible.
    3) DOM is easier to use, with lots of classes to help you create nodes and elements.
    Java mapping can be used when you have complex mapping structures.
    ABAP Mapping
    ABAP mappings are mapping programs in ABAP Objects that customers can implement using the ABAP Workbench.
    An ABAP mapping comprises an ABAP class that implements the interface IF_MAPPING in the package SAI_MAPPING. The interface has a single method, EXECUTE, with a fixed signature.
    Applications decide themselves in the EXECUTE method how to import and change the source XML document. If you want to use the XSLT processor of SAP Web AS, you can use the ABAP Workbench to develop a stylesheet directly rather than using an ABAP mapping.
    In an ABAP mapping you have read access to message header fields. To do this, an object of type IF_MAPPING_PARAM is transferred to the EXECUTE method. The interface has constants for the names of the available parameters and a method GET, which returns the respective value for a parameter name. The constants are the same as in Java mappings, although the constant MAPPING_TRACE does not exist for ABAP mappings; instead, the trace object is transferred directly via the parameter TRACE of the method IF_MAPPING~EXECUTE.
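    As a hedged sketch (the class name is invented; the SOURCE and RESULT parameter names follow the interface described above, and the pass-through body is only a placeholder), such a class looks roughly like this:
    CLASS zcl_sketch_mapping DEFINITION PUBLIC CREATE PUBLIC.
      PUBLIC SECTION.
        INTERFACES if_mapping.
    ENDCLASS.

    CLASS zcl_sketch_mapping IMPLEMENTATION.
      METHOD if_mapping~execute.
        " Placeholder: pass the source XML (an XSTRING) through unchanged.
        " A real mapping would parse SOURCE, e.g. with the iXML library,
        " and build the target document into RESULT.
        result = source.
      ENDMETHOD.
    ENDCLASS.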
    For more details refer
    http://help.sap.com/saphelp_nw70/helpdata/EN/ba/e18b1a0fc14f1faf884ae50cece51b/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5c46ab90-0201-0010-42bd-9d0302591383
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e3ead790-0201-0010-64bb-9e4d67a466b4
    /people/sameer.shadab/blog/2005/09/29/testing-abap-mapping
    ABAP Mapping
    /people/udo.martens/blog/2006/08/23/comparing-performance-of-mapping-programs
    https://websmp101.sap-ag.de/~sapdownload/011000358700003082332004E/HowToABAPMapping.pdf
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    /people/r.eijpe/blog
    ABAP Mapping Vs Java Mapping.
    Re: Message Mapping of type ABAP Class not being shown
    Re: Performance of mappings (JAVA, XSLT, ABAP)
    XSLT Mapping
    XSLT stands for eXtensible Stylesheet Language Transformations. It is an XML-based language for transforming XML documents into other formats, on the basis of a set of well-defined rules.
    /people/sap.user72/blog/2005/03/15/using-xslt-mapping-in-a-ccbpm-scenario
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    The above-mentioned are the mapping types present in XI.
    When the requirement is critical and complicated we go for ABAP, Java or XSLT mapping; for simple mappings we go for graphical mapping.
    The selection of the mapping type also depends on the requirement and on your scenario.
    cheers

  • Processing in 2 internal tables -Performance wise better option

    Hi Experts,
    I have 2 internal tables.
    ITAB1 and ITAB2 are both sorted by PSPHI.
    ITAB1 has PSPHI, some more fields, INVOICE_DATE and AMT.
    ITAB2 has PSPHI, some more fields, and an amount.
    Both ITAB1 and ITAB2 will always have the same number of rows.
    I need to filter the data in ITAB2 based on the invoice date given on the selection screen. Since ITAB2 doesn't have an invoice date field, I am doing further processing to filter the records.
    I have thought of the processing logic below and wanted to know if there is a better option performance-wise.
    LOOP AT itab1 INTO wa WHERE invoice_date > p_date.  "itab1 has the invoice date; p_date = selection-screen date
      lv_index = sy-tabix.
      READ TABLE itab2 INDEX lv_index INTO wa2.
      IF sy-subrc = 0 AND wa2-psphi = wa-psphi.
        DELETE itab2 INDEX lv_index.
      ENDIF.
    ENDLOOP.

    Hi Madhu,
    My Requirement is as below could you please advice on this ?
    ITAB1
    PSPHI   INVOICE   INVOICE_DATE   AMT
    15245   INV1      02/2011        400
    15245   INV2      02/2012        430
    ITAB2
    PSPHI   PSNR    MATNR   AMT
    15245   PSNR1   X       430
    15245   PSNR2   Y       400
    When the user enters 02/2011 as the date on the selection screen,
    I want to delete the data from ITAB1 and ITAB2 for invoice dates greater than 02/2011.
    If I delete from ITAB1 for date > selection-screen date and then do
    LOOP AT itab1 INTO wa.
      DELETE itab2 WHERE psphi = wa-psphi.
    ENDLOOP.
    it will delete both rows in the above example, because the common field PSPHI can occur multiple times.
    Can you advise?
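    For what it's worth, one untested sketch: since both tables are stated to always have the same number of rows in the same order, you could loop backwards by index and delete the same row position from both tables, so the indexes stay aligned (p_date again stands for the selection-screen date):
    DATA: lv_idx TYPE sy-tabix.

    lv_idx = lines( itab1 ).
    WHILE lv_idx >= 1.
      READ TABLE itab1 INDEX lv_idx INTO wa.
      IF sy-subrc = 0 AND wa-invoice_date > p_date.
        DELETE itab1 INDEX lv_idx.   "same row position in both tables,
        DELETE itab2 INDEX lv_idx.   "so delete the pair together
      ENDIF.
      lv_idx = lv_idx - 1.           "going backwards keeps lower indexes valid
    ENDWHILE.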

  • Performance-wise, is a select statement faster on a view or on a table?

    Performance-wise, is a complex (multi-join) select statement faster on a view or on a table?

    Hi,
    the purpose of a view is not to provide performance benefits; it's basically a way to better structure database code and data access. A view is nothing but a stored query. When the optimizer sees references to a view in a query, it tries to merge it (i.e. replace the view with its definition), but in some cases it may be unable to do so (in the presence of analytic functions, the rownum pseudocolumn, etc.) -- in such cases views can lead to a performance degradation.
    If you are interested in performance, what you need is a materialized view, which is basically a table built from a query, but then you need to decide how you would refresh it. Please refer to the documentation for details.
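    A minimal Oracle sketch of that idea (the table, columns and refresh mode are placeholders; pick the refresh strategy from the documentation):
    CREATE MATERIALIZED VIEW mv_account_totals
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
      SELECT account_no, SUM(amt) AS total_amt
      FROM   transactions
      GROUP  BY account_no;

    -- refresh manually or from a scheduled job (EXEC is SQL*Plus shorthand):
    EXEC DBMS_MVIEW.REFRESH('MV_ACCOUNT_TOTALS');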
    Best regards,
    Nikolay

  • Divide the vendor account (BSEG-KOART = 'K') into 90% and 10% in MIRO

    I want to divide the vendor account (BSEG-KOART = 'K') into 90% and 10% in MIRO.
    I entered the PO and selected the tax; the balance amount is adjusted.
    When I click on the Simulate button, I want to divide the vendor account (BSEG-KOART = 'K') into 90% and 10%.
    The 90% should be shown on the vendor account and the 10% should be shown on a G/L account. Is there any exit or BAdI to perform this calculation?
    This operation should be done before posting.

    Hi,
    I guess your question is more related to MIRO than to the real estate business. Please post your question in the forum
    ERP - Logistics Material Management (MM).
    Regards, Franz

  • Map the user exit variables with the queries/cubes using them

    Hello Friends,
    What post-Unicode-conversion tests can you perform on the front end / existing queries?
    Is there any added advantage for queries due to Unicode?
    Is testing that the variables work enough? Is there any table that maps the user exit variables to the queries/cubes using them?
    Thanks
    Tanya

    Guys, any clue about this? Answers are appreciated.
    Thanks
    Tanya

  • Package - performance-wise is it correct?

    Hi All
    I have created a package which runs as a concurrent programme to populate 9 tables. The package includes a separate procedure to populate each of the tables, as below. I would like to know whether the method below is recommended performance-wise, or whether there is a better approach to achieve this?
    Thanks in advance
    regards
    anna
    procedure populate_table1
    is
    begin
      for my_cursor_emp in crs_emp
      loop
        insert into employees
          (emp_no
          ,first_name
          ,last_name)
        values
          (my_cursor_emp.emp_no
          ,my_cursor_emp.first_name
          ,my_cursor_emp.last_name);
      end loop;
    end populate_table1;
    There are a lot more columns in the actual procedure. The package continues with
    procedure 2
    procedure 3
    ...

    Annas wrote:
    I have created a package which runs as a concurrent programme to populate 9 tables. The package includes a separate procedure to populate each of the tables as below. I would like to know whether the below method is recommended performance-wise or whether there is a better approach to achieve this?
    The recommended approach would be to get rid of the cursor loops:
    INSERT INTO source_table
    SELECT <columns>
    FROM   YOUR_QUERY;
    This assumes you actually NEED to populate 9 tables like you say; I find that suspect in and of itself. Can you explain the end goal here? Are you populating temporary tables, doing a data migration, something else?

  • How to analyse query performance using RSRT and the query results

    Hi,
    I want to see the performance of the query in each landscape. I have executed my query using transaction RSRT. How can we analyse whether the query requires aggregates or not?
    I have taken the number of records in the cube, and I also saw the number of records in the aggregates, but I did not get a clear picture.
    I selected the options Aggregates, Statistics and Do not use cache. The query got executed and displayed a report, but I am unable to analyse the performance.
    Can anyone please guide me through the steps? Which factors do we need to consider from the performance point of view?
    Points will be rewarded.
    Thanks in advance for all your help.
    Vamsi

    Hi,
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    Use T-codes ST03 or ST03N: go to transaction ST03 > switch to expert mode > from the left-hand menu, under the system load history and distribution for a particular day, check the query execution time.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try                                        
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end vs. the number of records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.                                        
    3. The plus/minus signs are the valuation of the aggregate design and usage. "+++++" means the aggregate is potentially very useful: its compression is good and it is accessed often, so performance is good. "-----" means it is just overhead: the compression ratio is not good, access is poor, and the aggregate can potentially be deleted. The more plus signs, the better the evaluation of the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation.
    Refer.                                        
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm                                   
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether it hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work. By implementing the BW Statistics Business Content (you need to install it and feed it data), you get ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm                                   
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me                         
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2                              
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c                    
    Performance of BW InfoCubes:
    Go to SE38 and run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    You can also go to T-code DB20, which gives you performance-related information on partitions, databases, schemas, buffer pools, tablespaces, etc.
    Thanks,
    JituK

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migrating queries from 3.5 to 7.0.
    I executed a query in 3.5 with some variable values and it takes a fraction of a second to display the output. The same migrated query with the same variable entries takes a very long time and gives a time-out error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes very long when more selection is done.
    I checked for notes but did not find a specific note for this particular scenario, only notes for general query performance improvement.
    I want to know why the same query takes a long time and gives a time-out error only in 7.0. Please suggest notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or in the Java runtime (probably rendering), you should try running it from RSRT, once as Java Web and once as ABAP Web.
    If the problem is only with Java Web, take the URL and add &profiling=X at the end.
    After the query execution you can use the statistics, which will be shown at the top of the page.
    In my experience, the problem is usually in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; that can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
    Tomer.

  • Performance-wise, which is best: extending the Thread class or implementing Runnable?

    Hi,
    Which one is better performance-wise: extending the Thread class or implementing the Runnable interface?
    What are the major differences between them, and which one is best in which case?

    Which one is better performance-wise: extending Thread or implementing Runnable?
    Which kind of performance? Do you worry about thread creation time, or about execution time?
    If the latter, then don't: there is no effect on the code being executed.
    If the former (thread creation), then browse the API Javadoc for Executor and ExecutorService, and the other execution-related classes in the same package, to learn about the various threading/execution models.
    If you worry, more generally, about throughput (which would be a better concern), it is not impacted by whether you have implemented your code in a Runnable implementation class or a Thread subclass.
    What are the major differences between them, and which one is best in which case?
    Runnable is almost always better design-wise:
    - It will eventually be executed in a thread, but it leaves you the flexibility to choose which thread (the current one, another thread, one from a pool, ...). In particular, you should read about Executor and ExecutorService as mentioned above; if you happen to actually have a performance problem, you can change the thread-creation code with little impact on the code being executed in the threads.
    - It is an interface, and leaves you free to extend another class. This is especially useful for the Command pattern.
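    To illustrate that flexibility, a small self-contained sketch (class and task names invented): the same Runnable runs unchanged on a dedicated thread or on a pool; only the execution code differs:
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class RunnableDemo {
        public static void main(String[] args) throws InterruptedException {
            // The task is a plain Runnable: it knows nothing about threads.
            Runnable task = new Runnable() {
                public void run() {
                    System.out.println("running on "
                            + Thread.currentThread().getName());
                }
            };

            // Option 1: a dedicated thread per task.
            new Thread(task).start();

            // Option 2: the same task on a pool; the task code is untouched.
            ExecutorService pool = Executors.newFixedThreadPool(2);
            pool.submit(task);
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }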
