MultiCube Queries - Inventory

I have built a MultiCube which includes the daily load inventory cube and a snapshot inventory cube.  The idea is to build a workbook which will include a daily stock overview query and a monthly stock overview query.
In order to show the daily load as part of this workbook, I have had to add the 0CALDAY time characteristic.  In the monthly stock overview we want to select by fiscal period (a time dimension we have included in the MultiCube). Is this feasible? If so, how, and is there any impact because we have 0CALDAY granularity? Thanks

Hi Niten,
In a workbook you only have to create one query on the daily key figures and one on the period key figures.
You will not have any problems, because the key figures are different and the queries use different time selections.
Just remember not to put the time characteristic in the drilldown, because in that case you will not be able to match the values directly.
Restricted key figures are useful only when all the data is in a single query, not for a workbook where different selections belong to different queries. In the previous topic I didn't consider the workbook specifics, sorry.
Ciao.
Riccardo.

Similar Messages

  • How to improve query performance built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips which will help you improve your query performance:
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries; for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
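    Point 15 above (do restrictions before calculations) is a general principle; a minimal sketch of the same idea in plain Java, where the names are illustrative and this is not BEx or ABAP code:

    ```java
    import java.util.List;

    public class RestrictFirstSketch {
        record Row(String plant, double qty) {}

        // Restrict first, then calculate: the aggregation only touches rows
        // that survive the filter, instead of computing over everything.
        static double totalForPlant(List<Row> rows, String plant) {
            return rows.stream()
                       .filter(r -> r.plant().equals(plant)) // restriction first
                       .mapToDouble(Row::qty)                // calculation on the survivors
                       .sum();
        }
    }
    ```

    Filtering first means the calculation only touches the rows that survive the restriction, which is the same reason the query engine benefits from restrictions being applied before calculated key figures.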

  • Query takes long time on multiprovider

    Hi,
    When I execute a query on the MultiProvider, it takes a very long time and doesn't show any results; it just keeps processing. I have executed the report for only one day, but it still doesn't show any result. When I execute it on the cube, however, it runs quickly and shows the result.
    Actually, I added one more cube to the MultiProvider and then transported the MultiProvider to QA and PRD. The transport went through successfully. After this I am unable to execute reports on that MultiProvider. What might be the cause? Your help is appreciated.
    Thanks
    Annie

    Hi Annie.......
    Checklist for the performance of a query, from a doc:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries; for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Also check this.........Recommendations for Modeling MultiProviders
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/5617d903f03e2be10000000a1553f6/frameset.htm
    Hope this helps......
    Regards,
    Debjani......

  • The InfoCube contains non-cumulative values

    Hi,
    While creating a MultiCube for inventory on two cubes, it asks the following: "The InfoCube contains non-cumulative values. A validity table is created for these non-cumulative values, in which the time interval is stored for which the non-cumulative values are valid.
    The validity table automatically contains the "most detailed" of the selected time characteristics (if such a one does not exist, it must be defined by the user, meaning transferred into the InfoCube)." What is that, and how do I solve it?
    Please throw some light on this; it's urgent.
    Chandan

    Hi,
    Your MultiCube is probably based on the InfoCube 0IC_C03, which contains non-cumulative key figures; that's why you get this message.
    You generally don't have to maintain the validity area unless you are in a special configuration (for example, loading data from two source systems).
    The following link should give more information about validity areas with non-cumulative key figures:
    http://help.sap.com/saphelp_nw04/helpdata/en/02/9a6a1c244411d5b2e30050da4c74dc/frameset.htm
    Hope this helps.
    Cyril
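    The non-cumulative mechanism the message refers to can be pictured as an opening balance (the reference point) plus goods movements, from which the stock for any date is reconstructed. A minimal sketch in plain Java (illustrative names, not SAP code):

    ```java
    import java.time.LocalDate;
    import java.util.Map;
    import java.util.TreeMap;

    public class StockSketch {
        // Non-cumulative key figures like stock are not summed over time: the
        // system keeps a reference point and the movements, then rolls forward.
        static int stockAt(int referencePoint, TreeMap<LocalDate, Integer> movements, LocalDate day) {
            int stock = referencePoint;
            // Apply every goods movement up to and including 'day'.
            for (Map.Entry<LocalDate, Integer> m : movements.headMap(day, true).entrySet()) {
                stock += m.getValue();
            }
            return stock;
        }
    }
    ```

    The validity table mentioned in the message records the time interval for which such a reconstructed stock value is meaningful.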

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries; for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • ATG 10 implementation error

    Hello Experts
    We use ATG 10.1.1 and multisite capability to render international sites such as AU, NZ, HK, EU and UK. Some of these sites respond with HTTP 500 internal server error for about 10-15 minutes every Tuesday. The sites are back up after that and work normally.
    We analyzed the server logs, thread dumps and memory utilization during this period. We found resource contentions occurring where one thread goes to TIMED_WAITING (transaction timeout is 600s) mode holding the lock on inventory repository item. The same thread is also holding the lock on the category item to set a transient property on it. All other threads are in BLOCKED state waiting to lock the category item to set a transient property on it.
    The reason behind the transient property on the category item:
    We have a requirement not to display gift cards in any child category, only in the parent category. So there is logic to determine whether a child category contains a gift card SKU type and to hide it.
    We can see a droplet reporting "missing inventory for sku and store" corresponding to the international site which goes down. We observed only AU and NZ going down last week. HK is slow but responsive; EU and UK worked normally without any hitch. We saw a lot of partial GCs happening in this interval (671); our normal count of minor GCs for the same length of time is only 73. The heap also gets close to the max every second during this interval.
    We also found that the catalog update and inventory update job runs at this time from our CA agent and pushes the update to commerce.
    Question
    Our code only runs an RQL query to read values from the inventory repository. If the inventory repository cache is being invalidated and reloaded, won't the RQL hit the database directly and proceed with the transaction? Our InventoryRepository cache-mode is distributed. Please see the thread traces below.
    "http-0.0.0.0-8080-14" daemon prio=10 tid=0x000000004d810000 nid=0x6e87 runnable [0x00000000573ed000]
       java.lang.Thread.State: RUNNABLE
        at atg.adapter.gsa.GSAItem.lookupItemTransactionState(GSAItem.java:2456)
        - locked <0x000000079baef2d0> (a atg.adapter.gsa.GSAItem)
        at atg.adapter.gsa.GSAItem.getItemTransactionState(GSAItem.java:2337)
        at atg.adapter.gsa.GSAItem.getItemTransactionState(GSAItem.java:2301)
        at atg.adapter.gsa.GSAItemDescriptor.updateItem(GSAItemDescriptor.java:7305)
        at atg.adapter.gsa.ItemTransactionState.updateItemState(ItemTransactionState.java:984)
        at atg.adapter.gsa.GSATransaction.beforeCompletion(GSATransaction.java:452)
        at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:101)
        at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:269)
        - locked <0x00000007a3e79c60> (a java.lang.Object)
        at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:89)
        at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:160)
        at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1431)
        at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:137)
        at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
        at atg.dtm.TransactionManagerWrapper.commit(TransactionManagerWrapper.java:438)
        at atg.adapter.gsa.GSARepository.commitTransaction(GSARepository.java:7036)
        at atg.adapter.gsa.GSAItem.setPropertyValue(GSAItem.java:1577)
        at atg.adapter.gsa.GSAPropertyDescriptor.setPropertyValue(GSAPropertyDescriptor.java:538)
        at atg.repository.RepositoryItemImpl.setPropertyValue(RepositoryItemImpl.java:249)
        at atg.adapter.gsa.GSAItem.setPropertyValue(GSAItem.java:1536)
        at com.llm.intl.repository.CategoryDisplayPropertyDescriptorIntl.checkInventoryStatus(CategoryDisplayPropertyDescriptorIntl.java:32)
    "http-0.0.0.0-8080-14" daemon prio=10 tid=0x000000004d810000 nid=0x6e87 in Object.wait() [0x00000000573ee000]
       java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)    
        at atg.repository.query.QueryCache.populateEntry(QueryCache.java:978)
        - locked <0x00000007e9252ab8> (a atg.repository.query.QueryCacheEntry)
        at atg.repository.query.QueryCache.executeCachedQuery(QueryCache.java:635)
        at atg.adapter.gsa.GSAView.executeQuery(GSAView.java:1172)
        at atg.repository.rql.RqlStatement.executeQuery(RqlStatement.java:230)
        at atg.projects.store.inventory.LLMBaseInventoryManager.getInventoryItem(LLMBaseInventoryManager.java:54)
        at atg.projects.store.inventory.LLMBaseInventoryManager.getInventoryInfo(LLMBaseInventoryManager.java:37)
        at atg.projects.store.inventory.LLMInventoryManager.queryAvailabilityStatus(LLMInventoryManager.java:38)
        at com.llm.repository.CategoryDisplayPropertyDescriptor.checkInventoryStatus(CategoryDisplayPropertyDescriptor.java:88)
        at com.llm.repository.CategoryDisplayPropertyDescriptor.isCategoryDisplayable(CategoryDisplayPropertyDescriptor.java:60)
        at com.llm.repository.CategoryDisplayPropertyDescriptor.getPropertyValue(CategoryDisplayPropertyDescriptor.java:47)
        at atg.adapter.gsa.GSAItem.getPropertyValue(GSAItem.java:1453)
        at atg.repository.RepositoryItemImpl.getPropertyValue(RepositoryItemImpl.java:151)
        at atg.repository.nucleus.RepositoryItemPropertyMapper.getPropertyValue(RepositoryItemPropertyMapper.java:151)
        at atg.beans.DynamicBeans.getPropertyValue(DynamicBeans.java:333)
        at atg.servlet.DynamoHttpServletRequest.getObjectParameter(DynamoHttpServletRequest.java:4558)
        at atg.servlet.DynamoHttpServletRequest.getObjectParameter(DynamoHttpServletRequest.java:4436)
        at atg.servlet.DynamoHttpServletRequest.getObjectParameter(DynamoHttpServletRequest.java:4828)
        at atg.servlet.DynamoHttpServletRequest.getObjectParameter(DynamoHttpServletRequest.java:4820)
        at atg.taglib.dspjsp.GetValueOfTag.calculateValue(GetValueOfTag.java:271)
        at atg.taglib.dspjsp.GetValueOfTag.doStartTag(GetValueOfTag.java:297)
        at org.apache.jsp.elements.gadgets.leftNavigationItem_jsp._jspx_meth_dsp_005fgetvalueof_005f8(leftNavigationItem_jsp.java:750)
    "http-0.0.0.0-8080-14" daemon prio=10 tid=0x000000004d810000 nid=0x6e87 waiting for monitor entry [0x00000000573ee000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at atg.adapter.gsa.GSAItem.lookupItemTransactionState(GSAItem.java:2456)
        - waiting to lock <0x000000079e700270> (a atg.adapter.gsa.GSAItem)
        at atg.adapter.gsa.GSAItem.getItemTransactionState(GSAItem.java:2337)
        at atg.adapter.gsa.GSAItem.getItemTransactionState(GSAItem.java:2301)
        at atg.adapter.gsa.GSAItem.getItemTransactionState(GSAItem.java:2287)
        at atg.adapter.gsa.GSAItem.getItemTransactionStateUnchecked(GSAItem.java:2517)
        at atg.adapter.gsa.GSAItem.setPropertyValue(GSAItem.java:1560)
        at atg.adapter.gsa.GSAPropertyDescriptor.setPropertyValue(GSAPropertyDescriptor.java:538)
        at atg.repository.RepositoryItemImpl.setPropertyValue(RepositoryItemImpl.java:249)
        at atg.adapter.gsa.GSAItem.setPropertyValue(GSAItem.java:1536)
        at com.llm.intl.repository.CategoryDisplayPropertyDescriptorIntl.checkInventoryStatus(CategoryDisplayPropertyDescriptorIntl.java:32)
    The snippet which sets the transient property "containsGiftcard" on the category item:
                            if (sku.getPropertyValue("LLLSkuType") != null && ((String) sku.getPropertyValue("LLLSkuType")).equals("giftCard")) {
                                repositoryItemImpl.setPropertyValue("containsGiftcard", true);  // repositoryItemImpl is the child category; HTTP thread 14 executes this and holds the lock on the category
                            } else {
                                repositoryItemImpl.setPropertyValue("containsGiftcard", false);
                            }
                            int status = getInventoryManager().queryAvailabilityStatus(sku.getRepositoryId(), "10056"); // the code of this method is below
    The snippet which queries inventory:
                    RepositoryView view = getRepository().getView(getItemType());
                    Object[] params = { pSkuId, pStoreId };
                    RepositoryItem[] items = getCatalogRefIdByStoreMatchQuery().executeQuery(view, params); //HTTP thread 14 goes to TIMED_WAITING mode
    #property query
    catalogRefIdByStoreMatchQuery=catalogRefId=?0 and storeId=?1
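    The contention described above comes from performing the slow inventory lookup while the category item is locked for the transient-property write. As a plain-Java sketch (no ATG classes; all names here are illustrative), the pattern that avoids this computes the flag first and holds the lock only for the cheap write:

    ```java
    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    public class GiftCardFlagSketch {
        interface Sku { String type(); }

        static final ReentrantLock categoryLock = new ReentrantLock(); // stands in for the lock on the category item
        static volatile boolean containsGiftCard;

        // Slow part: in the real system this is the RQL/inventory lookup.
        static boolean computeFlag(List<Sku> skus) {
            return skus.stream().anyMatch(s -> "giftCard".equals(s.type()));
        }

        static void updateCategory(List<Sku> skus) {
            boolean flag = computeFlag(skus); // no lock held during the slow computation
            categoryLock.lock();              // lock only for the short write
            try {
                containsGiftCard = flag;
            } finally {
                categoryLock.unlock();
            }
        }
    }
    ```

    In the trace above, the lock is held across the query-cache wait, which is why every other thread ends up BLOCKED on the category item.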
    Very much appreciate your inputs.
    Thanks,
    Sundar

    The error clearly states that it is not able to find your schema
    -------DATA IMPORT FAILED-------------------------------------------------------
    Make sure you have configured the connection details and created the schema.
    So please make sure that you have created all your schemas and are providing the correct connection details to the CIM.
    ~Gurvinder

  • Excessive DM Prep Times

    Since our upgrade from BW 3.1 to NW04s, we are seeing that the majority of our queries take longer than 20 seconds in the DM Prep step.  Some take as long as 400-1500 seconds!  We are running on Oracle version 10.2.  It doesn't seem to matter whether we are running 3.x queries or NW04s queries.  Does anyone have any suggestions for reducing the database prep time?
    Thank you.

    Hi
    Check the below points for each query:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries—for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The “not assigned” nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Madhan

  • Reconcile Inventory BI Content Queries with ECC

    Hi Folks,
    I have a question regarding reconciling Inventory Management BI Content reports based on cube 0IC_C03 with ECC.
    We implemented IM in BI, and now try to reconcile the queries with ECC. Specifically, I have one query, 0IC_C03_Q0008 - Stock in Transit.
    We ran the report by plant. In BI, it only showed one material with #, but people on the ECC side used T-code MB5T in ECC and saw many materials with stock in transit for that plant; and for the material shown on the BI report, the number doesn't match between BI and ECC.
    What is the right way to reconcile the report with ECC? Is there any other report/T-code in ECC we can use?
    Any insight ideas would be much appreciated!
    Thanks,
    Freda

    Hi Freda,
    For me, it is the typical nightmare to try to reconcile the cube to your ECC transactions since the cube summarizes and doesn't include your material documents.
    So, to make it a bit easier to match MB5T or MB51, add an ODS with a material document and Material item key and all of the fields and key figures necessary for reconciliation (I usually look at the datasource's delivered fields and add whatever sounds reasonable to look at).  To fill records in this ODS, only fill them from 2LIS_03_BF, because BX and UM do not contain material documents.
    While I was writing this, I think I realized what is happening in your situation.  If memory serves me correctly, MB5T is Cross-Company SIT, which is not part of the 2LIS_03* datasources.  They only account for Intra-Company (Plant-to-Plant within the same company code) SIT.  For that, there is a White Paper you can have an ABAPer implement for you.
    Here are two good documents you should read and be familiar with:
    How to .... Report on Cross Company Stock in Transit: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/92c0aa90-0201-0010-17b1-bf5b11c71257
    How to .... Handle Inventory Management Scenarios in BW: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Also, when comparing my cube data to ECC/R/3, I use transaction MMBE in the ECC to identify stock quantities and stock in transit (SIT).
    Brian
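
    The reconciliation ODS approach above boils down to a set comparison between the BW records and an MB51-style ECC extract, both keyed by material document and item. A minimal sketch, assuming invented sample keys and quantities (none of these names or values come from SAP):

    ```python
    # Hypothetical reconciliation of a BW ODS against an MB51-style ECC
    # extract, keyed by (material document, item). All data is made up.
    bw_ods = {("4900000001", "0001"): 10, ("4900000002", "0001"): -5}
    ecc_mb51 = {("4900000001", "0001"): 12, ("4900000003", "0001"): 7}

    # Documents in ECC that never reached BW, and vice versa.
    missing_in_bw = set(ecc_mb51) - set(bw_ods)
    missing_in_ecc = set(bw_ods) - set(ecc_mb51)

    # Documents present on both sides but with differing quantities.
    qty_mismatch = {k for k in set(bw_ods) & set(ecc_mb51)
                    if bw_ods[k] != ecc_mb51[k]}

    print(sorted(missing_in_bw), sorted(missing_in_ecc), sorted(qty_mismatch))
    ```

    Each of the three result sets points to a different root cause: missing documents suggest a delta/init gap, while quantity mismatches suggest transformation logic.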

  • Inventory Management (0IC_C03) Business Content queries

    I have the following queries which are delivered when I activate 0IC_C03.  The queries below have also been activated as part of the model.
    Are there any additional queries which should/can be activated against this InfoProvider, and of the list below, are any redundant? Thanks
    Blocked Stock
    Consignment Stock at Customer
    Inventory Turnover
    Quantities of Valuated Project Stock (as of 3.1 Content)
    Quantities of Valuated Sales Order Stock (as of 2.1 Cont.)
    Receipt and Issue Customer Consignment Stock
    Receipt and Issue of Blocked Stock
    Receipt and Issue Quality Inspection Stock
    Receipt and Issue Stock in Transit
    Receipt and Issue Vendor Consignment Stock
    Scrap
    Stock in Quality Inspection
    Stock in Transit
    Stock Overview
    Stock Overview (as of 3.1 Content)
    Stock Range of Coverage
    Valuated Stock
    Valuated Stock (as of 3.1 Content)
    Vendor Consignment Stock

    hi Niten,
    by choosing 0IC_C03 and using the grouping 'in dataflow afterward' in business content activation, you should get all the queries.
    you can cross-check in RSA1 -> Metadata Repository -> Local Objects (business content).
    in sap help
    http://help.sap.com/saphelp_nw2004s/helpdata/en/77/65073c52619459e10000000a114084/frameset.htm
    (expand left node)
    hope this helps.

  • Queries from the Standard Inventory Cube 0IC_C03

    What are the advantages/disadvantages of building the queries which have been delivered with 0IC_C03 off a multicube?  If there are no disadvantages, is there any reason why SAP has not delivered a multicube with the model?  Thanks

    I don't see any disadvantages to building the same queries off a multi. Many companies have a standard to build queries off a MultiProvider rather than the basic cube. Multis are most of the time composed of several cubes rather than a single InfoCube, so I don't think SAP can anticipate the different needs of such a large number of customers.

  • Inventory Management Queries

    As of BI 3.2, does anyone know which InfoProvider provides these queries, as they are not part of 0IC_C03 (Material Stocks/Movements)?
    1.     Q0002 – Inventory Days of Supply - Quantity
    2.     Q0003 – Inventory Days of Supply - Value
    3.     Q0004 – Finished Goods Inventory Days of Supply - Quantity
    4.     Q0005- Finished Goods Inventory Days of Supply - Value
    5.     Q0012 – Material Consumption

    Hi Eric,
    you are right!
    0IC_01 is obsolete because it is fed by the old LIS datasources... so, you have to use the new cube with the new datasources from LBWE. The problem is that your queries still remain (on the business content standard level) dependent on the old cube, with no copied query for the new one! In addition, you have no way to copy the queries with the standard tool because the cubes are completely different!
    If you want, you can use the workarounds mentioned in a previous thread on the BW forums (how can I copy a query, and so on).
    That's all I know (strange situation, my dear..)
    Bye,
    Roberto

  • Inventory of Queries from Infocube based on DSO fields

    Hi,
    I have a requirement to find the inventory of queries used by 15 InfoProviders built on 2 DSOs.
    One of the DSOs has more than 7 years of data, and to restrict it to 3 years of data we want to do an impact analysis on all the queries using that DSO in the respective InfoCubes.
    Question: Can someone help me find a way to identify the queries, especially those using the fields populated from the DSO, as there are more than a thousand reports in the database from the above 15 InfoProviders?
    Fields-Infoprovider-Queries  --> Depending on the DSO
    Please let me know your thoughts.
    Thanks
    Bhanu

    Hi Friends,
    Thank guys for your answers.
    I have assigned points for all of you.
    Thank you!
    Bhanukiran

  • Non-cumulative Values not showing in Inventory Management Queries

    Hi:
    Has anyone had a problem with the new version and non-cumulative key figures not showing up in BEx for Inventory Management reports? Specifically, they showed and validated back to ECC in our Development and QA boxes, but now in our Regression box they are all showing zeros. Our cumulative values are all still showing correctly in the Regression box. For example, Total Receipts and Total Issues are correctly populating values, but Total Stock is not.
    I have checked and validated that the configuration in the Regression box matches our Dev and QA boxes.

    Found the problem.  It had to do with the compression variant in the delta process chain.  The compression was set to 'no marker update'.  Since we only started receiving measurable deltas in our Regression box, this is where the incorrect setting showed up.  Reinitialized, and the deltas are working correctly now.
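
    Why 'no marker update' produces zeros can be seen in a toy model of how non-cumulative queries work: the marker is the stock balance at a reference point, and the query rolls movements back from it. A minimal sketch (all names and figures are invented for illustration, not SAP code):

    ```python
    # Toy model of BW non-cumulative stock; invented names, not SAP APIs.

    def stock_at(day, marker, movements):
        # Query logic: start from the marker (balance at the reference
        # point) and roll back every movement dated after the query day.
        return marker - sum(qty for d, qty in movements if d > day)

    initial_marker = 100                           # balance set by the BX init load
    movements = [(20100201, 30), (20100215, -10)]  # ongoing BF deltas

    # Correct: compressing deltas WITH marker update folds them into it.
    updated_marker = initial_marker + sum(q for _, q in movements)  # 120
    correct = stock_at(20100210, updated_marker, movements)         # 130

    # The forum case: 'no marker update' on the delta chain leaves the
    # marker stale, so every stock figure misses the new movements.
    broken = stock_at(20100210, initial_marker, movements)          # 110

    print(correct, broken)
    ```

    The real stock on 10.02.2010 is 100 + 30 = 130; with the stale marker the query returns 110, and the more deltas go unabsorbed, the further the figures drift toward zero or nonsense.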

  • Inventory management: Validity dates in queries???

    Hi experts,
    We have BW 7.0.
    I have loaded data on 04.03.2010 with BF- and UM-datasources and compressed it correctly.
    When I now look into the validity table for plant 0001, there is a validity range from 03.07.1997 to 08.02.2010.
    But when I start a query for 09.02.2010, the system says "no data found".
    The document "How To Handle Inventory Management Scenarios in BW" says it should show the result in parentheses!
    What is going on here?
    But I don't want to update this validity table!!!
    Please help!
    KR,
    Raimund
    Text in  document:
    ...For example, if data with document data was loaded into the
    InfoCube with values from 01.01.2002 to 15.02.2002 (assigned to the respective time
    characteristic), the validity interval is also determined by these two date values. You can
    extend the intervals by maintaining the table (transaction RSDV). Stock balances are
    displayed for requests that relate to this period. If you start a query that requests the
    stock balance for 16.02.2002 or later, the result is displayed in parenthesis (a blank
    value is displayed in BW 2.0B and 2.1C), since it lies outside the validity area (providing
    that the validity table was not manually extended using transaction RSDV)....

    Hi,
    Use 2LIS_03_BX, 2LIS_03_BF, 2LIS_03_UM to 0IC_C03 Cube and design the report.
    Use :
    0VALSTCKVAL   " for Value
    0VALSTCKQTY   " for Qty
    0CALMONTH        " for Month
    Use the above combinations in New Selections in the columns and go with it.
    For Qty Opening:
    In a New Selection, drag and drop the following:
    0VALSTCKQTY   " for Qty
    0CALMONTH        " for Month; restrict it with a less-than-or-equal-to variable (single value, user input) and set the offset
                                   value = -1, because if the user enters 12.2009 it will display the 11.2009 closing stock, which is the opening stock for 12.2009.
    For Qty Closing:
    In a New Selection, drag and drop the following:
    0VALSTCKQTY   " for Qty
    0CALMONTH        " for Month; restrict it with a less-than-or-equal-to variable (single value, user input).
    In the same way, build the selections for Value and the other key figures on 0IC_C03.
    And drag & drop
    0MATERIAL
    0PLANT  " give it an input variable.
    See the steps.
    Treatment of historical full loads with Inventory cube
    Setting up material movement/inventory with limit locking time
    If it is BI 7, then for BX, in the DTP's Extraction tab you need to select Extraction mode = Non-Cumulative.
    See this thread related to Inventory.
    Re: Inventory Mangement steps - questions
    Thanks
    Reddy
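
    The offset = -1 restriction described above (the prior month's closing stock serving as the selected month's opening stock) can be sketched like this, assuming a simple month-to-closing-quantity mapping with invented figures:

    ```python
    # Illustrative sketch of the 0CALMONTH offset -1 restriction; all
    # names and quantities here are invented for the example.

    def prior_month(calmonth):
        """Shift a YYYYMM period back by one month (the offset = -1)."""
        year, month = divmod(calmonth, 100)
        return (year - 1) * 100 + 12 if month == 1 else calmonth - 1

    # Closing quantity per 0CALMONTH (e.g. from 0VALSTCKQTY).
    closing_qty = {200911: 80, 200912: 95}

    user_input = 200912                             # the user's month variable
    closing = closing_qty[user_input]               # restriction: month <= input
    opening = closing_qty[prior_month(user_input)]  # same variable, offset -1
    print(opening, closing)
    ```

    Both selections restrict on the same user-input variable; only the offset differs, which is why the opening figure for 12.2009 comes out as the 11.2009 closing stock (80 vs. 95 here).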

  • Queries on multicube preferred why?

    Hello BW Experts,
    I am in the process of deciding whether to have the queries on a cube or a MultiProvider. Could anyone provide feedback from your experience on which option is preferable? I am also considering a MultiProvider even if there is only one cube.
    Thanks very much.
    BWer

    Hello dear;
    Please read this thread for help. If you have only one cube, to me it doesn't make any sense to have a MultiProvider, unless you are thinking that another cube or provider may also become necessary for a report in the near future. Even then, I would recommend creating the MultiProvider only when you actually have that situation.
    Qeries on MultiProviders Vs Underlying InfoProviders
    Hope it helps.
    Buddhi
