Multiple Query - Performance
Hi,
I have multiple queries within a data template. The queries are in fact the same except for the GROUP BY condition in each of them, e.g. the first query groups by customer id, the second groups by customer preferences.
Since I hit the same table every time, would it be possible for me to just execute the raw query once and do the grouping in BI Publisher? This way my hits to the database would be reduced to a single query. It would also mean that, within the same data template, I always need to pass on the output of that first query.
Do let me know if this is possible.
TIA....VJ
Yes, you can do that.
You can regroup it as <?for-each-group:GRP_NAME;REGROUP_ELEMENT?>
Refer to the regrouping section in the user guide.
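As a rough sketch (the table, column and group names here are made up), the data template would then contain just one ungrouped query, something like:

-- single raw query (hypothetical names); grouping is done later in the template
SELECT c.customer_id,
       c.customer_preference,
       o.order_amount
FROM   customers c
       JOIN orders o ON o.customer_id = c.customer_id

The layout would then regroup the single result set, e.g. <?for-each-group:G_ROW;CUSTOMER_ID?> for the first grouping and <?for-each-group:G_ROW;CUSTOMER_PREFERENCE?> for the second, so the table is only hit once.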
Similar Messages
-
Query performance - A single large document VS multiple small documents
Hi all,
What are the performance trade-offs when using a single large document vs. multiple small documents?
I want to store xml snippets with similar structure in a container. Is there any benefit when querying if I use a single large document to store all these snippets, as opposed to adding each snippet as a separate document? Would it degrade performance to add each xml snippet by modifying an existing document?
How should we decide between a single large document and multiple small documents?
Thanks,
Anoop
Hello Anoop,
In case you wanted to get a comparison between the storage types for containers, wholedoc and node, let us know.
> What are the performance trade-offs when using a single large document vs. multiple small documents?
It depends on what is more important to you: performance when creating the container and inserting the document(s), or performance when retrieving data.
For querying the best option is to go with smaller documents, as node indexes would help in improving query performance.
For inserting initial data, you can construct your large document composed of smaller xml snippets and insert the document as a whole.
If you further want to modify this document, changing its structure implies performance penalties; in that case it is better to store the xml snippets as separate documents.
Overall, I see no point in using a large document that will hold all of your xml snippets, so I strongly recommend going with multiple smaller documents.
Regards,
Andrei Costache
Oracle Support Services -
Query performance when multiple single variable values selected
Hi Gurus,
I have a sales analysis report that I run with some complex selection criteria. One of the variables is the Sales Organisation.
If I run the report for individual sales organisations, the performance is excellent: results are displayed in a matter of seconds. However, if I specify more than one sales organisation, the query will run and run and run... until it eventually times out.
For example:
I run the report for SALEORG1 and the results are displayed in less than 1 minute.
I then run the report for SALEORG2 and again the results are displayed in less than 1 minute.
If I try to run the query for both SALEORG1 and SALEORG2, the report does not display any results but continues until it times out.
Anybody got any ideas on why this would be happening?
Any advice gratefully accepted.
Regards,
David
While compression is generally something that you should be doing, I don't think it is a factor here, since query performance is OK when using just a single value.
I would do two things. First, make sure the DB stats for the cube are current; you might even consider increasing the sample rate for the stats collection if you are using one. You don't mention which DB you use, or what type of index is on Salesorg, which could play a role. Does the query run against a MultiProvider on top of multiple cubes, or is there just one cube involved?
If you still have problems after refreshing the stats, then I think the next step is to get a DB execution plan for the query by running it from RSRT in debugging mode, once with just one value and once with multiple values, to see if the DB is doing something different - seek out your DBA if you are not familiar with execution plans.
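If the underlying database happens to be Oracle, a quick way to compare the plans outside of RSRT is EXPLAIN PLAN; the fact table and column names below are only placeholders for whatever SQL the RSRT trace shows:

-- run once with a single value and once with both values, then compare the two plans
EXPLAIN PLAN FOR
SELECT salesorg, SUM(sales_amount)
FROM   sales_cube_fact
WHERE  salesorg IN ('SALEORG1', 'SALEORG2')
GROUP BY salesorg;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-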
Hi,
I want to pass multiple query string values using the same parameter in Query String (URL) Filter Web Part like mentioned below:
http://server/pages/Default.aspx?Title=Arup&Title=Ratan
But it always returns only those items whose "Title" value is "Arup". It does not return any items whose "Title" is "Ratan".
I have followed the instructions at
http://office.microsoft.com/en-us/sharepointserver/HA102509991033.aspx#1
Please advise.
Thanks | Arup
Thanks! Arup R (MCTS)
SucCeSS DoEs NOT MatTer.
Hi DH, sorry for not being clear.
It works when I create the connection from the web part that you want to be connected with the Query String Filter Web Part. So let's say you created a web part page. Then you could connect a parameterized Excel workbook to an Excel Web Access Web Part (or a PerformancePoint dashboard etc.), insert it into your page, and add a Query String Filter Web Part. You can then connect them by editing the Query String Filter Web Part, but also by editing the Excel Web Access Web Part - and only when I created the connection from the latter did it work with multiple values for one parameter. If you have any more questions let me know. See you, Ingo -
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
Basically in our application we need to run the select query multiple times, and today we have one huge table. Do you guys think breaking it into multiple small tables will help?
For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to a single database in the backend as key/value pairs, so whether we run the lookup (select query) on a small table or a big table it won't make a difference?
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting the table into multiple smaller tables?
Hi bala197164,
First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance, and they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's subject (say there is a table named userinfo to store user information and it has columns address_street, address_zip and address_province; in that case we can create a new table named Address and add a foreign key in the userinfo table referencing the Address table). In this situation, by splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Another situation is when the table records can be grouped easily, for example a column named year that stores the product release date; here we can partition the table into filegroups to improve query performance. Usually we perform both methods together. Additionally, we can add indexes to the table to improve query performance. For more detailed information, please refer to the following documents:
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
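As a minimal sketch of the "partition by year" idea above (all object names are hypothetical, and a single filegroup is used here just to keep the example short):

-- partition function and scheme keyed on the release year
CREATE PARTITION FUNCTION pf_release_year (int)
    AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);

CREATE PARTITION SCHEME ps_release_year
    AS PARTITION pf_release_year ALL TO ([PRIMARY]);

-- table and supporting index created on the partition scheme
CREATE TABLE dbo.Product
(
    product_id   int          NOT NULL,
    product_name varchar(100) NOT NULL,
    release_year int          NOT NULL
) ON ps_release_year (release_year);

CREATE INDEX IX_Product_release_year
    ON dbo.Product (release_year, product_id)
    ON ps_release_year (release_year);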
Allen Li
TechNet Community Support -
I have a table with about 500 million records in it stored within both Oracle and MySQL (MyISAM). I have a column (say phone_number) with a standard b-tree index on it in both places. Without data or indexes cached (not enough memory to fully cache the index), I run about 10,000 queries using randomly generated phone numbers as criteria in 10 concurrent threads. It seems that the average time to retrieve a record in MySQL is about 200 milliseconds whereas in Oracle it is about 400 milliseconds. I'm just wondering if MyISAM/MySQL is inherently faster for a basic index search than Oracle is, or should I be able to tune Oracle to get comparable performance.
Of course the hardware configurations and storage configurations are the same. It's not the absolute time I'm concerned about here but the relative time. Twice as long to perform basically the same query seems concerning. I enabled tracing and it seems like some of the problem may be the recursive calls Oracle is making. Is there some way to optimize this a bit further?
Realize that I just want to look at query performance right now, ignoring all of the other issues (locking, transactional integrity, etc.).
Thanks,
Greg
In Oracle, a standard table is heap-organized. A b-tree index then contains index keys and ROWIDs, so if you need to read a particular row in the table, you first do a few I/Os on the index to get the ROWIDs and then look up the ROWIDs in the table. For any given key, the ROWIDs are likely to be scattered throughout the table, so this latter step generally involves multiple scattered I/Os.
You can create an index-organized table or a hash cluster in Oracle in order to minimize the cost of this particular sort of lookup by clustering data with the same key physically near each other and, in the case of IOTs, potentially eliminating the need to store the index and table separately. Of course, there are costs to doing this in that inserts are now more expensive and secondary indexes are likely to be less useful. That probably gets you closer to what MySQL is doing if, as ajallen indicates, a MySQL table is generally stored more like an IOT than a heap-organized table.
If you get really creative, you could even partition this single table to potentially improve performance further.
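For illustration, a minimal Oracle sketch of the index-organized alternative described above (the table and column names are invented, and an IOT requires the lookup key to be the primary key):

-- heap table with a secondary b-tree index: index probe + scattered table reads
CREATE TABLE subscribers_heap (
    phone_number VARCHAR2(20),
    details      VARCHAR2(200)
);
CREATE INDEX subscribers_heap_ix ON subscribers_heap (phone_number);

-- index-organized table: rows live in the primary key index itself,
-- so a lookup by phone_number avoids the separate table access
CREATE TABLE subscribers_iot (
    phone_number VARCHAR2(20) CONSTRAINT subscribers_iot_pk PRIMARY KEY,
    details      VARCHAR2(200)
) ORGANIZATION INDEX;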
Of course, if you're only storing one table, I'm not sure that you could really justify the cost of an Oracle license. This may well be a case where MySQL is more than sufficient for what this particular customer needs (knowing nothing, of course, about the other requirements for the system).
Justin -
Does having more LTSs in a logical dimension table hurt query performance?
Hi,
We have a logical table with around 19 LTSs. Does having more LTSs in a logical dimension table hurt query performance?
Thanks,
Anilesh
Hi Anilesh,
It's kind of both YES and NO. Here is why...
NO:
LTSs are supposed to give the BI Server an optimal and logical way to retrieve the data. So, having more LTSs might give the BI Server some good options tailored to a variety of analysis requests.
YES:
Many times we have to bring multiple physical tables into a single LTS (mostly when the physical model is a snowflake), which might cause performance issues. Say there is an LTS with two tables "A" and "B"; for an ad-hoc analysis on columns in "A" alone, the query would still include the join with table "B" if this LTS is used. We might want to avoid this kind of situation by having one LTS for each table and one for both of them.
Hope this helps.
Thank you,
Dhar -
Effect of restricted key figures & calculated key figures on query performance
Hi,
What is the effect of restricted key figures & calculated key figures on query performance?
Regards
Anil
As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. Incorporating business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improves query performance and ensures that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
Other than performance, there might be other considerations that determine which of the options should be used.
If the RKFs are query specific and not used in the majority of other queries, I would go for structure selections. From my personal experience, sometimes developers end up with so many RKFs and CKFs that you easily get lost in the web of them, not to mention the duplication.
If the same structure is needed widely across most of the queries, it might be a good idea to have a global structure available across the provider, which could considerably cut down development time.
Structures Vs RKFs and CKFs In Query performance
Hi Gurus,
I am creating a GL query which will return a couple of key figures and some calculations across different GL accounts, and I wanted to know which is going to be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
Which option will be better for query performance?
Thanks in advance
As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. Incorporating business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improves query performance and ensures that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible. -
Hi, I need help improving query performance.
I have a query in which I am joining 2 tables, with the join done after some aggregation. Both tables have more than 50 million records.
There is no index created on these tables; both tables are loaded after truncation. So is it required to create an index on the tables before joining? The query status was showing 'suspended' since it was running for a long time. As a temporary workaround, I just executed the query multiple times, changing the month filter each time.
How can I improve this instead of adding a month filter and running it multiple times?
Hi Nikkred,
According to your description, you are joining 2 tables which contain more than 50 million records, and what you want is to improve query performance, right?
Query tuning is not an easy task. Basically it depends on three factors: your degree of knowledge, the query itself and the amount of optimization required. So in your scenario, please post your detailed query so that you can get more help. Besides, you can create indexes on your tables, which can improve performance. Here are some links with performance tuning tips for your reference:
http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/
http://www.infoworld.com/d/data-management/7-performance-tips-faster-sql-queries-262
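Just as a sketch (the table and column names are made up since the actual query was not posted), after each truncate-and-load you could build indexes on the join and filter columns before running the join:

-- indexes supporting a join on account_id filtered by month
CREATE INDEX IX_fact_a_account_month ON dbo.fact_a (account_id, report_month);
CREATE INDEX IX_fact_b_account_month ON dbo.fact_b (account_id, report_month);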
Regards,
Charlie Liao
TechNet Community Support -
Speed up table query performance
Hi,
I have a table with 200 million rows (approx 30GB in size). What are the ways to improve query performance on this table? We have tried indexing, but it doesn't help much. Partitioning - I am not sure how to use it, since there is no clear criterion for creating partitions. What other options could I use?
If this is in the wrong forum, please suggest appropriate forum.
Thanks in advance.
Assuming you have purchased the partitioning option from Oracle and are not violating the terms of your license agreement, then partitioning is the way to go.
For the concept docs go to http://tahiti.oracle.com
For demos of all of the various forms of partitioning go to Morgan's Library at www.psoug.org and look up partitioning.
New partitioning options were added in 11gR1, so some of the demos may not work for you; most will.
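Purely for reference, a minimal range-partitioning sketch (the table and the choice of a date column as the partition key are only assumptions for illustration):

CREATE TABLE sales_history (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
    PARTITION p2008 VALUES LESS THAN (DATE '2009-01-01'),
    PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
-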
Query Performance with Unit Conversion
Hi Experts,
Right now my customer has asked me to improve the runtime of some queries.
I detected a problem in one query related to unit conversion. I ran workbook statistics and found that the time is concentrated in the data conversion step.
I'm querying the whole year and it is taking around 20 minutes to return a result, which is too much time. The only special thing in the query is the unit conversion.
How can I improve the performance? What is the checklist in this case?
Thanks for your help.
Jose
Hi Jose,
You might not be able to reduce the unit conversion time itself, so try to apply the general query performance improvement techniques, e.g. caching the query results.
But there is one thing which can help you if the end user only ever needs one target unit, e.g. the user always executes the report in USD but the source currency is different from USD. In such cases you can create another data source and do the conversion at data load time, so that in the new data source all data is already available in the required currency; no conversion will happen at runtime, and query performance will improve drastically.
The above solution is not feasible, however, if there are many currencies and the report needs to be run in multiple currencies frequently.
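The load-time conversion idea, sketched in plain SQL with hypothetical staging and rate tables (in a real BW system this would of course sit in the transformation, not in hand-written SQL):

-- fill the USD-only data source once, during loading, so queries never convert
INSERT INTO sales_usd (doc_number, amount_usd)
SELECT s.doc_number,
       s.amount * r.rate_to_usd
FROM   sales_staging  s
       JOIN exchange_rates r ON r.from_currency = s.currency;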
Regards,
Durgesh. -
Hi,
Does using compression & partitioning (by time) adversely affect reporting performance? I have an 8GB cube with 13 dimensions built in 10.1.0.4. The cube was defined with 1 dense dimension and the other 12 as sparse in a compressed composite. It was also partitioned by year. It takes close to 1 hour to build the cube, and since it is compressed I would assume it is fully aggregated. However, the performance of Discoverer queries on this cube has been pathetic! Any drill-downs or slice/dice operations take a long time to return if there are multiple dimensions on either edge of the crosstab. Also, when scrolling down, it freezes for a while and then brings the data. Sometimes it takes a couple of minutes!
What are the things I need to check to speed this up? I think I already checked things like sparsity, SGA/PGA sizes, OLAP Page Pool etc.
Regards
Suresh
Hi Suresh,
Before you can implement changes to improve performance, you need to understand the causes of the performance problems. Discoverer for OLAP uses the OLAP API for queries, and the OLAP API generates SQL to query an analytic workspace. There are a few broad possible causes of poor query performance:
retrieving data from the AW is slow
SQL execution is slow, perhaps because the SQL is inefficient
SQL execution is fast, but the OLAP API is slow to fetch data
Each of these causes demands a different approach. I'd suggest that you enable configuration parameters SQL_TRACE and TIMED_STATISTICS, generate some trace files, and use the tkprof utility to try to narrow down the cause of the trouble.
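At the session level, the tracing Geof describes looks roughly like this (the tkprof call at the end is run on the database server, and the exact trace file name depends on your system):

ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the slow Discoverer/OLAP query here ...
ALTER SESSION SET sql_trace = FALSE;
-- then format the raw trace file, e.g.:
-- tkprof ora_12345.trc slow_query_report.txt sort=fchela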
Geof -
Physical partitioning - query performance worse than without
Hello experts,
I have copied InfoCube 0COPC_C04 to ZCOPC_C06 with a test query and took over the data from 0COPC_C04 to test the repartitioning functionality (~4 million records).
I did the repartitioning following the instructions from note 1008833 for 0FISCPER (I used se38 RSDU_SET_FV_TO_FIX_VALUE to set 0FISCVARNT to a constant) - starting point 001.2000, end point 016.2010.
Indexes and statistics were re-created.
We have Oracle 10.2.0.2.0 and SAPKW70022 on BW 700.
So from my point of view everything is fine - BUT: query performance is worse than before, as shown in rsrv.
The variables in the query are on 0FISCYEAR and 0FISCPER3 and NOT on 0FISCPER (a query test with 0FISCPER brought the same bad performance result).
Even though I read that with physical partitioning the query reads in PARALLEL, I cannot see any parallel activity in sm50 / sm66. I expected to see parallel reads for my query - I asked for results from 001.2006 to 12.2010, and 13 partitions were created (checked via se14).
What might be the problem?
Thanks for your answer,
Thomas
Edited by: Thomas Liebschner on Jan 4, 2011 3:28 PM
#1171650 is already implemented (since 14.12.2010) - I asked my colleague from the Basis team to send me the report.
> I read about parallel query execution in the SAP Press / Galileo Press book "SAP BW Performanceoptimierung" by Thomas Schröder (ISBN: 3-89842-718-8), page 376. If you like, I can send this page as a PDF to the mail address stored in your business card.
Would be interesting to see that excerpt, yes.
> What I missed was to compress the F-Table.
>
> I'm wondering, that:
> 1. Compressing 0COPC_C04 with 8 million records gave a dramatic improvement in query performance. No partitioning.
Sure, usually a lot less data needs to be read now.
> 2. After a new Full-load from 0COPC_C04 to ZCOPC_C06 with rebuilding indices and statistics I got similar query-performance compared to 1
Also easy to understand.
The F-Facttable contains data per load. So data concerning the same business items will be in that table multiple times and need to be aggregated on-the-fly during query execution.
The E-Facttable in contrast just contains one entry per business item. Which is the same situation you'll find when you just have one single data load request.
> 3. If I compress ZCOPC_C06, then query performance drops dramatically (it needs double the time if I use 0FISCPER, and 8 times more in the query with 0FISCYEAR and 0FISCPER3)
>
> additional information: I have merged partitions from ZCOPC_C06 to one partition --> checked by se14 /BIC/FZCOPC_C06 -- storage parameters
Hmm... ok, here we would really need to see what Oracle does with your Query and the partitioning scheme then.
What seems obvious is that as soon as you have just one partition, there is no way for Oracle to split up the work and leave uninteresting data aside (partition pruning).
> Conclusion: it seems to me that compression is enough - partitioning has no advantage in this particular scenario.
> Am I thinking about this correctly?
I think that compression is a must-do in 99% of all cases for SAP BI and that the partitioning of the E-Facttable should be reviewed on your system.
Up to here there is too little information available to tell whether or not it could benefit your query execution time.
Another aspect you might consider with partitioning the E-facttable is your data retention policy.
If you're about to throw away your data on, say, a yearly basis, then having the table partitioned that way can provide huge benefits.
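As an illustration of that retention point (hypothetical fact table partitioned by year), rolling old data off then becomes a quick partition operation instead of a mass delete:

-- drop the oldest year in one step rather than deleting millions of rows
ALTER TABLE sales_fact DROP PARTITION p2006 UPDATE GLOBAL INDEXES;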
My personal view on this is: you already paid for the most expensive and feature-rich database engine available for SAP BI - so why not go and exploit these features where possible?
best regards
Lars