Query performance when multiple single variable values selected
Hi Gurus,
I have a sales analysis report that I run with some complex selection criteria. One of the variables is the Sales Organisation.
If I run the report for individual sales organisations, the performance is excellent; results are displayed in a matter of seconds. However, if I specify more than one sales organisation, the query runs and runs and runs... until it eventually times out.
For example:
I run the report for SALEORG1 and the results are displayed in less than 1 minute.
I then run the report for SALEORG2 and again the results are displayed in less than 1 minute.
If I try to run the query for both SALEORG1 and SALEORG2, the report does not display any results but continues until it times out.
Anybody got any ideas on why this would be happening?
Any advice gratefully accepted.
Regards,
David
While compression is generally something that you should be doing, I don't think it is a factor here, since query performance is OK when using just a single value.
I would do two things. First, make sure the DB statistics for the cube are current; you might even consider increasing the sample rate for the stats collection if you are using one. You don't mention which DB you use, or what type of index is on Sales Org, both of which could play a role. Does the query run against a MultiProvider on top of multiple cubes, or is there just one cube involved?
If you still have problems after refreshing the stats, then I think the next step is to get a DB execution plan for the query by running it from RSRT in debugging mode, once with just one value and once with multiple values, to see whether the DB is doing something different. Seek out your DBA if you are not familiar with execution plans.
Similar Messages
-
Use a single variable value to compare with 2 characteristics
Hi guys
I need some advice on how to use a single variable value to compare against 2 characteristics in an InfoCube.
e.g.: I have 2 characteristics in the InfoCube:
Launch Date & Closing Date
Now I want to display a report where the date entered by the user is equal to both Launch Date and Closing Date.
with regards, Bobby
Hi Bobby, if I understood your situation correctly, you have an input variable ZINPUT (related to a date 'A') and 2 other dates (your Launch and Closing dates, 'B' and 'C').
You want to display only the rows where A(user input)=B=C.
Now you have to create 2 new variables (called ZB and ZC, related to the B and C dates), NOT marked as 'ready for input' and set to 'mandatory variable entry'.
Call Transaction CMOD for the definition of the customer exit (if not already existing!).
Create a new project, maintain the short text, and assign a development class.
Goto Enhancements Assignments and assign RSR00001. Press the button components to continue.
Double-click on EXIT_SAPLRRS0_001. For documentation place the cursor on RSR00001 and use the menu Goto -> Display documentation.
Then double-click on ZXRSRU01. If the include doesn't exist you have to create it; assign a development class and a transport request.
Enter the following coding (the same logic covers both ZB and ZC):
DATA: L_S_RANGE TYPE RSR_S_RANGESID.
DATA: LOC_VAR_RANGE LIKE RRRANGEEXIT.

CASE I_VNAM.
  WHEN 'ZB' OR 'ZC'.  " same logic for both exit variables
    IF I_STEP = 2.
      READ TABLE I_T_VAR_RANGE INTO LOC_VAR_RANGE
           WITH KEY vnam = 'ZINPUT'.
      IF sy-subrc = 0.
        CLEAR L_S_RANGE.
        L_S_RANGE-LOW  = LOC_VAR_RANGE-LOW.
        L_S_RANGE-SIGN = 'I'.
        L_S_RANGE-OPT  = 'EQ'.
        APPEND L_S_RANGE TO E_T_RANGE.
      ENDIF.
    ENDIF.
ENDCASE.
Save and activate the coding and the project.
Now go to your query and use these two new variables to restrict B and C... et voilà!
Let me know if you need more help (and please assign points!).
Bye,
Roberto -
How to improve query performance when reporting on ods object?
Hi,
Can anybody tell me how to improve my query performance when reporting on an ODS object?
Thanks in advance,
Ravi Alakuntla
Hi Ravi,
Check these links, which may address your requirement:
Re: performance issues of ODS
Which criteria to follow to pick InfoObj. as secondary index of ODS?
PDF on BW performance tuning,
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Mani. -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows, and each user may have visibility to only a tiny fraction of those rows. The goal is to have a single Oracle Text index that covers all of the searchable columns in the table (a multi-column datastore) and to provide a score for each search result so that we can sort the results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search, by using a WITH clause, the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to get a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to, we reckon that if the search operation only had to scan a tiny percentage of the TEXT index, we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate subquery factoring (WITH) clauses, or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a subquery factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should be synchronized and optimized regularly, and periodically rebuilt or dropped and recreated, to keep it performing with maximum efficiency.
The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
How to use Multiple Single Option for selection in the Customer Exit
Hi,
How can we handle multiple single values in a customer exit variable?
I have a requirement which is as follows -
Table A fields -> Field Coach, Partner 2, Relation between Partner 1 & Partner 2, Valid From date, Valid To date.
Table B -> Service Month, Start Date, End Date.
Table C -> Billing Date, Execution Partner,cal month /year.
For the Field coach in TABLE A, multiple Partner 2 are present.
Report has to be built on Table C.
The user inputs the Service Month and Field Coach. The user can enter multiple Field Coach values.
For all the Field Coach values entered, the corresponding Partner 2 values have to be found from Table A and passed to the Execution Partner in Table C.
Now if we want to use a customer exit variable on the field Execution Partner, how can we handle the multiple single selections in the customer exit?
Thanks,
Shubham
Hi,
When creating the variable, you must specify that it allows multiple single values.
In the customer exit, write the code multiple times and append the values.
For example:
when 'variable'.
  l_s_range-sign = 'I'.
  l_s_range-opt  = 'EQ'.
  l_s_range-low  = <execution partner 1>.   " placeholder value
  APPEND l_s_range TO e_t_range.
  l_s_range-sign = 'I'.
  l_s_range-opt  = 'EQ'.
  l_s_range-low  = <execution partner 2>.
  APPEND l_s_range TO e_t_range.
  l_s_range-sign = 'I'.
  l_s_range-opt  = 'EQ'.
  l_s_range-low  = <execution partner 3>.
  APPEND l_s_range TO e_t_range.
Regards,
Ranganath. -
Restricting Variable values selectively
How can I restrict the values of a variable while creating a query? For example: I am creating a variable for Fiscal Year, and it has values from 2000 to 2050.
So in my report it will show all the values; I would like to restrict it to, say, between 2002 and 2015.
Thanks
Raghu
Hi Raghu,
I hope you have already created a variable for Fiscal Year. Check the variable's properties: in the Details tab, see whether 'Variable Represents' is set to Multiple Single Values or Selection Option.
If not, create one more variable with the above settings, and in the Default Values in the Properties tab you can specify the range, e.g. 2005 to 2012.
With this, the report will run for the range you have given; in case you want to change the value range, change the values in the pop-up screen, say to 2005 to 2015, and run the report.
There are other ways too, but I hope this meets your requirement.
Rgds
SVU123 -
BPS - Variable value selection
Is there a way for a user (e.g. an admin) to make a selection for a variable from multiple values in a dropdown in a web application, and have that selection enforced for all users?
William Lee
Current options, whether BPS0 or uploading, are a little hard to swallow for the client, especially when the application is already web-enabled. I hope your how-to paper will address this typical admin/user relationship and, in essence, enable on-web administration of variable values. Looking forward. Thanks, Marc.
William Lee -
RRI - Jump query, unable to pass the variable value from source to target
Hi,
I have a source query which has a variable on 0VENDOR. From this query I jump to another query, to which I want to pass this variable value. In the target query I have Vendor in the free characteristics (no filter or variable there), and in RSBBS I tried the assignment details options, keeping Vendor as generic; I also tried 'Variable' with the variable name, but nothing seems to work.
When the query is run I can jump into the target query, but the vendor variable value doesn't get passed through: the values I get in the target query are for all vendors rather than for the vendor value entered in the source.
btw we're in NW2004s.
any help appreciated with points.
thanks
Mayil
Anyway, I read somewhere that a variable with a replacement path in the target query would work; I tried it and it seems to work.
Let me know if there is another way to do it without creating a variable in the target query.
thanks
mayil -
Query property "Save and reuse variable value" doesn't work in BW EHP1
We have a workbook which contain 4 queries, every different worksheet is a different query.
All the queries are on the Same InfoCube and all the queries use the same variables.
In the properties of query no. 2 we set the parameter "Save and reuse variable values". Then we refresh the 1st query and everything works fine (the system asks for the variable values), but when we refresh the 2nd query the system doesn't reuse the variable values that we entered for query no. 1.
We don't have the same problem in 7.0.
Any help is appreciated.
Luca
Hi,
I suppose you need to set 'Save and reuse variable values' for each query individually in the workbook. I am not sure whether there is an option to apply the settings of one query to all queries in the workbook.
Please check and hope it helps.
Regards,
Adarsh Mhatre -
Display variable values selected as header in WAD
Hi All Gurus,
I need to display the variable values entered when running the report as a header above the report being displayed.
I'm using NW2004s BI and the standard template 0ANALYSIS_PATTERN. Is there a way I can display the variable values as header info?
Thanks in Advance
Sid
Hi Sid,
You may need to make a copy of this template, or create your own, and use the Information Field web item:
http://help.sap.com/saphelp_nw70/helpdata/en/47/96784226d1d242e10000000a1550b0/content.htm
You can set this to display the Variables (VARIABLES_VISIBLE)
Hope this helps... -
Poor query performance when I have select ...xmlsequence in from clause
Here is my query, and it takes forever to execute. When I run the xmlsequence query alone, it takes a few seconds, but once it is added to the FROM clause and I execute the full query below, it takes too long. Is there a better way to do the same thing?
What I am doing here: whatever is in t1 should not be in what is returned from the subquery (xmlsequence).
select distinct t1.l_type l_type,
       TO_CHAR(t1.dt1,'DD-MON-YY') dt1,
       TO_CHAR(t2.dt2,'DD-MON-YY') dt2, t1.l_nm
from table1 t1, table2 t2,
     ( select k.column_value.extract('/RHS/value/text()').getStringVal() as lki
       from d_v dv
          , table(xmlsequence(dv.xmltypecol.extract('//c/RHS'))) k
     ) qds
where t2.l_id = t1.l_id
and to_char(t1.l_id) not in qds.lki
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY);
Plan hash value: 2820226721
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 7611M| 907G| | 352M (1)|999:59:59 |
| 1 | HASH UNIQUE | | 7611M| 907G| 2155G| 352M (1)|999:59:59 |
|* 2 | HASH JOIN | | 9343M| 1113G| | 22M (2)| 76:31:45 |
| 3 | TABLE ACCESS FULL | table2 | 1088 | 17408 | | 7 (0)| 00:00:0
| 4 | NESTED LOOPS | | 8468M| 883G| | 22M (1)| 76:15:57 |
| 5 | MERGE JOIN CARTESIAN | | 1037K| 108M| | 4040 (1)| 00:00:49 |
| 6 | TABLE ACCESS FULL | D_V | 1127 | 87906 | | 56 (0)| 00
| 7 | BUFFER SORT | | 921 | 29472 | | 3984 (1)| 00:00:48 |
| 8 | TABLE ACCESS FULL | table1 | 921 | 29472 | | 4 (0)| 00:00:01 |
|* 9 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | |
Predicate Information (identified by operation id):
2 - access("t2"."L_ID"="t1"."L_ID")
9 - filter(TO_CHAR("t1"."L_ID")<>"XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(VALUE(KOKB
alue/text()')))
Message was edited by:
M@$$@cHu$eTt$ -
Slow performance when multiple threads access static variable
Originally, I was trying to keep track of the number of calls to a specific function that was invoked across many threads. I initially implemented this by incrementing a static variable, and noticed some pretty horrible performance. Does anyone have any ideas?
(I know this code is "incorrect" since increments are not atomic, even with a volatile keyword)
Essentially, I'm running two threads that each try to increment a variable a billion times. The first time through, they increment a shared static variable. As expected, the result is wrong (1339999601 instead of 2 billion), but the funny thing is that it takes about 14 seconds. The second time through, they each increment a local variable and add it to the static variable at the end. This runs correctly (assuming the final increments don't interleave, which is highly improbable) and runs in about a second.
Why the performance hit? I'm not even using volatile (just for reference, if I make the variable volatile, runtime hits about 30 seconds).
Again, I realize this code is incorrect; this is purely an interesting side-experiment.
package gui;

public class SlowExample implements Runnable
{
    private static int count = 0;

    public int ID;
    boolean loopType;

    public SlowExample(int ID, boolean loopType)
    {
        this.ID = ID;
        this.loopType = loopType;
    }

    public static void main(String[] args)
    {
        SlowExample se1 = new SlowExample(1, true);
        SlowExample se2 = new SlowExample(2, true);
        Thread t1 = new Thread(se1);
        Thread t2 = new Thread(se2);
        try
        {
            long time = System.nanoTime();
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            time = System.nanoTime() - time;
            System.out.println(count + " - " + time / 1000000000.0);
            Thread.sleep(100);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }

        count = 0;
        se1 = new SlowExample(1, false);
        se2 = new SlowExample(2, false);
        t1 = new Thread(se1);
        t2 = new Thread(se2);
        try
        {
            long time = System.nanoTime();
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            time = System.nanoTime() - time;
            System.out.println(count + " - " + time / 1000000000.0);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }

        /*
         * Results:
         * 1339999601 - 14.25520115
         * 2000000000 - 1.102497384
         */
    }

    public void run()
    {
        if (loopType)
        {
            // increment the shared static variable a billion times
            for (int a = 0; a < 1000000000; a++)
                count++;
        }
        else
        {
            int count1 = 0;
            // increment a local variable a billion times, then merge it once
            for (int a = 0; a < 1000000000; a++)
                count1++;
            count += count1;
        }
    }
}
Peter__Lawrey wrote:
Your computer has different types of memory
- registers
- level 1 cache
- level 2 cache
- main memory.
- non CPU local main memory (if you have multiple CPUs with their own memory banks)
These memory types have different speeds, and how you use a variable affects which memory it is placed in.
Plus you have the HotSpot compiler kicking in sometime during the run: for a while the VM is interpreting the code, and then all of a sudden it is executing compiled code. Reliable micro-benchmarking in Java is not easy. See [Robust Java benchmarking, Part 1: Issues|http://www.ibm.com/developerworks/java/library/j-benchmark1.html] -
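The "accumulate locally, merge once" pattern discovered in the SlowExample thread above is what `java.util.concurrent.LongAdder` (Java 8+) automates: under contention each thread increments its own internal cell, and `sum()` merges the cells on demand. A minimal sketch, with the iteration count reduced from the original billion so it finishes quickly:

```java
import java.util.concurrent.atomic.LongAdder;

public class CounterDemo {
    // Count events from two threads without losing increments and without
    // the cache-line contention of a single shared volatile/atomic field.
    static long countWithTwoThreads(int perThread) throws InterruptedException {
        final LongAdder calls = new LongAdder();
        Runnable worker = () -> {
            for (int i = 0; i < perThread; i++) {
                calls.increment();  // spreads updates across per-thread cells
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return calls.sum();  // merges the cells into an exact total
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithTwoThreads(1_000_000));  // prints 2000000
    }
}
```

Unlike the racy `count++` in the first loop, this gives an exact total, and under heavy contention it typically performs much closer to the local-variable version of the benchmark than a single `AtomicLong` would.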
Bex. Query in 3.5 : All Variable values using Text Elements not shown
I am using a variable for which I am supposed to select more than 15 values. After executing the report, I try to look at these values using Layout -> Display Text Elements -> All.
Only the first 10-11 values are shown; the rest are shown as ...
As such, I cannot see all the values in WAD too, in the info tab.
Are there any limitations on displaying the values with text elements?
Any idea how to display all the values ?
Thanks
You are right. I can do this if I select the filter values.
But I am trying to show the values entered for the variables using Layout -> Display Text Elements -> All (or Variables).
These are the values shown in the web template. The filter values go to the Data Analysis tab, and those are fine.
I want to display all the values in the Information tab, but only a few values are shown and the rest appear as ... The same is the case when I select Layout -> Display Text Elements -> All (or Variables) after I execute the query.
Performance when using bind variables
I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
My first Java program runs without using bind variables, as follows:
for (int i = 1; i <= 2000; i++) {
    stmt.executeUpdate("delete from nobind_test where id = " + i);
}
My second Java program uses bind variables, as follows:
pstmt = conn.prepareStatement("delete from bind_test where id = ?");
for (int i = 1; i <= 2000; i++) {
    pstmt.setInt(1, i);
    pstmt.executeUpdate();
}
Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
The trouble is that the program that does not use bind variables runs faster than the bind variable program.
Can anyone tell me why this would be? Is my test too simple?
Thanks.[email protected] wrote:
I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
The trouble is that the program that does not use bind variables runs faster than the bind variable program.
Can anyone tell me why this would be? Is my test too simple?
The point is that you have to find out where your test is spending most of the time.
If you've just populated a table with 100,000 records and then start to delete randomly 2,000 of them, the database has to perform a full table scan for each of the records to be deleted.
So probably most of the time is spent scanning the table over and over again, although most of blocks might already be in your database buffer cache.
The difference between the hard parse and the soft parse of such a simple statement might be negligible compared to the effort it takes to fulfill each delete execution.
You might want to change the setup of your test: Add a primary key constraint to your test table and delete the rows using this primary key as predicate. Then the time it takes to locate the row to delete should be negligible compared to the hard parse / soft parse difference.
You probably need to increase your iteration count, because deleting 2,000 records this way takes too little time and introduces measurement issues. Try to delete more rows; then you should be able to spot a significant and consistent difference between the two approaches.
In order to prevent any performance issues from a potentially degenerated index due to numerous DML activities, you could also just change your test case to query for a particular column of the row corresponding to your predicate rather than deleting it.
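To make the comparison concrete, here is a minimal JDBC sketch of the revised test along these lines. The connection URL and row count are placeholders; the table names `nobind_test` and `bind_test` come from the original post and are assumed to have a primary key on `id`:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BindTest {
    public static void main(String[] args) throws Exception {
        // args[0] is a JDBC URL placeholder, e.g. "jdbc:oracle:thin:@//host:1521/svc"
        try (Connection conn = DriverManager.getConnection(args[0])) {
            int rows = 100_000;

            long t1 = System.nanoTime();
            try (Statement stmt = conn.createStatement()) {
                for (int i = 1; i <= rows; i++) {
                    // each distinct literal is a new statement text -> hard parse
                    stmt.executeUpdate("delete from nobind_test where id = " + i);
                }
            }
            t1 = System.nanoTime() - t1;

            long t2 = System.nanoTime();
            try (PreparedStatement pstmt =
                     conn.prepareStatement("delete from bind_test where id = ?")) {
                for (int i = 1; i <= rows; i++) {
                    pstmt.setInt(1, i);   // bind variable: parsed once, re-executed
                    pstmt.executeUpdate();
                }
            }
            t2 = System.nanoTime() - t2;

            System.out.println("literals: " + t1 / 1e9 + " s, binds: " + t2 / 1e9 + " s");
        }
    }
}
```

With the primary-key predicate, locating each row is cheap, so the remaining difference between the two loops should be dominated by parsing: the literal version presents a new statement text every iteration, while the PreparedStatement is parsed once and re-executed with a new bind value.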
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to improve query performance when query with mtart
hi all,
I need to know what is the best way to query out MATNR when i have values for MTART and WERKS?
When a user select a MTART and WERKS, a whole bunch of MATNR related to them are display out. How do i improve this query so that it does not take a long time?
Thanks
William Wilstroth
Is this what you are looking for?
select a~matnr a~mtart ... b~werks ...
  into table <itab>
  from mara as a
  inner join marc as b
    on a~matnr = b~matnr
  where a~mtart = p_mtart
    and b~werks = p_werks.

There is an index on MTART in table MARA (index T, Material Type).
Kind Regards
Eswar