Performance of query: what happens if db_recycle_cache_size is zero?
Hi
In our 11g database, we have some objects whose default buffer pool is the RECYCLE pool, but I observed that the recycle pool size is zero (db_recycle_cache_size = 0).
Now if we issue a SQL statement that needs to access these objects, what happens? I strongly think that, as there is no recycle bin, the blocks will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
The issue we face is that we have a query which picks the correct index but takes around 3 minutes. The step that takes the most time is an index range scan which fetches around 50k records and accounts for 95% of the whole execution time. I then observed that the index is configured with the RECYCLE pool as its default pool. If I rerun the same query again and again, execution times are close to zero (no wonder: no physical reads in the subsequent executions).
I am thinking of setting up the recycle pool. What else might I need to consider when tuning this query?
Thanks and Regards
Pram
>
Now if we issue a SQL statement that needs to access these objects, what happens? I strongly think that, as there is no recycle bin, the blocks will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
>
Recycle bin? What does that have to do with anything?
You are correct - with no KEEP or RECYCLE cache configured, the default buffer cache and its LRU algorithm are used for aging.
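Put differently: blocks from those objects simply compete in the default cache and the least-recently-used blocks age out first. A toy Python sketch of plain LRU aging (illustrative only; Oracle's real buffer cache uses a modified, touch-count-based LRU, not this textbook version):

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: least-recently-used blocks age out first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> block contents

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # cache hit: mark recently used
            return self.blocks[block_id]
        data = read_from_disk(block_id)          # cache miss: "physical read"
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # age out the coldest block
        return data

cache = BufferCache(capacity=2)
for b in [1, 2, 1, 3]:                           # block 2 ends up coldest
    cache.get(b, read_from_disk=lambda i: f"block-{i}")
print(list(cache.blocks))                        # → [1, 3]  (block 2 aged out)
```

This also shows why the second and third executions of the query are near-instant: the index blocks are still warm in the cache, so no physical reads occur.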
>
I am thinking of setting up the recycle pool.
>
Why? It doesn't sound like you know the purpose of the recycle pool. See section 7.2.4, 'Considering Multiple Buffer Pools', in the Performance Tuning Guide:
http://docs.oracle.com/cd/B28359_01/server.111/b28274/memory.htm
>
With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool. A segment's access pattern may be atypical if it is constantly accessed (that is, hot) or infrequently accessed (for example, a large segment accessed by a batch job only once a day).
Multiple buffer pools let you address these differences. You can use a KEEP buffer pool to maintain frequently accessed segments in the buffer cache, and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache. When an object is associated with a cache, all blocks from that object are placed in that cache. Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to a specific buffer pool. The default buffer pool is of size DB_CACHE_SIZE. Each buffer pool uses the same LRU replacement policy (for example, if the KEEP pool is not large enough to store all of the segments allocated to it, then the oldest blocks age out of the cache).
By allocating objects to appropriate buffer pools, you can:
•Reduce or eliminate I/Os
•Isolate or limit an object to a separate cache
>
Using a recycle pool isn't going to affect that initial 3-minute time. It would keep other things from being aged out of the default cache when the index blocks are loaded.
Further, using a recycle pool could cause those 'close to zero' times for the second and third executions to increase, if the index blocks from the first query were 'recycled' so another query could use the buffers. Recycle means: throw it away when you are done, if needed.
>
What else i may need to consider tuning this query ?
>
If it ain't broke, don't fix it. You haven't shown that there is anything wrong with the query you are talking about. How could we possibly know whether 3 minutes is really slow or really fast? You haven't posted the query, an execution plan, row counts for the tables, or counts for the filter predicates.
See this AskTom article for his take on the RECYCLE and other pools.
Similar Messages
-
Perform a query on tree node click
This is probably very easy and I'm just having brain freeze..
I have an application that is using ColdFusion to provide
data for various controls. One control is a tree showing two levels
of data (main level shows the process/job number, then you open
that to see all the event numbers for that job). This all works
fine.
Now what I want to do is: when you select an event number, Flex
will use a ColdFusion page I set up to perform a query on the
database, using the event number and job number to get all the rest
of the data about that particular job (description, run times, etc.).
I have the click event from the tree control working fine, and I have the page in ColdFusion working fine; it outputs an XML-format file with all the relevant details based on the parameters you send it. Now the question is, how do I populate the fields on the form with the data returned from the query?
I am using an HTTPService call to get the data to/from ColdFusion (I have version 6.1 of ColdFusion).
Thanks for any help.

Well, I answered my own question. Here is what I did in case anyone else wants to know. If there is a better way, please advise.
I have a click event on the tree control. Since the tree will
only be two levels deep (parent and child) I test for whether the
item clicked has a parent. If it does, I know they clicked on a
child node. I then fire off the HTTPService with the parameters
from the child and parent nodes of the tree item that was clicked.
In the result handler of the HTTPService, I populate the
various fields using the lastResult.root.fieldname syntax of the
HTTPService.
It works as expected, but perhaps there is a better
way? -
Hi Friends,
if we run a query,what all happens in background/what are phases of query processesing.?
Many thanks
Shashikala

It has to go through:
Parsing - at this stage it performs basic checks on the T-SQL code,
Binding (algebrizer stage) - at this stage it does more parsing and builds the query tree,
Optimize - at this stage it takes the query tree from binding and finds the best way to return the result you need (costing applies here), and then
Execute.
To understand the transaction log, we first have to know how the transaction log file works internally. When you make a change to data (insert, update, or delete requests from applications using T-SQL, or from Object Explorer), SQL Server loads the corresponding data page into memory; this memory area is called the DATA CACHE. The data in the data cache is then changed, at which point the page is known as a DIRTY PAGE.
Next, all the changes made by the query are recorded in the transaction log file, which is why it is called a write-ahead log. Finally, a process known as the CHECKPOINT process writes all the committed and completed transactions to the hard drive by flushing the pages.
So simply put here are the steps
Data modification sent by application
Data pages are located in or read into, cache and modified
Modification is recorded in transaction log on DISK
Checkpoint writes committed transactions to database
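The four steps above can be modeled in a few lines of Python (a toy model for illustration only; the names `log`, `data_cache`, and `checkpoint` are stand-ins, not SQL Server internals):

```python
# Toy write-ahead-logging model: the change is recorded in the log before
# the checkpoint flushes the dirty page to the data file.
log = []          # transaction log (on disk)
data_file = {}    # data pages (on disk)
data_cache = {}   # pages in memory: the "data cache"
dirty = set()     # dirty pages not yet flushed

def modify(page_id, value):
    data_cache[page_id] = value          # steps 1-2: page in cache, modified
    dirty.add(page_id)                   # it is now a dirty page
    log.append(("set", page_id, value))  # step 3: logged FIRST (write-ahead)

def checkpoint():
    for page_id in list(dirty):          # step 4: flush committed changes
        data_file[page_id] = data_cache[page_id]
        dirty.discard(page_id)

modify("page7", "hello")
print("page7" in data_file)  # → False (logged, but not yet checkpointed)
checkpoint()
print(data_file["page7"])    # → hello
```

The key invariant is that the log entry exists before the page reaches the data file, which is what makes crash recovery possible.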
I hope this helps a bit in answering your question. Good luck. -
Does coloring of rows and columns in a query affect query performance?
Hi to all,
Does coloring of rows and columns in a query, via WAD or Report Designer, affect query performance?
If yes, then how?
What are the key factors we should consider while designing a query with regard to query performance?
I shall be thankful to you for this.
Regards,
Pavneet Rana

There will not be any significant performance impact from coloring the rows or columns.
But there are various performance parameters which should be looked into while designing a query...
Hope this PPT helps you.
http://www.comeritinc.com/UserFiles/file/tips%20tricks%20to%20speed%20%20NW%20BI%20%202009.ppt
rgds, Ghuru -
How to improve performance of query
Hi all,
How can I improve the performance of a query?
Please send to:
[email protected]
thanks in advance
Bhaskar

Hi,
Go through the following links on performance:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2 -
Improving performance of query with View
Hi ,
I'm working on a stored procedure where certain records have to be eliminated; unfortunately, the tables involved in this exclusion query are in a different database, which I expect will lead to a performance issue. Is there any way in SQL Server to store this query in a view, store its execution plan, and make it work like a stored procedure? While I believe it's a kind of crazy thought, is there any better way to improve the performance of a query when it accesses tables across databases?
Thanks,
Vishal.

Do not try to solve problems that you have not yet confirmed to exist. There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB: DATABASE, not INSTANCE) will perform poorly.
As a suggestion, write a working query using a duplicate of the table in the current database. Once it is working, then worry about performance. Once that is working as efficiently as it can, change the query to use the "remote" table rather than the duplicate. Then determine whether you have an issue. If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address. In that case, perhaps you need to change your perspective and approach to accomplishing your goal.
Urgent please
Anyone having issues with importing CR2 files into Lightroom 5? An error message comes up saying "Some import operations were not performed". Please advise what the solution is.

Sounds like the folder write-permissions issue described here, with a solution:
"Some import operations were not performed" from camera import -
Performance of query using view - what is happening here
Hi,
I can't explain the difference in performance between two queries.
For a data warehouse I have 3 tables from 3 different sources, named source1, source2 and source3; they all have identical columns:
client_key
,client_revenue
source1 has 90.000.000 rows, source2 1.000.000 rows and source3 50.000 rows
I also made a view, say all_clients, which is the union of the 3 tables plus a constant column called 'source' which corresponds to the table name.
If I run a query which shows the number of records it takes 15-20 minutes:
select source,count(*)
from all_clients
group by source.
If i run the following query it takes about 5 minutes!
select 'source1',count(*)
from source1
union
select 'source2',count(*)
from source2
union
select 'source3',count(*)
from source3.
What makes the difference?

Hmmm... interesting. In my small example things seem pretty similar. Have you done the explain plans?
An observation is that you are using a UNION rather than a UNION ALL, which would be better, as you may be incurring an unnecessary SORT UNIQUE.
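The deduplication step is visible in any engine. A small illustration using Python's sqlite3 (src1/src2 are made-up stand-ins; note that in the counts query above each branch already returns a single distinct row, so the SORT UNIQUE buys nothing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src1 (client_key INTEGER)")
conn.execute("CREATE TABLE src2 (client_key INTEGER)")
conn.executemany("INSERT INTO src1 VALUES (?)", [(i,) for i in range(5)])
conn.executemany("INSERT INTO src2 VALUES (?)", [(i,) for i in range(5)])

# UNION must deduplicate the combined rows (a sort/hash over everything).
union_rows = conn.execute(
    "SELECT client_key FROM src1 UNION SELECT client_key FROM src2"
).fetchall()

# UNION ALL just concatenates: no duplicate-elimination step at all.
union_all_rows = conn.execute(
    "SELECT client_key FROM src1 UNION ALL SELECT client_key FROM src2"
).fetchall()

print(len(union_rows), len(union_all_rows))  # → 5 10
```

When you know the branches cannot overlap (here each branch is tagged with its own constant 'sourceN' label), UNION ALL gives the same answer without the extra pass.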
create table tab1 as(select object_id, object_type from all_objects);
create table tab2 as(select object_id, object_type from all_objects);
create table tab3 as(select object_id, object_type from all_objects);
analyze table tab1 estimate statistics;
analyze table tab2 estimate statistics;
analyze table tab3 estimate statistics;
create view v_tab123 as(select 'source1' source,count(*) cnt
from tab1
union
select 'source2',count(*)
from tab2
union
select 'source3',count(*)
from tab3);
select 'source1' source,count(*) cnt
from tab1
union
select 'source2',count(*)
from tab2
union
select 'source3',count(*)
from tab3;
Operation Object Name Rows Bytes Cost TQ In/Out PStart PStop
SELECT STATEMENT Hint=CHOOSE 3 180
SORT UNIQUE 3 180
UNION-ALL
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB1 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB2 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB3 38 K 10
-- Union
select source, cnt from(
select 'source1' source,count(*) cnt
from tab1
union
select 'source2',count(*)
from tab2
union
select 'source3',count(*)
from tab3)
Operation Object Name Rows Bytes Cost TQ In/Out PStart PStop
SELECT STATEMENT Hint=CHOOSE 3 180
VIEW 3 54 180
SORT UNIQUE 3 180
UNION-ALL
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB1 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB2 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB3 38 K 10
-- Union ALL
select source, cnt from(
select 'source1' source,count(*) cnt
from tab1
union ALL
select 'source2',count(*)
from tab2
union ALL
select 'source3',count(*)
from tab3)
Operation Object Name Rows Bytes Cost TQ In/Out PStart PStop
SELECT STATEMENT Hint=CHOOSE 3 180
VIEW 3 54 180
SORT UNIQUE 3 180 <<<<============== Unnecessary
UNION-ALL
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB1 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB2 38 K 10
SORT AGGREGATE 1 60
TABLE ACCESS FULL TAB3 38 K 10
analyze table tab1 delete statistics;
analyze table tab2 delete statistics;
analyze table tab3 delete statistics;
Now with RBO - the SORT UNIQUE goes away for the above query.
Operation Object Name Rows Bytes Cost TQ In/Out PStart PStop
SELECT STATEMENT Hint=CHOOSE
VIEW
UNION-ALL
SORT AGGREGATE
TABLE ACCESS FULL TAB1
SORT AGGREGATE
TABLE ACCESS FULL TAB2
SORT AGGREGATE
TABLE ACCESS FULL TAB3 -
Performance problem: Query explain plan changes in pl/sql vs. literal args
I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially:
select * from largeTable large
join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
join...(other aux tables)
where large.pk_id between 123 and 456;
Its performance was excellent with literal arguments (1 sec per execution).
But, when I used pl/sql bind argument variables instead of 123 and 456 as literals, the explain plan changes drastically, and runs 10+ minutes.
Ex:
CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER) IS
CURSOR LT_CURSOR IS
select * from largeTable large
join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
join...(other aux tables)
where large.pk_id between param1 AND param2;
BEGIN
FOR aRecord IN LT_CURSOR
LOOP
(print timestamp...)
END LOOP;
END runQuery;
Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. largeTable.pk_id was an indexed field, as were all other join fields.
Solution:
Lacking other options, I wrote a literal query that concatenated the variable args, and opened a cursor for the literal query.
Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
Downside: Query not cached for future use. Perfectly fine for this query's purpose.
Other suggestions are welcome.

Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question... which, if you're using bind variables, is usually a really bad idea.
A basic illustration of my guess
http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus -
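For the mechanics only, the two shapes look like this in Python's sqlite3 (table and column names are invented; SQLite does not do Oracle-style bind peeking, so this shows the bind-versus-literal forms, not the plan flip itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table (pk_id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO large_table VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1000)])

# Bind-variable form: planned once, reusable for any range.
bound = conn.execute(
    "SELECT count(*) FROM large_table WHERE pk_id BETWEEN ? AND ?",
    (123, 456),
).fetchone()[0]

# Literal form (the workaround described above): the optimizer sees the
# actual values, at the cost of a fresh parse per distinct range. The int()
# casts guard against injection when concatenating values into SQL text.
lo, hi = 123, 456
literal = conn.execute(
    f"SELECT count(*) FROM large_table WHERE pk_id BETWEEN {int(lo)} AND {int(hi)}"
).fetchone()[0]

print(bound, literal)  # → 334 334
```

The trade-off in the thread is exactly this: literals gave the fast plan but sacrificed cursor reuse, which the poster judged acceptable for this query's purpose.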
Hi BW Folks,
I am working on the virtual cube 0BCS_VC10 for BCS (Business Consolidation); the base cube is 0BCS_C10. We compressed and partitioned the base cube since it was having a huge performance issue. The queries which I developed are running fine and are in production.
Now the new queries I developed, after running them in DEV, are taking 20 to 25 minutes to run, i.e. the partitioning and compression of the cube are not helping us.
I went to RSRV and check the indices of the cube and I got this
yellow signal <b>ORACLE: Index /BI0/ICS_ITEM~0 has possibly degenerated</b>
I need your suggestions on what my next step should be. Will assign full points.
Thanks

Hi Ravi,
I ran RSRV and corrected the error, but it still shows the same error with a yellow signal. Could you please tell me where else I should look in order to get the query performance right? I have already done partitioning and compression.
It was running fine until 2 days ago, and all of a sudden there is a huge runtime for the queries.
Your suggestions will be appreciated with full points.
Thanks -
SQ01 ABAP Query - input field contains leading zeros
Hi,
I have an ABAP query where some of the values are stored with leading zeros in the table. This means that the user has to enter the leading zeros as well on the selection screen in SQ01 in order to get any output.
What I would like to do is restrict the input on the selection screen to allow only the digits without any leading zeros. For example, the user should enter 602590, not 000000000602590. The output should also be without the leading zeros.
Does anyone know how to do this in the InfoSet?
Thanks in advance
Br,
Lars

Hi Lars,
The leading zeroes are appearing because your field is either a numeric or a packed decimal type.
The leading zeroes can be suppressed if you assign the packed value to a character field.
You have the option to change the data type of the field in the InfoSet.
Try converting it to the equivalent character type.
Let me know your findings.
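For comparison, the two normalization directions are one-liners outside ABAP; a Python sketch (the field length of 15 is assumed, matching the example above):

```python
raw = "000000000602590"        # value as stored in the table
user_input = "602590"          # what the user should be allowed to type

# Strip leading zeros for display:
display = raw.lstrip("0") or "0"   # 'or "0"' keeps an all-zero value visible
# Pad user input back out for the lookup (ALPHA-style conversion):
lookup_key = user_input.zfill(15)

print(display, lookup_key)  # → 602590 000000000602590
```

The InfoSet change suggested above does the equivalent of these two conversions at input and output time.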
Thanks
Ajay -
Performance problem querying multiple CLOBS
We are running Oracle 8.1.6 Standard Edition on a Sun E420r (2 x 450 MHz processors, 2 GB memory, Solaris 7). I have created Oracle Text indexes on several columns in a large table, including VARCHAR2 and CLOB columns. I am simulating search-engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying multiple CLOBs using OR, e.g.:
select count(*) from articles
where contains (abstract , 'matter OR dark OR detection') > 0
or contains (subject , 'matter OR dark OR detection') > 0
Columns abstract and subject are CLOBs. However, this query works fine with AND:
select count(*) from articles
where contains (abstract , 'matter AND dark AND detection') > 0
or contains (subject , 'matter AND dark AND detection') > 0
The explain plan gives a cost of 2157 for OR and 14.3 for AND.
I realise that multiple CONTAINS clauses are not a good thing, but the AND query returns sub-second while the OR takes minutes! The indexes are created thus:
create index article_abstract_search on article(abstract)
INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
The data and index tables are on separate tablespaces.
Can anyone suggest what is going on here, and any alternatives?
Many thanks,
Geoff Robinson

Thanks for your reply, Omar.
I have read the performance FAQ already, and it points out that single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*); I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR and the second uses AND, with the first taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times higher for OR than for AND.
Add an extra CONTAINS and it becomes 300 times more costly!
The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
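The OR-versus-AND semantics (though not Oracle Text's costing) can be reproduced with SQLite's FTS5 as a stand-in; a hedged Python sketch with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE articles USING fts5(abstract, subject)")
rows = [
    ("dark matter detection methods", "astrophysics"),
    ("protein folding", "biology"),
    ("dark energy survey", "cosmology"),
]
conn.executemany("INSERT INTO articles VALUES (?, ?)", rows)

# OR: any term appearing in any indexed column qualifies a row.
or_count = conn.execute(
    "SELECT count(*) FROM articles WHERE articles MATCH 'matter OR dark OR detection'"
).fetchone()[0]

# AND: every term must appear somewhere in the row.
and_count = conn.execute(
    "SELECT count(*) FROM articles WHERE articles MATCH 'matter AND dark AND detection'"
).fetchone()[0]

print(or_count, and_count)  # → 2 1
```

The asymmetry the poster is seeing makes sense at this level: OR must merge the posting lists of every term, while AND can stop early once any term's list rules a row out, so OR generally touches far more index data.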
Regards
Geoff -
WMI query to SCCM 2012 returns zero results in C#
In 2007 this works without issues; however, in 2012 the 2nd query returns zero results, even though doing the same thing manually produces results. Here is my code:
using (WqlConnectionManager connection = Connect(getServer))
{
    string query = "select * from SMS_ObjectContainerItem WHERE ContainerNodeID='" + ProfileID + "'";
    foreach (IResultObject getobject in connection.QueryProcessor.ExecuteQuery(query))
    {
        getPackageID = getobject["InstanceKey"].StringValue;
        query = "select * from SMS_Collection WHERE CollectionID='" + getPackageID + "'";
        // **This is where it will return zero results**
        foreach (IResultObject searchID in connection.QueryProcessor.ExecuteQuery(query))
        {
            CMProfiles profile = new CMProfiles();
            profile.Name = searchID["Name"].StringValue;
            profile.ID = getPackageID;
            results.Add(profile);
        }
    }
}
I'm pulling my hair out trying to understand why the 2nd query is not returning any results, when this works fine in SCCM 2007.

What you are using here are the SDK libraries, which are admittedly very thin wrappers around WMI, but not quite. Have you tried implementing this directly using WMI? I have written software that manipulates 2007/2012 and eventually found that the SDK libraries just sort of got in my way, so now I do all of my interactions with SCCM directly with WMI and forgo the SDK libraries.
Before I changed over, I did find that there are some oddities in using the SDK. What eventually worked for me was to bind to the 2007 SDK libraries, deliver them with my application, and use the 2007 libraries regardless of whether I was connecting to 2007 or 2012. I found that I would run into issues using the 2012 libraries to talk to 2007, but the 2007 libraries would work fine with both.
I have tested your queries, directly using WMI and PowerShell on a 2012 server and they work fine. I am presuming that the folder that you are attempting to access is a Device Collection folder.
Once again, I would suggest using WMI directly, especially if making a product that will work with 2007 and 2012; you will be a much happier person.
It would be greatly appreciated if you would mark any helpful entries as helpful and if the entry answers your question, please mark it with the Answer link. -
Performance optimization: query taking 7 minutes
Hi All ,
Requirement: I need to improve the performance of a custom program (the program takes more than 7 minutes and then dumps). I checked in runtime analysis, and the query below takes the most time.
Please let me know an approach to minimize the query time.
TYPES: BEGIN OF lty_dberchz1,
belnr TYPE dberchz1-belnr,
belzeile TYPE dberchz1-belzeile,
belzart TYPE dberchz1-belzart,
buchrel TYPE dberchz1-buchrel,
tariftyp TYPE dberchz1-tariftyp,
tarifnr TYPE dberchz1-tarifnr,
v_zahl1 TYPE dberchz1-v_zahl1,
n_zahl1 TYPE dberchz1-n_zahl1,
v_zahl3 TYPE dberchz1-v_zahl3,
n_zahl3 TYPE dberchz1-n_zahl3,
nettobtr TYPE dberchz3-nettobtr,
twaers TYPE dberchz3-twaers,
END OF lty_dberchz1.
DATA: lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
WITH NON-UNIQUE KEY belnr belzeile
INITIAL SIZE 0 WITH HEADER LINE.
DATA: lt_dberchz1a LIKE TABLE OF lt_dberchz1 WITH HEADER LINE.
*** ***********************************Taking more time*************************************************
*Individual line items
SELECT dberchz1~belnr dberchz1~belzeile
belzart buchrel tariftyp tarifnr
v_zahl1 n_zahl1 v_zahl3 n_zahl3
nettobtr twaers
INTO TABLE lt_dberchz1
FROM dberchz1 JOIN dberchz3
ON dberchz1~belnr = dberchz3~belnr
AND dberchz1~belzeile = dberchz3~belzeile
WHERE buchrel EQ 'X'.
DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.
LOOP AT lt_dberchz1.
READ TABLE lt_dberdlb BINARY SEARCH
WITH KEY billdoc = lt_dberchz1-belnr.
IF sy-subrc NE 0.
DELETE lt_dberchz1.
ENDIF.
ENDLOOP.
lt_dberchz1a[] = lt_dberchz1[].
DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
OR belzart EQ 'ZUTAX2'
OR belzart EQ 'ZUTAX3'.
DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
AND belzart NE 'ZUTAX2'
AND belzart NE 'ZUTAX3'.
***************************second query************************************
* SELECT opbel budat vkont partner sto_opbel
INTO CORRESPONDING FIELDS OF TABLE lt_erdk
FROM erdk
WHERE budat IN r_budat
AND druckdat NE '00000000'
AND stokz EQ space
AND intopbel EQ space
AND total_amnt GT 40000.
**************************taking more time*********************************
SORT lt_erdk BY opbel.
IF lt_erdk[] IS NOT INITIAL.
SELECT DISTINCT printdoc billdoc vertrag
INTO CORRESPONDING FIELDS OF TABLE lt_dberdlb
FROM dberdlb
* begin of code change by vishal
FOR ALL ENTRIES IN lt_erdk
WHERE printdoc = lt_erdk-opbel.
IF lt_dberdlb[] IS NOT INITIAL.
SELECT belnr belzart ab bis aus01
v_zahl1 n_zahl1 v_zahl3 n_zahl3
INTO CORRESPONDING FIELDS OF TABLE lt_dberchz1
FROM dberchz1
FOR ALL ENTRIES IN lt_dberdlb
WHERE belnr EQ lt_dberdlb-billdoc
AND belzart IN ('ZUTAX1', 'ZUTAX2', 'ZUTAX3').
ENDIF. "lt_dberdlb
endif.
Regards
Rahul
Edited by: Matt on Mar 17, 2009 4:17 PM - Added tags and moved to correct forum

Run the SQL Trace and tell us where the time is spent; see here how to use it:
SELECT dberchz1~belnr dberchz1~belzeile
belzart buchrel tariftyp tarifnr
v_zahl1 n_zahl1 v_zahl3 n_zahl3
nettobtr twaers
INTO TABLE lt_dberchz1
FROM dberchz1 JOIN dberchz3
ON dberchz1~belnr = dberchz3~belnr
AND dberchz1~belzeile = dberchz3~belzeile
WHERE buchrel EQ 'X'.
I assume it is this SELECT, but without data this is quite useless.
How large are the two tables dberchz1 and dberchz3?
What are the key fields?
Is there an index on buchrel?
Please use aliases: dberchz1 AS a
INNER JOIN dberchz3 AS b
To which table does buchrel belong?
I don't know your tables, but buchrel EQ 'X' seems not selective, so a lot of data might be selected.
lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
WITH NON-UNIQUE KEY belnr belzeile
INITIAL SIZE 0 WITH HEADER LINE.
DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.
LOOP AT lt_dberchz1.
READ TABLE lt_dberdlb BINARY SEARCH
WITH KEY billdoc = lt_dberchz1-belnr.
IF sy-subrc NE 0.
DELETE lt_dberchz1.
ENDIF.
ENDLOOP.
lt_dberchz1a[] = lt_dberchz1[].
DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
OR belzart EQ 'ZUTAX2'
OR belzart EQ 'ZUTAX3'.
DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
AND belzart NE 'ZUTAX2'
AND belzart NE 'ZUTAX3'.
This is really poor coding: there is a sorted table, but a completely different key is needed and used, so the sort is useless.
Then there is a loop which processes the full table anyway; no sort is necessary.
Where is the sort, if you use BINARY SEARCH on table lt_dberdlb?
Then the tables are again processed completely:
DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
AND belzart NE 'ZUTAX2'
AND belzart NE 'ZUTAX3'.
What is that? Are you sure that anything can survive this delete?
Siegfried -
Performance Tuning : Query session
Dear Friends,
I am working with Hyperion Interactive Reporting and I am very new to this environment. I have a query session with 11 tables, all simple joins. Whenever I process the query, it takes a long time to fetch the data; of course, it has millions of records. Do you have any idea how I can reduce the query processing time? Please also tell me what things I should and should not do; any query performance tips for Brio are welcome.
Best Regards,
S. Murugan

Query performance is based on a variety of factors:
- Network speed
- Size of the dataset returned -- are you really bringing back 1 million rows?
- A properly tuned database -- capture the SQL and have a DBA review it
- A properly created query - correct order of tables in the FROM clause -- this is based on the order they were brought into the data model section
Wayne Van Sluys
TopDown Consulting