Query is slower on the first execution
I noticed that when I run a query from the shell, the first run is significantly slower. I used the time command to measure the execution time; the first run takes more than 10 times as long as the second.
For example, my query took 5.97s on the first run, but the same query took only 0.3s on the second run.
I wonder whether this is general behaviour for Berkeley DB XML or whether it is specific to the container that I created?
Thank you!
All of our DBs show that behaviour too. I assume it has to do with caching in memory and (on *nix systems) disk caching as well. Any disk-based DB I've worked with has that "feature".
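To see the cold-versus-warm effect for yourself, here is a minimal sketch using Python's sqlite3 purely as a stand-in for Berkeley DB XML (an assumption for illustration only): the first run of a query pays for cache warm-up and statement preparation, while repeat runs largely don't.

```python
import sqlite3
import time

# Stand-in illustration: any disk-backed database tends to be slower on the
# first query because pages must be faulted in and the statement compiled.
# SQLite here is only a proxy for the Berkeley DB XML behaviour described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [("document %d" % i,) for i in range(100_000)],
)

def timed_query():
    start = time.perf_counter()
    row = conn.execute(
        "SELECT COUNT(*) FROM docs WHERE body LIKE '%999%'"
    ).fetchone()
    return row[0], time.perf_counter() - start

count1, first = timed_query()   # "cold" run
count2, second = timed_query()  # "warm" run, typically faster
print(f"first: {first:.4f}s, second: {second:.4f}s, count: {count1}")
```

Both runs return identical results; only the elapsed time differs, which is why the 10x gap in the question is usually nothing to worry about.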
Similar Messages
-
Query execution slow for the first time...
I am new to Oracle sql.
I have a query whose performance is very slow the first time, but on subsequent executions it's fast. When executed for the first time it takes around 45 seconds, and on subsequent executions, 600 milliseconds.
Is there a specific reason for this to happen? I am calling this query from my Java code using a prepared statement.
Are the differences in queries solely in the where clause? If so, can you parameterize the query and use bind variables instead, so the only difference from one query to the next is the values of the bind variables? Using bind variables in your queries will enable the parser to reuse the already parsed queries even when the bound values differ.
Also, there may be other optimizations that can be made to either your query or the tables it is querying against. To improve your query's performance you need to understand how it's accessing the database.
See Rob's thread on query optimization, "When your query takes too long" (http://forums.oracle.com/forums/thread.jspa?threadID=501834&start=0&tstart=0), for a primer on optimizing your query. -
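A rough illustration of the bind-variable advice above, using Python's sqlite3 as a stand-in (the original context is Oracle via JDBC, so table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OPEN"), (2, "CLOSED"), (3, "OPEN")])

# Bad: a new SQL text per value forces a fresh parse of each statement
# (in Oracle terms, no shared-cursor reuse in the library cache).
for status in ("OPEN", "CLOSED"):
    conn.execute(f"SELECT id FROM orders WHERE status = '{status}'")

# Good: one SQL text, values supplied as bind variables; the parsed
# statement can be reused for every execution.
sql = "SELECT id FROM orders WHERE status = ? ORDER BY id"
open_ids = [r[0] for r in conn.execute(sql, ("OPEN",))]
closed_ids = [r[0] for r in conn.execute(sql, ("CLOSED",))]
print(open_ids, closed_ids)  # [1, 3] [2]
```

In Java, the equivalent is a `PreparedStatement` with `?` placeholders and `setString`/`setInt` calls, exactly as the answer suggests.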
We experience a performance problem with some of our Stored Procedures. SQL Server is "Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)".
Situation:
SQL Server Proc1 executes some SQL statements and starts some other stored procedures. I open a SQL Management Studio session (example: session_id 105) and trace session 105 with the SQL Server Profiler.
I start Proc1. When Proc1 starts the execution of Proc2, the Profiler trace shows a delay of 6 seconds between SP:StmtStarting "execute db..proc2 @SomeVar" and SP:Starting "execute db..proc2 @SomeVar".
All following executions of Proc1 in session 105 run without a delay between SP:StmtStarting "execute db..proc2 @SomeVar" and SP:Starting "execute db..proc2 @SomeVar".
But when I open a new SQL Server Management Studio session (session_id 124), the first execution of Proc1, when it executes Proc2, again shows the delay of 6 seconds between SP:StmtStarting "execute db..proc2 @SomeVar" and SP:Starting "execute db..proc2 @SomeVar".
Proc 1 starts the execution of Proc2 with a simple execute statement like this:
Execute DB..Proc2 @SomeVar
So it's not dynamic SQL.
What is SQL Server doing? I understand that SQL Server has to do some work when it executes a stored procedure for the first time. But why is SQL Server doing it in every new session?
How can I prevent this behavior, or how can I make it faster?
Best Regards,
Paolo
>In my case the temp tables ruined the performance.
Creating temp tables takes time and resources. Temporary table usage should be justified and tested in stored procedures. There are cases where temporary table usage is helpful, especially with very complex queries.
In your case it appears that not one but several temp tables were applied. That can be punishing.
Paul White's blog: "Ask anyone what the primary advantage of temporary tables over
table variables is, and the chances are they will say that temporary tables support statistics and table variables do not. This is true, of course; even the indexes that enforce PRIMARY KEY and UNIQUE constraints on table variables do not have
populated statistics associated with them, and it is not possible to manually create statistics or non-constraint indexes on table variables. Intuitively, then, any query that has alternative execution plans to choose from ought to benefit from using
a temporary table rather than a table variable. This is also true, up to a point.
The most common use of temporary tables is in stored procedures, where they can be very useful as a way of simplifying a large query into smaller parts, giving the optimizer a better chance of finding good execution plans, providing statistical
information about an intermediate result set, and probably making future maintenance of the procedure easier as well. In case it is not obvious, breaking a complex query into smaller steps using temporary tables makes life easier for the optimizer in
several ways. Smaller queries tend to have a smaller number of possible execution plans, reducing the chances that the optimizer will miss a good one. Complex queries are also less likely to have good cardinality (row count) estimates and statistical
information, since small errors tend to grow quickly as more and more operators appear in the plan.
This is a very important point that is not widely appreciated. The SQL Server query optimizer is only as good as the information it has to work with. If cardinality or statistical information is badly wrong
at any point in the plan, the result will most likely be a poor execution plan selection from that point forward. It is not just a matter of creating and maintaining appropriate statistics on the base tables, either. The optimizer does
use these as a starting point, but it also derives new statistics at every plan operator, and things can quickly conspire to make these (invisible) derived statistics hopelessly wrong. The only real sign that something is wrong (aside from poor performance,
naturally) is that actual row counts vary widely from the optimizer’s estimate. Sadly, SQL Server does not make it easy today to routinely collect and analyse differences between cardinality estimates and runtime row counts, though some small (but welcome)
steps forward have been made in SQL Server 2012 with new row count information in the
sys.dm_exec_query_stats view.
The benefits of using simplifying temporary tables where necessary are potentially better execution plans, now and in the future as data distribution changes and new execution plans are compiled. On the cost side of the ledger we have
the extra effort needed to populate the temporary table, and maintain the statistics. In addition, we expect a higher number of recompilations for optimality reasons due to changes in statistics. In short, we have a trade-off between potential
execution plan quality and maintenance/recompilation cost.
LINK:
http://sqlblog.com/blogs/paul_white/archive/2012/08/15/temporary-tables-in-stored-procedures.aspx
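The trade-off Paul White describes can be sketched concretely. The following is a minimal illustration in Python's sqlite3 standing in for SQL Server (so `CREATE TEMP TABLE ... AS SELECT` replaces `SELECT ... INTO #temp`; table names are hypothetical): a complex aggregation is materialized once into a temporary table, and later steps query the smaller intermediate result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('N', 10), ('N', 20), ('S', 5), ('S', 7);
""")

# Materialize the intermediate aggregate once, the way the quoted article
# recommends for complex procedures: smaller steps give the optimizer a
# concrete intermediate result (with its own statistics, in SQL Server)
# to plan against.
conn.executescript("""
    CREATE TEMP TABLE region_totals AS
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region;
""")

# Follow-up queries reuse the materialized step instead of repeating the
# aggregation inline.
top = conn.execute(
    "SELECT region, total FROM region_totals ORDER BY total DESC LIMIT 1"
).fetchone()
print(top)  # ('N', 30.0)
```

As the article notes, the cost side of this pattern is the extra population and (in SQL Server) statistics maintenance and possible recompilations, so it pays off mainly for genuinely complex queries.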
Kalman Toth Database & OLAP Architect
IPAD SELECT Query Video Tutorial 3.5 Hours
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
The Photoshop CS6 Pen tool's reaction rate became slower than it was on first use of the tool. I would like to know the fundamental cause and solution rather than just optimization tips.
I have no problem with CS6 Pen tool performance. It's instantaneous. I'm using Windows 7 Pro and CS6 version 13.1.2 on a Dell workstation PC.
Supply pertinent information for quicker answers
The more information you supply about your situation, the better equipped other community members will be to answer. Consider including the following in your question:
Adobe product and version number
Operating system and version number
The full text of any error message(s)
What you were doing when the problem occurred
Screenshots of the problem
Computer hardware, such as CPU; GPU; amount of RAM; etc. -
My laptop had water spilled on it 3 days ago; I let it sit as instructed by Best Buy. I am on it right now, but the mouse movement isn't smooth anymore and it is slower. The first time I turned it on it didn't respond at all. Can anyone tell how bad the damage really is without actually seeing it? Do you think it will be just a simple repair, or am I looking at a whole new logic board?
Until you take it in so that a technician can look at it, it is all guessing and nothing more.
Allan -
How to capture the first execution of a report
Hi,
I am executing a report in background. The first time the report is executed I have to do one kind of processing; after the 1st execution I have to do different processing.
Could someone please tell me how to detect the 1st execution of a report? Is there a system variable?
Appreciate your <removed by moderator> response.
Thanks,
Dikshitha G
Edited by: Thomas Zloch on May 12, 2011 11:36 AM
Keshav.T wrote:
Are you going to create a Z table for this? Is there nobody to advise you in your firm?
Hello Keshav,
A couple of years ago I would have recommended using the INDX table, but it has its demerits.
Maintenance of a Z-table is easier than the INDX table. Say there is some error in the program and you want to override the flag: will it be easier to do so in the INDX table, or via SM30 for the Z-table?
As a matter of fact, I recommend the solution provided by Florian: the TVARVC technique.
1. Create a parameter (specific to your program) & transport it. (See the trxn STVARV.)
2. In your program check the value of this param & set it accordingly.
Using TVARVC you will kill 2 birds with one stone:
1. You don't have to create a custom table.
2. Easy maintenance via STVARV trxn.
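TVARVC is SAP-specific, but the underlying idea, checking and setting a persistent flag outside the program, can be sketched generically. Here is a minimal sketch in Python with sqlite3 standing in for the parameter table (the table name, flag name, and helper are all hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical stand-in for SAP's TVARVC parameter table: one row per
# program-specific flag, maintained outside the program itself
# (in SAP terms, via transaction STVARV rather than SM30).
conn.execute("CREATE TABLE params (name TEXT PRIMARY KEY, value TEXT)")

FLAG = "ZREPORT_FIRST_RUN_DONE"  # hypothetical parameter name

def run_report(conn):
    """Return True when this is the first execution, then persist the flag."""
    row = conn.execute(
        "SELECT value FROM params WHERE name = ?", (FLAG,)
    ).fetchone()
    first_run = row is None or row[0] != "X"
    if first_run:
        # ... first-execution-only processing would go here ...
        conn.execute(
            "INSERT OR REPLACE INTO params (name, value) VALUES (?, 'X')",
            (FLAG,),
        )
    return first_run

first = run_report(conn)   # True: flag not set yet
second = run_report(conn)  # False: flag persisted by the first call
print(first, second)
```

The maintenance argument above applies here too: because the flag lives in a table rather than in the program, an operator can reset it without a code change.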
Hope you get the point!
Cheers,
Suhas
PS: In our system we have a Z-table designed specifically for this particular purpose. All the programs having this kind of requirement refer to this table. -
Query only works on the first word of the managed property
I have several managed properties that are not returning query results as expected; they return results only if the queried term matches the first word in the property. Any other query returns no results.
Scenario:
filename = "Sample Excel File.xlsx" (OOTB property)
FooOwner = "Martin Zima"
FooSiteName = "Test Site"
If I query filename:Sample, filename:File, filename:Excel (or any combination), they all work as expected. However, if I query FooOwner:Martin it works, but FooOwner:Martin Zima fails, and FooOwner:Zima also fails. Similarly, FooSiteName:Site fails (only FooSiteName:Test works). Everything seems OK in the crawled property and managed property. Can anyone please help?
Hi Martin,
I tried in my environment; author:"Rebecca Tu" and author:Rebecca Tu returned the same result. Please try author:* and see if it returns all the results. If there is no result, then we should check the DisplayAuthor property in the Search schema.
In the default search result page, there should be Author refiner in the Refinement. You could use the default one as below:
Regards,
Rebecca Tu
TechNet Community Support -
Query result area on the first row instead of last row
Hi BI experts,
I want to display the result of the BI query in the first row instead of the last row. Could somebody help me accomplish this?
I tried Layout -> Move result area, but it was not possible.
I need your kind suggestion.
Thanks.
Hi Jyoti,
Further, open your query in Query Designer and look at the query properties; under the tab "Rows/Columns",
change the result position of rows to "Below", you can see the preview there itself.
This should solve your problem.
Best Regards,
*Assign points if you find the answer useful. -
Broadband speed extremely slow after the first 10 ...
Hi so we recently did a home move and the phone and internet came on when they were supposed to.
For the first few hours of the first day the download speed was going up and down as it was supposed to, before settling on 287.1 kbps down and 444.6 kbps up, and it hasn't budged since.
It still drops and reconnects now and then. The Home Hub has been turned on the whole time; I did try changing the filter in case it was that, but it didn't make a difference, and it has been restarted a couple of times.
Our estimated speed on BT's line checker is 11 Mb with an estimated range of 7.5-14 Mb
Here are the current Home Hub stats:
Connection Information
Line state: Connected
Connection time: 0 days, 01:35:38
Downstream: 287.1 Kbps
Upstream: 444.6 Kbps
ADSL Settings
VPI/VCI: 0/38
Type: PPPoA
Modulation: G.992.5 Annex A
Latency type: Interleaved
Noise margin (Down/Up): 12.5 dB / 16.5 dB
Line attenuation (Down/Up): 28.6 dB / 18.4 dB
Output power (Down/Up): 18.3 dBm / 12.4 dBm
FEC Events (Down/Up): 16 / 0
CRC Events (Down/Up): 0 / 17
Loss of Framing (Local/Remote): 0 / 0
Loss of Signal (Local/Remote): 0 / 0
Loss of Power (Local/Remote): 0 / 0
HEC Events (Down/Up): 0 / 11
Error Seconds (Local/Remote): 173 / 147
Any help appreciated
When I try http://diagnostics.bt.com/login/?workflow=Connectivity and type my phone number, it gives me a message telling me it doesn't look like I'm using a BT Home Hub, when I am.
EDIT: I should have said, there is only the 1 master socket, no extensions, and only the phone and router plugged into the filter, so there's no bell wire interference etc.
Okay, I have a Home Hub 3; screenshot of the connection page attached, and here are the BTW results:
Download speed achieved during the test was - 0.26 Mbps
For your connection, the acceptable range of speeds is 0.1 Mbps-0.25 Mbps.
IP Profile for your line is - 0.25 Mbps
Upload speed achieved during the test was - 0.35 Mbps
Additional Information:
Upstream Rate IP profile on your line is - 0.83 Mbps
I should note that on the first day my connection was turned on, the sync speed was switching between 2 Mb and 12 Mb, which was fine; then it dropped to 0.26 and it hasn't changed since. And I've been connected to the test socket for 3 days. -
Sql query extremely slow in the new linux environment , memory issues?
We just migrated to a new dev environment on Linux RedHat 5, and now the query is very slow. Using TOAD to run the query, it took about 700 ms to finish; however, from any server connection the SQL query takes hours to finish.
I checked the TOAD monitor; it said I need to increase db_buffer_cache and that the shared pool is too small.
Also, three red alerts from TOAD:
1. Library Cache get hit ratio: Dynamic or unsharable sql
2. Chained fetch ratio: PCT free too low for a table
3. Parse to execute ratio: High parse to execute ratio.
The app team said it ran really quickly on the old AIX system; however, I ran it on the old system and monitored it in TOAD, and it gave me the same red alerts there too, though it did return query results a lot quicker.
Here is the parameters in the old system (11gr1 on AIX):
SQL> show parameter target
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 0
memory_target big integer 0
pga_aggregate_target big integer 278928K
sga_target big integer 0
SQL> show parameter shared
NAME TYPE VALUE
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 31876710
shared_pool_size big integer 608M
shared_server_sessions integer
shared_servers integer 0
SQL> show parameter db_buffer
SQL> show parameter buffer
NAME TYPE VALUE
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 2048000
use_indirect_data_buffers boolean FALSE
SQL>
In new 11gr2 Linux REDHAT parameter:
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 2512M
memory_target big integer 2512M
parallel_servers_target integer 192
pga_aggregate_target big integer 0
sga_target big integer 1648M
SQL> show parameter shared
NAME TYPE VALUE
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 28M
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
SQL> show parameter buffer
NAME TYPE VALUE
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 18857984
use_indirect_data_buffers boolean FALSE
SQL>
Please help. Thanks in advance.
Duplicate question. Originally posted in "sql query slow in new redhat enviornment".
Please post in just one forum. -
Sql query runs slower from the application
Hi,
We are using Oracle 9iAS on an AIX box. The JDK version used is 1.3.1. From the J2EE application, when we perform a search, the SQL query takes forever to return the results. I know that we are waiting on the database because I can see the query working when I look at TOAD. But if I run the same query on the database server itself, it returns the results in less than a second. Could you guys throw some light on how we could troubleshoot this problem? Thanks.
When the results have to travel over the network, it is slow, and when they don't, it is fast.
That is what you are saying, correct?
So your approach should be to not bring so much data over the network. Don't select columns you don't need, and don't select rows you don't need. -
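A minimal sketch of that advice (Python's sqlite3 standing in for the Oracle-via-JDBC setup; table and column names are hypothetical): project only the columns you need and push the row filter into the database, instead of dragging everything to the client.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, notes TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(i, f"name{i}", "x" * 1000) for i in range(1000)])

# Wasteful: drags every column (including the large notes field) and every
# row across the wire, then filters on the client.
wasteful = [r for r in conn.execute("SELECT * FROM customers") if r[0] < 10]

# Better: let the database do the filtering and project only the columns
# the application actually needs.
lean = conn.execute(
    "SELECT id, name FROM customers WHERE id < 10 ORDER BY id"
).fetchall()
print(len(lean))  # 10
```

Both return the same 10 customers, but the second ships a tiny fraction of the bytes over the network, which is exactly the gap between running on the server and running from the app.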
First execution of query is always best than all the following
I have a static query that I execute in SQL*Plus on the same machine as the database, and every time the first execution is faster than all the following executions, and it's driving me crazy! Any help is appreciated...
The database is Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The query has a comment in it so I can change the query to be "different" from the previous ones.
I ran it in SQL*Plus with set autotrace traceonly and timing on, and the results are:
First execution:
SQL> @route
35 rows selected.
Elapsed: 00:00:45.49
Execution Plan
Plan hash value: 1336726102
Statistics
442 recursive calls
0 db block gets
6244491 consistent gets
0 physical reads
176 redo size
3333 bytes sent via SQL*Net to client
9006 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
46 sorts (memory)
0 sorts (disk)
35 rows processed
Second Execution:
SQL> @route
35 rows selected.
Elapsed: 00:05:04.85
Execution Plan
Plan hash value: 1336726102
Statistics
386 recursive calls
0 db block gets
1282647 consistent gets
0 physical reads
840 redo size
3333 bytes sent via SQL*Net to client
9006 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
47 sorts (memory)
0 sorts (disk)
35 rows processed
Second run:
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
PLAN_TABLE_OUTPUT
SQL_ID f2xgmq6bsd6g1, child number 1
SELECT /* + opt_param('_optimizer_use_feedback' 'false') zzzzz5 */
route, MIN(load_start) load_start, store, lc_id, loading_seq,
pick_zone_name, MAX(user_name) user_name, SUM(colli) colli,
SUM(colli_picked) colli_picked, round((1 - ((SUM(colli) -
SUM(colli_picked)) / (SUM(colli)))) * 100, 2) || '%' colli_ready FROM
(SELECT route, load_start, store,
lc_id, loading_seq, CASE
WHEN (nbr_pick_zones > 1) THEN 'MULTIPLE'
ELSE pick_zone_name END
pick_zone_name, (SELECT u.user_name
FROM dms_user u WHERE u.user_id = tab4.user_id
AND u.facility_id = 'E1') user_name, colli,
(colli - colli_not_picked) colli_picked FROM
(SELECT tab3.route, tab3.load_start,
tab3.store,
Plan hash value: 2089643899
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 35 |00:01:51.10 | 1286K| | | |
| 1 | TABLE ACCESS BY INDEX ROWID | DMS_USER | 18 | 1 | 18 |00:00:00.01 | 37 | | | |
|* 2 | INDEX UNIQUE SCAN | DMS_USER1 | 18 | 1 | 18 |00:00:00.01 | 17 | | | |
| 3 | SORT GROUP BY | | 1 | 2 | 35 |00:01:51.10 | 1286K| 9216 | 9216 | 8192 (0)|
| 4 | VIEW | | 1 | 2 | 43 |00:01:51.09 | 1286K| | | |
| 5 | WINDOW BUFFER | | 1 | 2 | 43 |00:01:51.09 | 1286K| 6144 | 6144 | 6144 (0)|
| 6 | SORT GROUP BY | | 1 | 2 | 43 |00:01:51.09 | 1286K| 11264 | 11264 |10240 (0)|
| 7 | NESTED LOOPS OUTER | | 1 | 2 | 43 |00:01:51.09 | 1286K| | | |
| 8 | VIEW | | 1 | 2 | 43 |00:01:51.09 | 1286K| | | |
| 9 | UNION-ALL | | 1 | | 43 |00:01:51.09 | 1286K| | | |
| 10 | HASH GROUP BY | | 1 | 1 | 0 |00:00:01.04 | 33948 | 752K| 752K| |
| 11 | VIEW | VM_NWVW_1 | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
|* 12 | FILTER | | 1 | | 0 |00:00:01.04 | 33948 | | | |
| 13 | HASH GROUP BY | | 1 | 1 | 0 |00:00:01.04 | 33948 | 694K| 694K| |
| 14 | NESTED LOOPS | | 1 | | 0 |00:00:01.04 | 33948 | | | |
| 15 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 16 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 17 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 18 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 19 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 20 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 21 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 22 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 23 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
| 24 | NESTED LOOPS | | 1 | 1 | 0 |00:00:01.04 | 33948 | | | |
|* 25 | TABLE ACCESS BY INDEX ROWID| CONTAINER | 1 | 13 | 36 |00:00:00.01 | 21 | | | |
|* 26 | INDEX SKIP SCAN | CONTAINER_CARR_SERV_ROUTE | 1 | 28 | 36 |00:00:00.01 | 12 | | | |
|* 27 | TABLE ACCESS BY INDEX ROWID| PICK_DIRECTIVE | 36 | 1 | 0 |00:00:01.04 | 33927 | | | |
|* 28 | INDEX RANGE SCAN | PICK_DIRECTIVE_I1 | 36 | 1 | 569K|00:00:00.47 | 12936 | | | |
| 29 | TABLE ACCESS BY INDEX ROWID | PICK_FROM_LOC | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 30 | INDEX RANGE SCAN | PICK_FROM_LOC_ITEM | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 31 | TABLE ACCESS BY INDEX ROWID | LOCATION | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 32 | INDEX RANGE SCAN | LOCATION_ZONE | 0 | 1053 | 0 |00:00:00.01 | 0 | | | |
|* 33 | TABLE ACCESS BY INDEX ROWID | STOCK_ALLOCATION | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
PLAN_TABLE_OUTPUT
|* 34 | INDEX UNIQUE SCAN | STOCK_ALLOCATION1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 35 | TABLE ACCESS BY INDEX ROWID | NB_TMS_INBOUND | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 36 | INDEX RANGE SCAN | NB_TMS_INBOUND_PK | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 37 | TABLE ACCESS BY INDEX ROWID | ROUTE_DEST | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 38 | INDEX RANGE SCAN | ROUTE_DEST1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 39 | TABLE ACCESS BY INDEX ROWID | NB_LANE_SELECTION | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 40 | INDEX UNIQUE SCAN | NB_LANE_SELECTION_PK | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 41 | TABLE ACCESS BY INDEX ROWID | ZONE | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 42 | INDEX UNIQUE SCAN | ZONE1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 43 | TABLE ACCESS BY INDEX ROWID | LOC_TYPE | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 44 | INDEX UNIQUE SCAN | LOC_TYPE1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 45 | INDEX UNIQUE SCAN | ITEM_MASTER1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 46 | TABLE ACCESS BY INDEX ROWID | ITEM_MASTER | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 47 | HASH GROUP BY | | 1 | 1 | 43 |00:01:51.09 | 1252K| 759K| 759K| 1336K (0)|
| 48 | NESTED LOOPS | | 1 | | 1963 |00:13:34.35 | 1252K| | | |
| 49 | NESTED LOOPS | | 1 | 1 | 1963 |00:13:34.33 | 1250K| | | |
| 50 | NESTED LOOPS | | 1 | 1 | 1963 |00:13:34.31 | 1250K| | | |
| 51 | NESTED LOOPS | | 1 | 1 | 37M|00:02:03.09 | 822K| | | |
| 52 | MERGE JOIN CARTESIAN | | 1 | 1 | 41685 |00:00:00.20 | 9793 | | | |
| 53 | NESTED LOOPS | | 1 | | 1985 |00:00:00.13 | 9790 | | | |
| 54 | NESTED LOOPS | | 1 | 1 | 1985 |00:00:00.10 | 7060 | | | |
| 55 | NESTED LOOPS | | 1 | 14 | 1985 |00:00:00.08 | 5047 | | | |
| 56 | NESTED LOOPS | | 1 | 14 | 1985 |00:00:00.03 | 1152 | | | |
|* 57 | HASH JOIN | | 1 | 1 | 36 |00:00:00.01 | 81 | 774K| 774K| 760K (0)|
|* 58 | HASH JOIN | | 1 | 7 | 36 |00:00:00.01 | 67 | 779K| 779K| 391K (0)|
|* 59 | HASH JOIN | | 1 | 36 | 36 |00:00:00.01 | 51 | 835K| 835K| 654K (0)|
|* 60 | TABLE ACCESS BY INDEX ROWID | CONTAINER | 1 | 36 | 36 |00:00:00.01 | 21 | | | |
|* 61 | INDEX SKIP SCAN | CONTAINER_CARR_SERV_ROUTE | 1 | 28 | 36 |00:00:00.01 | 12 | | | |
|* 62 | TABLE ACCESS FULL | NB_TMS_INBOUND | 1 | 144 | 144 |00:00:00.01 | 30 | | | |
|* 63 | TABLE ACCESS FULL | NB_LANE_SELECTION | 1 | 64 | 64 |00:00:00.01 | 16 | | | |
|* 64 | TABLE ACCESS FULL | ROUTE_DEST | 1 | 2003 | 2149 |00:00:00.01 | 14 | | | |
| 65 | TABLE ACCESS BY INDEX ROWID | CONTAINER_ITEM | 36 | 38 | 1985 |00:00:00.02 | 1071 | | | |
|* 66 | INDEX RANGE SCAN | CONTAINER_ITEM1 | 36 | 38 | 1985 |00:00:00.01 | 94 | | | |
| 67 | TABLE ACCESS BY INDEX ROWID | ITEM_MASTER | 1985 | 1 | 1985 |00:00:00.04 | 3895 | | | |
|* 68 | INDEX UNIQUE SCAN | ITEM_MASTER1 | 1985 | 1 | 1985 |00:00:00.02 | 1910 | | | |
|* 69 | INDEX UNIQUE SCAN | STOCK_ALLOCATION1 | 1985 | 1 | 1985 |00:00:00.02 | 2013 | | | |
|* 70 | TABLE ACCESS BY INDEX ROWID | STOCK_ALLOCATION | 1985 | 1 | 1985 |00:00:00.02 | 2730 | | | |
| 71 | BUFFER SORT | | 1985 | 21 | 41685 |00:00:00.04 | 3 | 2048 | 2048 | 2048 (0)|
| 72 | TABLE ACCESS BY INDEX ROWID | ZONE | 1 | 21 | 21 |00:00:00.01 | 3 | | | |
|* 73 | INDEX RANGE SCAN | ZONE_ZG_IDX | 1 | 21 | 21 |00:00:00.01 | 1 | | | |
| 74 | TABLE ACCESS BY INDEX ROWID | LOCATION | 41685 | 382 | 37M|00:01:40.75 | 812K| | | |
|* 75 | INDEX RANGE SCAN | LOCATION_ZONE | 41685 | 1053 | 37M|00:00:30.18 | 256K| | | |
|* 76 | VIEW PUSHED PREDICATE | | 37M| 1 | 1963 |00:10:49.31 | 428K| | | |
|* 77 | FILTER | | 37M| | 37M|00:09:39.35 | 428K| | | |
| 78 | SORT AGGREGATE | | 37M| 1 | 37M|00:08:18.42 | 428K| | | |
|* 79 | FILTER | | 37M| | 37M|00:05:23.60 | 428K| | | |
| 80 | TABLE ACCESS BY INDEX ROWID | PICK_FROM_LOC | 37M| 1 | 37M|00:03:59.31 | 428K| | | |
|* 81 | INDEX RANGE SCAN | PICK_FROM_LOC_ITEM | 37M| 1 | 37M|00:02:16.12 | 272K| | | |
|* 82 | INDEX UNIQUE SCAN | LOC_TYPE1 | 1963 | 1 | 1963 |00:00:00.01 | 4 | | | |
|* 83 | TABLE ACCESS BY INDEX ROWID | LOC_TYPE | 1963 | 1 | 1963 |00:00:00.01 | 1963 | | | |
|* 84 | VIEW PUSHED PREDICATE | | 43 | 1 | 43 |00:00:00.01 | 236 | | | |
|* 85 | WINDOW SORT PUSHED RANK | | 43 | 1 | 86 |00:00:00.01 | 236 | 2048 | 2048 | 2048 (0)|
|* 86 | TABLE ACCESS BY INDEX ROWID | CONTAINER_HISTORY | 43 | 1 | 160 |00:00:00.01 | 236 | | | |
|* 87 | INDEX RANGE SCAN | CHST_IDX | 43 | 2 | 246 |00:00:00.01 | 90 | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - access("U"."FACILITY_ID"='E1' AND "U"."USER_ID"=:B1)
12 - filter(("L"."LOCATION_ID"=MAX("PL"."LOCATION_ID") AND "PD"."PICK_FROM_CONTAINER_ID"=MAX("PL"."LOCATION_ID")))
25 - filter((INTERNAL_FUNCTION("C"."CONTAINER_STATUS") AND "C"."DEST_ID" IS NOT NULL))
26 - access("C"."FACILITY_ID"='E1' AND "C"."ROUTE"='556')
filter("C"."ROUTE"='556')
27 - filter("C"."CONTAINER_ID"="PD"."PICK_TO_CONTAINER_ID")
28 - access("PD"."FACILITY_ID"='E1')
30 - access("PL"."FACILITY_ID"='E1' AND "PL"."ITEM_ID"="PD"."ITEM_ID")
32 - access("L"."FACILITY_ID"='E1' AND "PD"."ZONE"="L"."ZONE")
33 - filter(("SA"."ORDER_DETAIL_UDA1"='2012-10-26' AND "SA"."SA_ROUTE" IS NOT NULL))
34 - access("SA"."FACILITY_ID"='E1' AND "SA"."DISTRO_NBR"="PD"."DISTRO_NBR" AND "SA"."DEST_ID"="C"."DEST_ID" AND "SA"."ITEM_ID"="PD"."ITEM_ID")
35 - filter(("C"."ROUTE"="A"."ROUTE" AND "SA"."SA_ROUTE"="A"."ROUTE"))
36 - access("A"."FACILITY_ID"='E1' AND "C"."DEST_ID"="A"."DEST_ID")
filter((TO_CHAR(INTERNAL_FUNCTION("A"."BUSINESS_DATE"),'YYYY-MM-DD')='2012-10-26' AND
"SA"."ORDER_DETAIL_UDA1"=TO_CHAR(INTERNAL_FUNCTION("A"."BUSINESS_DATE"),'YYYY-MM-DD')))
38 - access("RD"."FACILITY_ID"='E1' AND "C"."ROUTE"="RD"."ROUTE" AND "C"."DEST_ID"="RD"."DEST_ID")
39 - filter("RD"."SHIP_DATE"=TRUNC(INTERNAL_FUNCTION("LS"."LOAD_START")))
40 - access("LS"."FACILITY_ID"='E1' AND "LS"."BUSINESS_DATE"="A"."BUSINESS_DATE" AND "LS"."ROUTE"="A"."ROUTE")
42 - access("Z"."FACILITY_ID"='E1' AND "L"."ZONE"="Z"."ZONE")
43 - filter(("C"."LOCATION_ID" IS NULL OR "LT"."DOOR_LOCATION_FLAG"='N'))
44 - access("LT"."FACILITY_ID"='E1' AND "L"."LOCATION_TYPE"="LT"."LOCATION_TYPE")
45 - access("I"."FACILITY_ID"='E1' AND "PL"."ITEM_ID"="I"."ITEM_ID")
57 - access("C"."FACILITY_ID"="RD"."FACILITY_ID" AND "C"."ROUTE"="RD"."ROUTE" AND "C"."DEST_ID"="RD"."DEST_ID" AND
"RD"."SHIP_DATE"=TRUNC(INTERNAL_FUNCTION("LS"."LOAD_START")))
58 - access("LS"."FACILITY_ID"="A"."FACILITY_ID" AND "LS"."ROUTE"="A"."ROUTE" AND "LS"."BUSINESS_DATE"="A"."BUSINESS_DATE")
59 - access("C"."DEST_ID"="A"."DEST_ID" AND "C"."ROUTE"="A"."ROUTE")
60 - filter(("C"."USER_ID" IS NOT NULL AND INTERNAL_FUNCTION("C"."CONTAINER_STATUS") AND "C"."DEST_ID" IS NOT NULL))
61 - access("C"."FACILITY_ID"='E1' AND "C"."ROUTE"='556')
filter("C"."ROUTE"='556')
62 - filter(("A"."BUSINESS_DATE"=TO_DATE(' 2012-10-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "A"."FACILITY_ID"='E1'))
63 - filter(("LS"."BUSINESS_DATE"=TO_DATE(' 2012-10-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "LS"."FACILITY_ID"='E1'))
64 - filter("RD"."FACILITY_ID"='E1')
66 - access("CI"."FACILITY_ID"='E1' AND "CI"."CONTAINER_ID"="C"."CONTAINER_ID")
68 - access("I"."FACILITY_ID"='E1' AND "CI"."ITEM_ID"="I"."ITEM_ID")
69 - access("SA"."FACILITY_ID"='E1' AND "SA"."DISTRO_NBR"="CI"."DISTRO_NBR" AND "SA"."DEST_ID"="A"."DEST_ID" AND "SA"."ITEM_ID"="CI"."ITEM_ID")
70 - filter(("SA"."ORDER_DETAIL_UDA1"='2012-10-26' AND "SA"."SA_ROUTE" IS NOT NULL AND "SA"."SA_ROUTE"="A"."ROUTE"))
73 - access("Z"."FACILITY_ID"='E1')
75 - access("L"."FACILITY_ID"='E1' AND "L"."ZONE"="Z"."ZONE")
76 - filter(("L"."LOCATION_ID"="P"."LOCATION_ID" AND "C"."PICK_TYPE"=DECODE(TO_CHAR(NVL("P"."CASEPACK",0)),'0','U','CF')))
77 - filter(COUNT(*)>0)
79 - filter(('E1'="CI"."FACILITY_ID" AND 'E1'="L"."FACILITY_ID"))
81 - access("PL"."FACILITY_ID"='E1' AND "PL"."ITEM_ID"="CI"."ITEM_ID")
82 - access("LT"."FACILITY_ID"='E1' AND "L"."LOCATION_TYPE"="LT"."LOCATION_TYPE")
83 - filter(("C"."LOCATION_ID" IS NULL OR "LT"."DOOR_LOCATION_FLAG"='N'))
84 - filter("LAST_USER"=1)
85 - filter(ROW_NUMBER() OVER ( PARTITION BY "CH"."FACILITY_ID","CH"."CONTAINER_ID" ORDER BY INTERNAL_FUNCTION("CH"."ACTION_TS") DESC )<=1)
86 - filter((INTERNAL_FUNCTION("CH"."CONTAINER_STATUS") AND "CH"."USER_ID"<>'WMS_OWNER'))
87 - access("CH"."FACILITY_ID"="TAB1"."FACILITY_ID" AND "CH"."CONTAINER_ID"="TAB1"."LC_ID" AND "CH"."TABLE_NAME"='C' AND "CH"."ACTION_TYPE"='M')
Note
- cardinality feedback used for this statement
169 rows selected. -
First execution of sql statements is slow every morning
Dears,
we are running an oracle11g database (HP-UX Itanium) and have the following problem:
Every morning the first execution of statements is very slow.
After the first execution the statements are running fine.
Does anyone have an idea where this can come from?
Is it possible that the cache (shared pool, etc.) will be deleted every night (for example when new statistics are generated or something else)?
Regards,
Ilja
I think you are close to answering your question.
As you know, Oracle 11g has an automated job to run performance stats every night at approx. 10:00pm (until 2:00am).
This is run by the dbms_scheduler.
This could be causing the shared_pool to be flushed because it certainly uses it a lot. I have to manually flush the shared_pool every night in one of my databases before this job runs otherwise I get an ORA-01461.
But, what I'm surprised is that you have this problem only in the morning.
It seems you would want to pin your SQL in memory and perhaps set a profile for your execution.
You don't bounce your database every night, do you? -
Getting the first row for each group
Hi Everyone,
I have a query which returns a number of rows, all of which are valid. What I need to do is to get the first row for each group and work with those records.
For example ...
client flight startairport destairport stops
A fl123 LGW BKK 2
A fl124 LHR BKK 5
B fl432 LGW XYZ 7
B fl432 MAN ABC 8
.... etc.
I would need to return one row for Client A and one row for Client B (etc.) but find that I can't use the MIN function because it would return the MIN value for each column (i.e. mix up the rows). I also can't use rownum=1 because this would only return one row rather than one row per group (i.e. per client).
I have been investigating, and most postings seem to say that it needs a second query to look up the first row for each grouping. This solution would not really be practical because my query is already quite complex, and incorporating duplicate subqueries would make the whole thing much too cumbersome.
So what I really need is a "MIN by group" or a "TOP by group" or a "ROWNUM=1 by group" function.
Can anyone help me with this? I'm sure that there must be a command to handle this.
Regards and any thanks,
Alan Searle
Cologne, Germany
Something like this:
select *
from (
  select table1.*,
         row_number() over (partition by col1, col2 order by col3, col4) rn
  from table1
)
where rn = 1
In the "partition by" clause you place what you normally would "group by".
In the "order by" clause you define which will have row_number = 1.
Edit:
PS. The docs have more examples on using analytical functions: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/functions004.htm#i81407 ;-)
Edited by: Kim Berg Hansen on Sep 16, 2011 10:46 AM -
10g Form - first execute query - very slow
I have the following issue:
Enter an application
open a form in enter query mode
first time execute query is very slow (several minutes)
every other time it's quick (couple seconds or less)
I can leave the form, use other forms within the app, come back and query is still quick. It's only the first time after initially launching the app.
Any ideas what might be causing this?
We have the same application running in 6i client/server with a 9i DB in production. We are testing the upgraded application, which is 10g Forms on OAS with a 10g DB. We don't have the issue in the current production client/server app.