6 million + records, query takes more than 50 min to execute.
Hi
I am trying to get records from a table which has more than 6 million records.
The value set of the particular column IND can be
NULL
'0'
'1'
and other values like 'A', 'B', '6'
The data type of IND is varchar.
I want all the records where the value is other than NULL, '0' and '1'.
I tried this simple query
SELECT ID, IND
FROM tablename
WHERE
IND IS NOT NULL
AND IND <> '0'
AND IND <> '1'
Now this query is taking more than 30-40 min. Is there a way I can speed it up? Also, I can't add an index on the column.
Any suggestions?
I don't know anything about your tables or hardware (nor your Oracle version because you didn't post it) but 30 - 40 minutes seems excessive for a full table scan on only 6 million rows.
On my lowly test instance, this full table scan takes a little over a minute:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4 | 4800 (2)| 00:01:11 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| ATLAS_SALES_HISTORY | 6618K| 25M| 4800 (2)| 00:01:11 |
Statistics
631 recursive calls
0 db block gets
55740 consistent gets
55609 physical reads
0 redo size
415 bytes sent via SQL*Net to client
346 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
15 sorts (memory)
0 sorts (disk)
1 rows processed
Are you pulling all 6 million rows across the network to your client machine? (And waiting for the rows to scroll?)
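One way to tell whether the time is going into the scan or into shipping rows to the client is to aggregate on the server side, as in the plan above. This is only a sketch using the poster's placeholder names:

```sql
-- If this COUNT(*) comes back quickly, the full scan itself is cheap,
-- and the 30-40 minutes is being spent transferring 6 million rows
-- over SQL*Net and rendering them in the client.
SELECT COUNT(*)
FROM tablename
WHERE ind IS NOT NULL
  AND ind NOT IN ('0', '1');
```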
Similar Messages
-
Ldap search query takes more than 10 seconds
LDAP query takes more than 10 seconds to execute.
For validating the configured policy, the Access Manager (Sun Java System Access Manager) contacts the LDAP server (Sun Java System Directory Server 6.2) to get the users in a dynamic group. The timeout value configured in Access Manager for LDAP searches is 10 seconds.
Issue: the LDAP query sometimes takes more than 10 seconds to execute.
The query executes in less than 10 seconds in most cases, but takes more than 10 seconds in some. The total number of users in the LDAP is less than 1500.
7 etime =1
6 etime =1
102 etime=4
51 etime=5
26 etime=6
5 etime=7
4 etime=8
From the LDAP access logs we can see the following entries; sometimes the query takes more than 10 seconds:
[28/May/2012:14:21:26 +0200] conn=281 op=41433 msgId=853995 - SRCH base="dc=****,dc=****,dc=com" scope=2 filter="(&(&(***=true)(**=true))(objectClass=vfperson))" attrs=ALL
[28/May/2012:14:21:36 +0200] conn=281 op=41434 msgId=854001 - ABANDON targetop=41433 msgid=853995 nentries=884 etime=10
The query was aborted by the Access Manager after 10 seconds.
Please post your suggestions to resolve this issue.
1. How can we find out why the query is taking more than 10 seconds?
2. What are the next steps to resolve this issue?
Hi Marco,
Thanks for your suggestions.
Sorry for replying late. I was out of office for a few weeks.
1) Have you already tuned the caches? (entry cache, db cache, filesystem cache?)
We are using the db cache and we have not done any tuning for the caches. The application was working fine and there were not many changes in the number of users.
2) Unfortunately we don't have direct access to the environment, and we have contacted the responsible team to verify the server health during the issue.
Regarding the I/O operations, we can see that the load balancer is pinging the LDAP server every 15 seconds to check the status of the LDAP servers, which yields a new connection on every hit (on average 8 connections per minute).
3) We are using cn=dsameuser to bind to the directory server. Other configuration details for LDAP:
LDAP Connection Pool Minimum Size: 1
LDAP Connection Pool Maximum Size:10
Maximum Results Returned from Search: 1700
Search Timeout: 10
Is the configured Search Timeout value appropriate? (We have less than 1500 users in the LDAP server.)
Also, is there any impact from setting Maximum Results Returned from Search to 1700? (The Sun document for AM says that the ideal value for this is 1000 and that anything higher will impact performance.)
The application was running without timeout issues for the last 2 years and there was not much increase in the number of users in the system (at most 200 users were added in the last 2 years).
Thanks,
Jay -
QUERY TAKES MORE THAN 30 MINUTES AND IT'S CANCELED
Hi
I have one workbook and sometimes it takes more than 30 minutes to execute, but Discoverer cancels it. Does anybody know how to change this? If the query takes more than 30 minutes, I need it to be allowed to finish.
Any help will be appreciated.
Best Regards
Yuri López
Hi
You need to alter the timeout settings and these are located in multiple places.
Discoverer Plus / Viewer using this workflow:
1. Edit the pref.txt file on the server located here: $ORACLE_HOME\discoverer\util
2. Locate the preference called Timeout and change it to your desired value in seconds. The default is 1800 which means 30 minutes
3. Save pref.txt
4. Execute applypreferences.bat if running on Windows or applypreferences.sh if running on Linux or Unix
5. Stop and restart the Discoverer server
Discoverer Administrator using this workflow:
1. Launch Discoverer Administrator
2. From the menu bar, select Tools | Privileges
3. Select the user
4. Open the Query Governor tab
5. If "Prevent queries from running longer than" is checked, increase the limit
6. Click OK
Note: if "Prevent queries from running longer than" is not checked, then the Timeout in pref.txt controls how long queries run before they are stopped. If it is checked and its value is lower than Timeout, you need to increase it.
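For reference, the Timeout preference in pref.txt is a single line and the value is in seconds. The 7200 below is just an example for a 2-hour limit, not a recommended setting:

```
Timeout = 7200
```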
Let me know if this helps
Best wishes
Michael Armstrong-Smith
URL: http://learndiscoverer.com
Blog: http://learndiscoverer.blogspot.com -
Need to write a query that takes more than 2 minutes to execute
Hi ,
I am using Oracle 10g as my database.
I am writing a small program for testing purposes, and my requirement is to write a query that takes more than 2 minutes to execute. Right now I have only a small table called "Users" with very little data.
Please let me know how I can achieve this.
Thanks.
So please tell me, how can I achieve this? Thanks in advance.
P. Forstman's example above will probably be more reliable, but here's an example of my idea - harder to control timing (untested):
select count(*)
from dba_objects o, dba_tables t, dba_tab_columns tc
where o.object_name||'' = t.table_name||''
and o.owner||'' = t.owner||''
and t.table_name||'' = tc.table_name||''
and t.owner||'' = tc.owner||'' -
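Cross-join tricks like the one above aside, another common way to make a deliberately slow query, assuming you have (or can be granted) EXECUTE on DBMS_LOCK, is to call its SLEEP procedure from a function. A sketch, with made-up names:

```sql
-- Function that sleeps for 150 seconds, then returns.
-- Any query that calls it will take at least that long.
CREATE OR REPLACE FUNCTION slow_down RETURN NUMBER IS
BEGIN
  DBMS_LOCK.SLEEP(150);  -- argument is in seconds
  RETURN 1;
END;
/

SELECT slow_down FROM dual;
```

Unlike the dictionary join, this gives you precise control over the elapsed time.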
I have been using a USB 2.1 10/100M Ethernet adaptor, but it takes more than 15 minutes to detect the network interface. What should I do? I am not using Apple's own adaptor; it's from a local company.
jsavage9621,
It pains me to hear about your experience with the Home Phone Connect. This device usually works seamlessly and is a great alternative to a landline phone. It sounds like we've done our fair share of work on your account here. I'm going to go ahead and send you a Private Message so that we can access your account and review any open tickets for you. I look forward to speaking with you.
TrevorC_VZW
Follow us on Twitter @VZWSupport -
Save an empty document in InDesign CC and Illustrator CC takes more than 3 min.
Save an empty document in InDesign CC and Illustrator CC takes more than 3 min.
The same if you open an empty document.
I monitored the application status using Activity Monitor and every time I saved a document the application didn't respond then, after about three minutes, comes back to life.
I'm working on a MacBook Pro, 2.6 GHz, 8 GB RAM, with OS X 10.9.
Please let me know if someone else has had the same issue, because I use Adobe software not only for my personal work but also to show people how Adobe software works...
Thanks.
Andrea Spinazzola
Adobe Drive is the reason!
Please communicate this widely, because I think it is the reason why many Adobe CC users report bad performance.
I have heard of many issues related to this and no one talks about it.
Thanks Hank. -
Orchestration takes more than 30 mins to open
Hi all
We have BizTalk 2013 Server, and our development enviroment is Visual Studio 2013.
Any time a new Orchestration is added to the project,
it takes more than 30 mins to open the Orchestration.
What could be the issue here?
Thanks in Advance for your input.
Regards
ake
AKE
Hi,
As of now, BizTalk 2013 development is supported on Visual Studio 2012, and the
release notes state that BizTalk 2013 R2 will support Visual Studio 2013.
And as far as I know, BizTalk 2013 R2 has not been released yet.
Maheshkumar S Tiwari | User Page | Blog | BizTalk Server: How Map Works on Port Level -
Splitting an event takes more than 3 min
Hi,
I have about 8,000 photos in my library and 2,500 photos in a single event. Now I want to extract a set of pictures from this event and move it into a new event. Is there a faster way of doing this than splitting the event before and after the set of pictures? That creates 3 events, and then I have to move the 3rd event back into the first; the 2nd event is what I want to keep.
BTW, with over 8,000 pictures in the library, is it normal that splitting an event takes more than 3 minutes?
Danielle
Flag the pics you want to move to the new Event and then go Events -> Create Event from Flagged Photos. That will probably be faster.
BTW, with over 8.000 pictures in the library, is it normal that splitting an event takes more than 3 minutes?
I wouldn't have thought so, but it might take a while with an Event that has more than 2,500 pics in it. Remember, Events in the iPhoto Window correspond exactly with the Folders in the Originals Folder in the iPhoto Library package file (right-click on it in the Pictures Folder -> Show Package Contents). So splitting an event literally means creating a folder and moving files to it, and so on.
Regards
TD -
Takes more than 10 mins to boot my macbook pro!
Hi, I bought my 15-inch MacBook Pro in April 2008.
It worked fine until about last October, when I installed
Vista on my computer.
I have no problem booting into Vista, but when I try to get into Leopard
it takes over 10 minutes to boot.
I have tried reinstalling both Leopard and Vista but still have the same problem.
Does anyone know how to solve this problem?
Is your EFI up to date?
-
At first launch of a fdf file, Windows installer takes more than 30 min
I updated my Acrobat 9 software. After that, I noticed that at the first launch of a .fdf file, the Windows Installer works for about 30 minutes (during which time I cannot do anything) before opening the file. After that, everything seems normal. I have Acrobat 9 on several machines (university license) and this happens on each of them. I have instructed my co-workers to rename the .fdf files as .pdf to avoid this problem, and to only try to open their first .fdf file when they know they have at least 30 minutes free... Why is this? This problem did not occur with previous versions of Acrobat.
Just for the heck of it, I renamed an FDF file to a PDF file and tried to open it; no go. I was able to import an FDF into the original PDF. The install option is a bit of an issue and I am not sure what to say to that. It may be looking for the PDF file on the hard drive and not actually trying to install Acrobat. I opened an FDF in the same folder as the PDF and it opened right up. Otherwise, I think the location of the PDF has to be encoded in the FDF file. The form name is typically at the end of the FDF file (open it in Notepad or similar to look at the details). You will note that it does not start with %PDF, but %FDF.
As I mentioned, Acrobat is likely looking for the original document listed at the end of the FDF file. -
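To illustrate the structure described above, a minimal FDF file looks roughly like the sketch below. The field name, value, and form.pdf filename are made up; the point is the %FDF header and the /F entry that names the PDF the data belongs to:

```
%FDF-1.2
1 0 obj
<< /FDF << /Fields [ << /T (Name) /V (Jane) >> ] /F (form.pdf) >> >>
endobj
trailer
<< /Root 1 0 R >>
%%EOF
```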
Query taking more than 1/2 hour for 80 million rows in fact table
Hi All,
I am stuck on this query, as it is taking more than 35 mins to execute for 80 million rows. My SLA is less than 30 mins for 160 million rows, i.e. double the number.
Below is the query and the Execution Plan.
SELECT txn_id AS txn_id,
acntng_entry_src AS txn_src,
f.hrarchy_dmn_id AS hrarchy_dmn_id,
f.prduct_dmn_id AS prduct_dmn_id,
f.pstng_crncy_id AS pstng_crncy_id,
f.acntng_entry_typ AS acntng_entry_typ,
MIN (d.date_value) AS min_val_dt,
GREATEST (MAX (d.date_value),
LEAST ('07-Feb-2009', d.fin_year_end_dt))
AS max_val_dt
FROM Position_Fact f, Date_Dimension d
WHERE f.val_dt_dmn_id = d.date_dmn_id
GROUP BY txn_id,
acntng_entry_src,
f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.pstng_crncy_id,
f.acntng_entry_typ,
d.fin_year_end_dt
Execution Plan is as:
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
9 TABLE ACCESS FULL TABLE Date_Dimension Cost: 29 Bytes: 94,960 Cardinality: 4,748
10 TABLE ACCESS FULL TABLE Position_Fact Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
Kindly suggest, how to make it faster.
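One small side note on the query above, separate from the performance question: the literal '07-Feb-2009' is compared with DATE columns via LEAST, which relies on implicit conversion and the session's NLS date settings. A sketch of just that expression with an explicit TO_DATE (format mask assumed from the literal's shape):

```sql
GREATEST (MAX (d.date_value),
          LEAST (TO_DATE ('07-Feb-2009', 'DD-Mon-YYYY'), d.fin_year_end_dt))
  AS max_val_dt
```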
Regards,
Sid
The above is just the part of the query that is taking the maximum time.
Kindly find the entire query and the plan below:
WITH MIN_MX_DT
AS
( SELECT
TXN_ID AS TXN_ID,
ACNTNG_ENTRY_SRC AS TXN_SRC,
F.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP,
MIN (D.DATE_VALUE) AS MIN_VAL_DT,
GREATEST (MAX (D.DATE_VALUE), LEAST (:B1, D.FIN_YEAR_END_DT))
AS MAX_VAL_DT
FROM
proj_PSTNG_FCT F, proj_DATE_DMN D
WHERE
F.VAL_DT_DMN_ID = D.DATE_DMN_ID
GROUP BY
TXN_ID,
ACNTNG_ENTRY_SRC,
F.HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP,
D.FIN_YEAR_END_DT),
SLCT_RCRDS
AS (
SELECT
M.TXN_ID,
M.TXN_SRC,
M.HRARCHY_DMN_ID,
M.PRDUCT_DMN_ID,
M.PSTNG_CRNCY_ID,
M.ACNTNG_ENTRY_TYP,
D.DATE_VALUE AS VAL_DT,
D.DATE_DMN_ID,
D.FIN_WEEK_NUM AS FIN_WEEK_NUM,
D.FIN_YEAR_STRT AS FIN_YEAR_STRT,
D.FIN_YEAR_END AS FIN_YEAR_END
FROM
MIN_MX_DT M, proj_DATE_DMN D
WHERE
D.HOLIDAY_IND = 0
AND D.DATE_VALUE >= MIN_VAL_DT
AND D.DATE_VALUE <= MAX_VAL_DT),
DLY_HDRS
AS (
SELECT
S.TXN_ID AS TXN_ID,
S.TXN_SRC AS TXN_SRC,
S.DATE_DMN_ID AS VAL_DT_DMN_ID,
S.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS MTM_AMT,
NVL (
LAG (
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0)))
OVER (
PARTITION BY S.TXN_ID,
S.TXN_SRC,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID
ORDER BY S.VAL_DT),
0)
AS YSTDY_MTM,
SUM (
DECODE (
PNL_TYP_NM,
:B4, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS CASH_AMT,
SUM (
DECODE (
PNL_TYP_NM,
:B3, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS PAY_REC_AMT,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
NVL (TRUNC (F.REVSN_DT), S.VAL_DT) AS REVSN_DT,
S.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP
FROM
SLCT_RCRDS S,
proj_PSTNG_FCT F,
proj_ACNT_DMN AD,
proj_PNL_TYP_DMN PTD
WHERE
S.TXN_ID = F.TXN_ID(+)
AND S.TXN_SRC = F.ACNTNG_ENTRY_SRC(+)
AND S.HRARCHY_DMN_ID = F.HRARCHY_DMN_ID(+)
AND S.PRDUCT_DMN_ID = F.PRDUCT_DMN_ID(+)
AND S.PSTNG_CRNCY_ID = F.PSTNG_CRNCY_ID(+)
AND S.DATE_DMN_ID = F.VAL_DT_DMN_ID(+)
AND S.ACNTNG_ENTRY_TYP = F.ACNTNG_ENTRY_TYP(+)
AND SUBSTR (AD.ACNT_NUM, 0, 1) IN (1, 2, 3)
AND NVL (F.ACNT_DMN_ID, 1) = AD.ACNT_DMN_ID
AND NVL (F.PNL_TYP_DMN_ID, 1) = PTD.PNL_TYP_DMN_ID
GROUP BY
S.TXN_ID,
S.TXN_SRC,
S.DATE_DMN_ID,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
TRUNC (F.REVSN_DT),
S.ACNTNG_ENTRY_TYP,
F.TXN_ID)
SELECT
D.TXN_ID,
D.VAL_DT_DMN_ID,
D.REVSN_DT,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.YSTDY_MTM,
D.MTM_AMT,
D.CASH_AMT,
D.PAY_REC_AMT,
MTM_AMT + CASH_AMT + PAY_REC_AMT AS DLY_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_WEEK_NUM || D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS WTD_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS YTD_PNL,
D.ACNTNG_ENTRY_TYP AS ACNTNG_PSTNG_TYP,
'EOD ETL' AS CRTD_BY,
SYSTIMESTAMP AS CRTN_DT,
NULL AS MDFD_BY,
NULL AS MDFCTN_DT
FROM
DLY_HDRS D
Plan
SELECT STATEMENT ALL_ROWS Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
25 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
24 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
23 VIEW Cost: 10,519,225 Bytes: 3,369,680,886 Cardinality: 7,854,734
22 WINDOW BUFFER Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
21 SORT GROUP BY Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
20 HASH JOIN Cost: 10,296,285 Bytes: 997,551,218 Cardinality: 7,854,734
1 TABLE ACCESS FULL TABLE proj_PNL_TYP_DMN Cost: 3 Bytes: 45 Cardinality: 5
19 HASH JOIN Cost: 10,296,173 Bytes: 2,695,349,628 Cardinality: 22,841,946
5 VIEW VIEW index$_join$_007 Cost: 3 Bytes: 84 Cardinality: 7
4 HASH JOIN
2 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_PK Cost: 1 Bytes: 84 Cardinality: 7
3 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_UNQ Cost: 1 Bytes: 84 Cardinality: 7
18 HASH JOIN RIGHT OUTER Cost: 10,293,077 Bytes: 68,925,225,244 Cardinality: 650,237,974
6 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,986 Bytes: 4,545,502,426 Cardinality: 77,042,414
17 VIEW Cost: 7,300,017 Bytes: 30,561,184,778 Cardinality: 650,237,974
16 MERGE JOIN Cost: 7,300,017 Bytes: 230,184,242,796 Cardinality: 650,237,974
8 SORT JOIN Cost: 30 Bytes: 87,776 Cardinality: 3,376
7 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 87,776 Cardinality: 3,376
15 FILTER
14 SORT JOIN Cost: 7,238,488 Bytes: 25,269,911,792 Cardinality: 77,042,414
13 VIEW Cost: 1,835,219 Bytes: 25,269,911,792 Cardinality: 77,042,414
12 SORT GROUP BY Cost: 1,835,219 Bytes: 3,698,035,872 Cardinality: 77,042,414
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
9 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 94,960 Cardinality: 4,748
10 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414 -
Query takes more time from client
Hi,
I have a select query (which refers to views and calls a function), which fetches results in 2 secs when executed from database. But takes more than 10 mins from the client.
The tkprof output for the call from the client is given below. Could you please suggest what is going wrong and how this can be addressed?
The index IDX_table1_1 is on col3.
Trace file: trace_file.trc
Sort options: exeela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT ROUND(SUM(NVL((col1 - col2), (SYSDATE - col2))))
FROM
table1 WHERE col3 = :B1 GROUP BY col3
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 7402 0.27 7.40 0 0 0 0
Fetch 7402 1.13 59.37 1663 22535 0 7335
total 14804 1.40 66.77 1663 22535 0 7335
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 32 (ORADBA) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 SORT (GROUP BY NOSORT)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'table1'
(TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_table1_1'
(INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1663 1.37 57.71
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 16039 3.09 385.04
db file scattered read 34 0.21 1.42
latch: cache buffers chains 26 0.34 2.14
SQL*Net break/reset to client 2 0.05 0.05
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 79.99 79.99
SQL*Net message to dblink 1 0.00 0.00
SQL*Net message from dblink 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 7402 0.27 7.40 0 0 0 0
Fetch 7402 1.13 59.37 1663 22535 0 7335
total 14804 1.40 66.77 1663 22535 0 7335
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1663 1.37 57.71
1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: trace_file.trc
Trace file compatibility: 10.01.00
Sort options: exeela
1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ORADBA.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
84792 lines in trace file.
4152 elapsed seconds in trace file.
Edited by: agathya on Feb 26, 2010 8:39 PM
> I have a select query (which refers to views and calls a function), which fetches results in 2 secs when executed from database. But takes more than 10 mins from the client.
You are providing proof for the latter part of your statement above,
but not for the former part (fetches in 2 secs when executed from the db).
It would have been nice if you had also provided the SQL trace information for that.
Without it we cannot help you much, other than making the observation that you obviously have a query that is I/O bound, and that I/O on your system is rather slow: on average an I/O takes 0.04 seconds (66.77 divided by 1663). -
Hi Friends,
SELECT
DATEPART(YEAR, SaleDate) AS [PrevYear],
DATENAME(MONTH, SaleDate) AS [PrevMonth],
SaleDate as SaleDate,
Sum(Amount) as PrevAmount
FROM TableA A
WHERE SaleDate >= DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0)
AND SaleDate <= DATEADD(dd, -1, DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0))
-- i.e. '2013-12-31 00:00:00.000'
GROUP BY
SaleDate
This query takes more than 2 min to pull the results... basically I am passing last year's first date and last date (which should be derived from GETDATE()).
If I pass static values like this:
WHERE SaleDate >= '2013-01-01 00:00:00.000'
AND SaleDate <= '2013-12-31 00:00:00.000'
then it pulls results in a fraction of a second...
Note: I am keeping this code in a view and I have to use only a view (I know we can write a stored procedure for this, but I don't want an SP; I need only a view).
Any ideas on how to improve my view's performance?
Thanks,
RK
Do you have an index on the SaleDate column? If so, is it nonclustered (NCI) or clustered (CI)? How much data does the query return? Can you show an execution plan of the query?
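Not the poster's actual fix, just one thing worth checking while gathering that information: with computed boundaries the optimizer can estimate cardinality poorly, and the inclusive upper bound built with DATEADD(dd, -1, ...) excludes everything after midnight on Dec 31. A half-open range is a common way to express "all of last year" (a sketch, reusing the posted column and table names):

```sql
-- All of the previous calendar year as a half-open range:
-- [Jan 1 of last year, Jan 1 of this year)
SELECT DATEPART(YEAR, SaleDate)  AS [PrevYear],
       DATENAME(MONTH, SaleDate) AS [PrevMonth],
       SaleDate,
       SUM(Amount)               AS PrevAmount
FROM TableA
WHERE SaleDate >= DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0)
  AND SaleDate <  DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0)
GROUP BY SaleDate;
```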
Best Regards, Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Query to retrieve the records which have more than one assignment_id
Hello,
I am trying to write a query to retrieve all the records from the table per_all_assignments_f which have more than one distinct assignment_id for each person_id. Below is the query I have written, but it retrieves records even if a person_id has duplicate assignment_ids; I need the records which have more than one distinct assignment_id, with no duplicates, for each person_id.
select person_id, assignment_id
From per_all_assignments_f
having count(assignment_id) >1
group by person_id, assignment_id
Thank You.
PK
Maybe something like this?
select *
From per_all_assignments_f f1
where exists (select 1
from per_all_assignments_f f2
where f2.person_id = f1.person_id
and f2.assignment_id != f1.assignment_id
);
Edited by: SomeoneElse on May 7, 2010 2:23 PM
(you can add a DISTINCT to the outer query if you need to) -
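An alternative sketch of the same idea, not from the thread, using a grouped distinct count with the posted column names. Because the count is over distinct assignment_ids, duplicate rows for the same assignment do not trigger a false match:

```sql
-- person_ids that have more than one distinct assignment_id
SELECT person_id
FROM per_all_assignments_f
GROUP BY person_id
HAVING COUNT(DISTINCT assignment_id) > 1;
```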
When recording a video demo slide in Captivate 8, pressing the [END] key takes more than a minute to stop the recording. Why is there such a lag? Additionally, the software I record experiences terrible lag. For instance, it may take 15 seconds to display a 2-digit number I typed into a blank field; the same software has no lag issues when I'm not recording it.
Yes and no. The CPTX file is on my PC. The software that I'm making the demo of is on the network, but I'm not the "IT guy" so I can't provide specifics; however, I work at a computer software company and we probably have more servers than employees. The network hardware is kept current. I've been making similar tutorials for 10 years and hadn't run into this issue until switching to the subscription Captivate.