Query output takes much time to download
Hi Experts,
I am running a query in SQ01 that returns about 40,000 rows. The query itself executes very quickly, but downloading the output to a local file takes a long time. Initially it raised a runtime error; after I increased rdisp/max_wprun_time to 1200 it saves to the local file, but the download still takes about 30 minutes.
So, can anybody tell me if there is any way to resolve this issue?
Note: The system performance is excellent and this is a QAS system
Regards,
Nilesh
Hi,
You can't really do much here, except trying to download the output in another format, or splitting your query so it fetches a smaller number of records.
Hope this helps.
Regards,
Varun
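Varun's suggestion of fetching fewer records at a time can be illustrated outside SAP as well. The snippet below is a hypothetical Python sketch (SQ01 itself is a GUI transaction, so none of these names come from SAP): it streams a large result set to a file in fixed-size chunks, so memory stays bounded no matter how many rows the query returns.

```python
import csv
import os
import sqlite3
import tempfile

# Demo data: an in-memory table standing in for the 40,000-row query result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE result (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO result VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(40000)])

def export_in_chunks(conn, path, chunk_size=5000):
    """Stream the result set to a CSV file chunk by chunk instead of
    materializing all rows at once."""
    cur = conn.execute("SELECT id, val FROM result ORDER BY id")
    written = 0
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "val"])
        while True:
            rows = cur.fetchmany(chunk_size)   # fetch a bounded batch
            if not rows:
                break
            writer.writerows(rows)
            written += len(rows)
    return written

path = os.path.join(tempfile.gettempdir(), "sq01_export.csv")
print(export_in_chunks(conn, path))  # -> 40000
```

The same idea applies when splitting the SQ01 query itself: several smaller selections are cheaper to hold and transfer than one 40,000-row download.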
Similar Messages
-
Query takes much time when summing amounts on a yearly basis
I have built the query below using joins to get payroll data. It works fine, but when we accumulate the amounts on a yearly basis, passing from- and to-date parameters, it takes a long time. How can we optimise this?
Please advise. This is the query:
SELECT paa.assignment_id,MAX(EFFECTIVE_DATE) effective_date,
paypa.business_group_id,paypa.payroll_id,
nvl(SUM(decode(pet.element_type_id,10,decode(pivf.name,'Pay Value',TO_NUMBER(NVL(prrv.result_value, 0))))),0) AMOUNT
FROM pay_assignment_actions paa,
pay_payroll_actions paypa,
pay_run_results prr,
pay_element_types_f pet,
pay_element_classifications pec,
pay_run_result_values prrv,
pay_input_values_f pivf
where paypa.payroll_ACTION_id = paa.payroll_ACTION_id
AND prr.assignment_ACTION_id = paa.assignment_ACTION_id
AND paypa.action_status = 'C'
AND paa.action_status = 'C'
and paypa.action_type in ('Q','R')
AND pet.element_type_id = prr.element_type_id
AND pec.classification_id = pet.classification_id
AND pivf.input_value_id=prrv.input_value_id
AND prr.run_result_id = prrv.run_result_id
AND pivf.element_type_id = pet.element_type_id
AND paypa.effective_date BETWEEN pivf.effective_start_date AND pivf.effective_end_date
AND paypa.effective_date BETWEEN pet.effective_start_date AND pet.effective_end_date
AND paypa.effective_date between to_date('01-JUL-2010') AND TO_DATE('30-JUN-2011')
group by paa.assignment_id,paypa.business_group_id,paypa.payroll_id
Any idea how we can improve performance here? Although it works fine without the GROUP BY clause.
Edited by: oracle0282 on Mar 31, 2011 11:36 PM -
Hi all,
When I try to retrieve data from the outline using the OLAP Outline Extractor, it takes a long time and returns less data than expected.
I don't know what the reason is, whereas in razzza I get it in a fraction of a second, but that is not the format I am looking for.
Any help appreciated.
Regards
You may need to make some registry settings; have a look at this post, and there is a link to Tim's blog - Re: Extracting large dimension using outline extractor
Cheers
John
http://john-goodwin.blogspot.com/ -
Why it takes a long time to download updates for iPhone 4
Why does it take a long time to download updates for iPhone 4?
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
Why do you need this if you are going to Oracle? Try removing it. It might be trying things out with the old driver and then moving to the new driver.
This has nothing to do with it. (And it doesn't "try things out".)
>
Also check your machines. They might be slow. There is no reason it should take more than a few hundred milliseconds to connect to a database on localhost, as in your case.
Possible reasons...
- Network traffic
- Faulty router/gateway
- Busy server
- Faulty network card (either end)
- Conflict with another box -
MacBook Pro takes much time for Shutdown (More than 1 min.)
I have a MacBook Pro, and for some weeks it has been taking a long time to shut down.
I've tried installing the 10.5.2 update, but nothing has helped.
My Mac takes 1 min 30 seconds to shut down every time. I don't understand why.
What can I do? I tried repairing permissions and resetting the PMU, but the problem is still there.
Help me! Please!
Thanks
Sorry, but I'm Spanish, so it's possible that I've made mistakes writing in English.
Thanks. I tried disabling an EyeTV option, and now EyeConnect no longer appears in Activity Monitor. This is the result:
Now my MacBook Pro takes only 35 seconds to shut down. I think that's better than yesterday, but it's still more than the 5 seconds you described in your last post. Is 35 seconds a good time? During those 35 seconds I can only see the background image, without icons and without the Finder menu bar. After 35 seconds the Mac is off.
How can I disable iDisk sync? I have a free Mac account, but it has expired, so I can't use iDisk, only the name in iChat. But is iDisk still enabled? -
Analyze a Query which takes longer time in Production server with ST03 only
Hi,
I want to analyze a query that takes a long time in the production server, using the ST03 t-code only.
Please provide detailed steps on how to perform this with ST03.
ST03 - Expert mode - I need to know the steps after this. I have checked many threads, so please don't send me links.
Write the steps in detail, please.
<REMOVED BY MODERATOR>
Regards,
Sameer
Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM
Then please close the thread.
Greetings,
Blag. -
Problem : SELECT from LTAP table takes much time (Sort in Database layer)
Guys,
I'm having a problem with this SELECT statement. It takes a long time just to get a single record.
The problem is with accessing the LTAP table and the ORDER BY DESCENDING clause.
The objective of this SELECT statement is to get the non-blocked storage bin used by the latest transfer order number.
If the storage bin of the latest transfer order is blocked, it loops and gets the storage bin of the 2nd-latest transfer order and checks whether it is blocked. It keeps looping.
A secondary index has been created, but it is still taking a long time (3 minutes for 10K records in LTAP).
Secondary indexes:
a) LTAP_M -> MANDT, LGNUM, PQUIT, MATNR
b) LTAP_L -> LGNUM, PQUIT, VLTYP, VLPLA
Below is the coding.
******************Start of DEVK9A14JW**************************
SELECT ltap~tanum ltap~nlpla ltap~wdatu INTO (ltap-tanum, ltap-nlpla, ltap-wdatu)
  UP TO 1 ROWS
  FROM ltap INNER JOIN lagp                    "DEVK9A15OA
    ON lagp~lgnum = ltap~lgnum
   AND lagp~lgtyp = ltap~nltyp
   AND lagp~lgpla = ltap~nlpla
  WHERE lagp~skzue = ' '
    AND ltap~pquit = 'X'
    AND ltap~matnr = ls_9001_scrn-matnr
    AND ltap~lgort = ls_9001_scrn-to_lgort
    AND ltap~lgnum = ls_9001_scrn-lgnum
    AND ltap~nltyp = ls_9001_scrn-nltyp
  ORDER BY tanum DESCENDING.
ENDSELECT.
IF sy-subrc EQ 0.
  ls_9001_scrn-nlpla = ltap-nlpla.
  EXIT.
ENDIF.
******************End of DEVK9A14JW**************************
> I'm having a problem with this SELECT statement. It takes a long time just to get a single record.
This is not true. Together with the ORDER BY, the UP TO 1 ROWS does not read one record: the database prepares all matching records, sorts them, and returns one, i.e. the largest in sort order.
You must check what you need. If you need the largest record, then this may be your only possible solution.
If you need just any one record, then the ORDER BY does not make sense.
If you need the single largest record, then sometimes the aggregate function MAX can be an alternative.
I did not look at the index support, this can always be a problem.
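Siegfried's MAX alternative can be sketched on a toy database. SQLite stands in for the SAP database layer here, and the table and values are invented for illustration: both forms return the largest transfer order number, but the aggregate can be answered from the index in a single probe instead of sorting the whole candidate set.

```python
import sqlite3

# Toy stand-in for LTAP: 10,000 transfer orders with storage bins.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ltap_demo (tanum INTEGER, nlpla TEXT)")
conn.executemany("INSERT INTO ltap_demo VALUES (?, ?)",
                 [(i, f"BIN{i:04d}") for i in range(1, 10001)])
conn.execute("CREATE INDEX idx_tanum ON ltap_demo (tanum)")

# Variant 1: what the ABAP does - order everything, keep one row.
row1 = conn.execute(
    "SELECT tanum, nlpla FROM ltap_demo "
    "ORDER BY tanum DESC LIMIT 1").fetchone()

# Variant 2: aggregate MAX - same top value, cheaper access path.
row2 = conn.execute("SELECT MAX(tanum) FROM ltap_demo").fetchone()

print(row1)  # -> (10000, 'BIN10000')
print(row2)  # -> (10000,)
```

Note the trade-off Siegfried describes: MAX alone gives only the key, so if other columns of that row are needed (the storage bin here), the ORDER BY form, supported by a suitable index, may still be the right choice.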
Siegfried -
Primavera Contract Management takes much time when opening large PDF files.
Dears,
i have a big problem!
I made an integration between PCM and SharePoint 2010 and migrated the files from the file system to SharePoint.
The SharePoint database has reached 355 GB.
After that, unfortunately, when I try to open a large PDF attachment through PCM (Primavera Contract Management), it takes much longer than when it was opened from the file server.
I have tried everything, upgrading the RAM and processor, but the problem still exists.
Please help!
Edited by: 948060 on Sep 19, 2012 1:48 AM
We started storing attachments in 2007. All of these files have now been migrated to SharePoint 2010 in the staging environment.
But we face the performance issue mentioned above.
Large files (from 5 MB upwards) take a lot of time to open through PCM. -
Now when I put my phone into recovery mode, it connects to the iPhone software update server and the software starts to download, but after a long download error -39 occurs.
iTunes keeps showing a message that it is downloading software for my phone, but after downloading for several minutes it displays a message that iTunes cannot connect to my phone because it is locked with a passcode and the passcode must be entered first.
On the other hand, the phone starts with an on-screen message that the phone is disabled. -
Optimizing the query - which takes more time
Hi,
I have a query that was returning results pretty fast a week ago, but now the same query takes much longer to respond, and nothing much has changed in the table data. What could be the problem? I am using IN in the WHERE clause; could that be the issue? If so, what is the best way of rewriting the query?
SELECT RI.RESOURCE_NAME,TR.MSISDN,MAX(TR.ADDRESS1_GOOGLE) KEEP(DENSE_RANK LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE,
MAX(TR.TIME_STAMP) MSG_DATE_INFO FROM TRACKING_REPORT TR, RESOURCE_INFO RI
WHERE TR.MSISDN IN ( SELECT MSISDN FROM RESOURCE_INFO WHERE GROUP_ID ='4'
AND COM_ID='12') AND RI.MSISDN = TR.MSISDN
GROUP BY RI.RESOURCE_NAME, TR.MSISDN ORDER BY MSG_DATE_INFO DESC
Hi,
I have followed this link http://www.lorentzcenter.nl/awcourse/oracle/server.920/a96533/sqltrace.htm to enable the trace and obtained the trace output below. Can you explain the problem here and its remedial action, please?
SELECT RI.RESOURCE_NAME,TR.MSISDN,MAX(TR.ADDRESS1_GOOGLE) KEEP(DENSE_RANK
LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE, MAX(TR.TIME_STAMP)
MSG_DATE_INFO
FROM
TRACKING_REPORT TR, RESOURCE_INFO RI WHERE RI.GROUP_ID ='426' AND
RI.COM_ID='122' AND RI.MSISDN = TR.MSISDN GROUP BY RI.RESOURCE_NAME,
TR.MSISDN
call     count    cpu  elapsed   disk   query  current   rows
Parse        1   0.01     0.02      0       0        0      0
Execute      1   0.00     0.00      0       0        0      0
Fetch        6  13.69   389.03  81747  280722        0     72
total        8  13.70   389.05  81747  280722        0     72
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 281
Rows Row Source Operation
72 SORT GROUP BY
276558 NESTED LOOPS
79 TABLE ACCESS FULL RESOURCE_INFO
276558 TABLE ACCESS BY INDEX ROWID TRACKING_REPORT
276558 INDEX RANGE SCAN TR_INDX_ON_MSISDN_TIME (object id 60507)
********************************************************************************
and the PLAN_TABLE output is:
PLAN_TABLE output (timestamp 23-Mar-11 23:36:45; empty columns omitted for readability):

id 0  SELECT STATEMENT (CHOOSE)                                    cost 115, card 1058, bytes 111090
id 1    SORT GROUP BY                                              parent 0, cost 115, card 1058, bytes 111090
id 2      NESTED LOOPS                                             parent 1, cost 9, card 4603, bytes 483315
id 3        TABLE ACCESS FULL BSNL_RTMS.RESOURCE_INFO (ANALYZED)   parent 2, cost 8, card 1, bytes 30
              filter: "RI"."GROUP_ID"=426 AND "RI"."COM_ID"='122'
id 4        TABLE ACCESS BY INDEX ROWID BSNL_RTMS.TRACKING_REPORT (ANALYZED)  parent 2, cost 1, card 3293, bytes 246975
id 5          INDEX RANGE SCAN BSNL_RTMS.TR_INDX_ON_MSISDN_TIME (NON-UNIQUE)  parent 4, cost 1, card 3293
                access: "RI"."MSISDN"="TR"."MSISDN" -
SQL query taking too much time to execute
Hi everybody,
I am trying to JOIN two tables using the following criterion:
GL_JE_LINES.GL_SL_LINK_ID = CST_AE_LINES.GL_SL_LINK_ID
but it takes too much time, like 1 hour or more,
which means that something is wrong with my logic...
Any guidance will be appreciated.
Thanks
Hi,
Do you run "Gather Schema Statistics" program on regular basis?
What is the query you are trying to run?
Please see the following threads.
When your query takes too long ...
Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting
Regards,
Hussein -
Query execution takes long time
Hi All,
I have one critical problem in my production system.
I have three sales-related queries in the production system, and when I try to execute them in the BEx Analyzer (in Microsoft Excel) they take too much time and finally raise the error "Time Limit Exceeded".
Actually, we created these three queries on one InfoSet, and that InfoSet contains three DSOs and two master data objects.
Please give me the proper solution and help me to solve this production problem.
Dear James,
First, put some filter conditions on the query to restrict it to a smaller volume of data; from the message it is evident that you may be trying to fetch a large volume of data. Then execute the query once in RSRT and try to find the solution there; you can get all the statistics regarding the query. If you still can't find it, let me know the message you are getting in RSRT, and we can give a viable solution.
I hope you are aware of all the options regarding RSRT.
Assign points if it helps.
Thanks & Regards,
Ashok. -
When we start the laptop, it takes so much time to boot..
Dear Customer,
Welcome and Thank You for posting your query on HP Support Forum
You can try the steps mentioned below:
Step 01. Click on the Start Button and Click on "Run"
Step 02. Please type "temp" [Without Quotes] and press Enter
Step 03. In the window which comes up, we need to delete all the files and folders
Note: You cannot delete 2 or 3 files or folders, you can skip that
Step 04. Again click on the Start Button and Click on "Run"
Step 05. Please type "%temp%" [Without Quotes] and press Enter
Step 06. In the window which comes up, we need to delete all the files and folders
Note: You cannot delete 2 or 3 files or folders, you can skip that
Step 07. Click on the Start Button and Click on "Run"
Step 08. Please type "prefetch"[Without Quotes] and press Enter
Step 09. In the window which comes up, we need to delete all the files and folders
Note: You cannot delete 2 or 3 files or folders, you can skip that
Step 10. On your desktop, Please right click on "Recycle Bin" Icon and Click on Empty Recycle Bin.
Click Yes if you get a prompt to delete everything permanently
Step 11. Click on the Start Button and open the Control Panel
Step 12. Please open "Programs and Features / Add or Remove Programs"
Step 13. Please uninstall any unwanted software from here
Note: If your Notebook prompts to reboot/restart when you uninstall, go ahead with it.
Once your Notebook comes back, please continue the troubleshooting again
Step 14. Please Turn OFF the Notebook
Step 15. Un-plug the Power/AC Adapter and also remove the battery
Step 16. Press and Hold the Power Button on the Notebook for a full minute
Step 17. Now re-insert the battery and plug the Power/AC Adapter back in
Step 18. Start the Notebook and now reset the Internet Explorer browser
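For illustration only, the temp-cleanup part of the steps above (steps 1-9) can be sketched as a small script; nothing here comes from HP, and the try/except mirrors the note that a few in-use files cannot be deleted and should simply be skipped.

```python
import pathlib
import shutil
import tempfile

def clear_temp(temp_dir=None):
    """Delete everything in the given temp directory, skipping entries
    that are locked or in use (the "you cannot delete 2 or 3 files"
    note in the steps above)."""
    temp_dir = pathlib.Path(temp_dir or tempfile.gettempdir())
    removed = skipped = 0
    for entry in temp_dir.iterdir():
        try:
            if entry.is_dir():
                shutil.rmtree(entry)   # remove whole folder tree
            else:
                entry.unlink()         # remove single file
            removed += 1
        except OSError:                # locked / in use - skip it
            skipped += 1
    return removed, skipped
```

Pointing it at a sandbox directory rather than the real %temp% is the safe way to try it out.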
Hope this helps, for any further queries reply to the post and feel free to join us again
**Click the White Thumbs Up Button on the right to say Thanks**
Make it easier for other people to find solutions by marking a Reply 'Accept as Solution' if it solves your problem.
Thank You,
K N R K
Although I am an HP employee, I am speaking for myself and not for HP -
Query Prediction takes long time - After upgrade DB 9i to 10g
Hi all, Thanks for all your help.
we've got an issue in Discoverer, we are using Discoverer10g (10.1.2.2) with APPS and recently we upgraded Oracle DatBase from 9i to 10g.
After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes much longer than it used to (double or triple the time), and only then does the query itself run.
Has anyone seen this kind of issue before? Could you share your ideas/thoughts so that I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
Thanks in advance
skat
Hi skat
Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle-tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do, then this will also cause poor performance. Should you see this, let me know and I will tell you how to override it.
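As a sketch only, the PREF.TXT edit described above could be automated like this. The helper name is hypothetical; only the QPPEnable key comes from the post, and in practice you would still run applypreferences and bounce the service afterwards.

```python
import pathlib
import re

def disable_query_prediction(pref_path):
    """Rewrite any 'QPPEnable = <n>' line in PREF.TXT to 0.
    Returns the number of lines changed (0 means the key was absent)."""
    p = pathlib.Path(pref_path)
    text = p.read_text()
    # Match the key at the start of a line, keep the 'QPPEnable = ' part,
    # and replace only the numeric value.
    new_text, n = re.subn(r"(?m)^(\s*QPPEnable\s*=\s*)\d+", r"\g<1>0", text)
    if n:
        p.write_text(new_text)
    return n
```

Checking the return value guards against silently editing a file that never contained the key.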
If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
Best wishes
Michael -
Query Consuming too much time.
Hi,
I am using Oracle Release 10.2.0.4.0. I have a query that takes too much time (~7 minutes) for an indexed read. Please help me understand the reason and a workaround for it.
SELECT *
  FROM a, b
 WHERE a.xdt_docownerpaypk = b.paypk
   AND a.xdt_doctype = 'PURCHASEORDER'
   AND b.companypk = 1202829117
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY a.xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 1 | 907 |00:06:45.83 | 66716 | 60047 | 478K| 448K| 424K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID | a | 1 | 1 | 907 |00:06:45.82 | 66716 | 60047 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 6977 |00:06:45.64 | 60045 | 60030 | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| b | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
|* 5 | INDEX RANGE SCAN | IDX_PAYIDENTITYCOMPANY | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
|* 6 | INDEX RANGE SCAN | IDX_XDT_N7 | 1 | 3438 | 6975 |00:06:45.64 | 60041 | 60030 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
5 - access("b"."COMPANYPK"=1202829117)
6 - access("XDT_DOCTYPE"='PURCHASEORDER' AND "a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
filter("a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
32 rows selected.
index 'idx_xdt_n7' is on (xdt_doctype,action_date,xdt_docownerpaypk).
index idx_xdt_n7 details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 24020933 69871000
But when I derive the exact value of paypk from table b and apply it to the query, it uses another index (idx_xdt_n4), which is on (month, year, xdt_docownerpaypk, xdt_doctype, action_date),
and completes within ~17 seconds. Below are the query/plan details.
SELECT *
  FROM a
 WHERE a.xdt_docownerpaypk = 1202829132
   AND xdt_doctype = 'PURCHASEORDER'
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | 337K| 337K| 299K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID| a | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | | | |
|* 3 | INDEX SKIP SCAN | IDX_XDT_N4 | 1 | 38329 | 6975 |00:00:02.08 | 330 | 321 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
3 - access("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER')
filter(("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER'))
index idx_xdt_n4 details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 23942833 70224133
Edited by: 930254 on Apr 26, 2013 5:04 AM
The first query uses the predicate "XDT_DOCTYPE"='PURCHASEORDER' to determine the range of the index IDX_XDT_N7 that has to be scanned, and uses the other predicates to filter out most of the index blocks. The second query uses an INDEX SKIP SCAN, ignoring the first column of the index IDX_XDT_N4 and using the predicates on the following columns ("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER') to get much more selective access (reading only 330 blocks instead of more than 60K).
I think there are two possible options to improve the performance:
1. If creating a new index is an option you could define an index on table A(xdt_doctype, xdt_docownerpaypk, xdt_createdt)
2. If creating a new index is not an option, you could use an INDEX SKIP SCAN hint (INDEX_SS(A IDX_XDT_N4)) to tell the CBO to use the second index (without a hint, the CBO tends to ignore the option of a SKIP SCAN in an NL join). But using hints in production is rarely a good idea... In 11g you could use SQL plan baselines to avoid such hints in the code.
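Martin's first option, a composite index matching the query's predicates, can be demonstrated on a toy table. SQLite is used here as a stand-in for Oracle and the table is reduced to the three relevant columns with invented data, so the plan text differs from Oracle's, but the access-path idea carries over: with all three predicate columns leading the index, the optimizer can search the index directly instead of skip-scanning or filtering.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE a (
    xdt_doctype TEXT, xdt_docownerpaypk INTEGER, xdt_createdt TEXT)""")
# Invented rows: 5,000 purchase orders spread over 100 owners / 12 months.
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
    [("PURCHASEORDER", i % 100, f"2009-{(i % 12) + 1:02d}-15")
     for i in range(5000)])

# Martin's suggested composite index: doctype, owner, create date.
conn.execute("""CREATE INDEX idx_new
    ON a (xdt_doctype, xdt_docownerpaypk, xdt_createdt)""")

plan = conn.execute("""EXPLAIN QUERY PLAN
    SELECT * FROM a
    WHERE xdt_doctype = 'PURCHASEORDER'
      AND xdt_docownerpaypk = 42
      AND xdt_createdt BETWEEN '2009-07-01' AND '2010-01-01'
""").fetchall()
# The plan's detail column should report a SEARCH using idx_new,
# with equality on the first two columns and a range on the third.
print(plan[0][3])
```

The same predicate shape is why option 1 helps in Oracle: equality columns first, the range column last.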
Regards
Martin