Export to xls from table is taking too long (Around 20 min)
Hi all,
I am a new member of this forum. I have a table with 80 columns and 20,000 records, and it has a primary key column. When I try to export the table data from the data grid in Toad, it takes 15-20 minutes. When I try to export from my front-end Java application, it times out. When I query the table I get results in 3 seconds, but the export takes far longer.
Can anyone please suggest a way to export the data in under a minute?
My table structure is:
COL1 NUMBER
COL2 VARCHAR2 (150 Byte)
COL3 DATE
COL4 DATE
COL5 NUMBER
COL6 NUMBER
COL7 VARCHAR2 (6 Byte)
COL8 NUMBER
COL9 NUMBER
COL10 NUMBER
COL11 NUMBER
COL12 NUMBER
COL13 NUMBER
COL14 NUMBER
COL15 CHAR (1 Byte)
COL16 CHAR (1 Byte)
COL17 NUMBER
COL18 NUMBER
COL19 NUMBER
COL20 NUMBER
COL21 NUMBER
COL22 NUMBER
COL23 VARCHAR2 (200 Byte)
COL24 VARCHAR2 (200 Byte)
COL25 VARCHAR2 (200 Byte)
COL26 VARCHAR2 (200 Byte)
COL27 VARCHAR2 (200 Byte)
COL28 VARCHAR2 (200 Byte)
COL29 NUMBER
COL30 VARCHAR2 (200 Byte)
COL31 NUMBER
COL32 NUMBER
COL33 NUMBER
COL34 NUMBER
COL35 NUMBER
COL36 NUMBER
COL37 NUMBER
COL38 NUMBER
COL39 NUMBER
COL40 NUMBER
COL41 DATE
COL42 VARCHAR2 (150 Byte)
COL43 DATE
COL44 VARCHAR2 (150 Byte)
COL45 NUMBER
COL46 VARCHAR2 (50 Byte)
COL47 VARCHAR2 (50 Byte)
COL48 NUMBER
COL49 NUMBER
COL50 CHAR (1 Byte)
COL51 CHAR (1 Byte)
COL52 VARCHAR2 (150 Byte)
COL53 CHAR (1 Byte)
COL54 VARCHAR2 (150 Byte)
COL55 NUMBER
COL56 NUMBER
COL57 VARCHAR2 (250 Byte)
COL58 CHAR (1 Byte)
COL59 CHAR (1 Byte)
COL60 CHAR (1 Byte)
COL61 CHAR (1 Byte)
COL62 CHAR (1 Byte)
COL63 CHAR (1 Byte)
COL64 CHAR (1 Byte)
COL65 CHAR (1 Byte)
COL66 CHAR (1 Byte)
COL67 CHAR (1 Byte)
COL68 CHAR (1 Byte)
COL69 CHAR (1 Byte)
COL70 CHAR (1 Byte)
COL71 CHAR (1 Byte)
COL72 NUMBER
COL73 NUMBER
COL74 NUMBER
COL75 NUMBER
COL76 NUMBER
COL77 CHAR (1 Byte)
COL78 CHAR (1 Byte)
COL79 CHAR (1 Byte)
Hi,
I got it. But this export happens from the front-end Java application, which I am simulating in Toad on the back end. Could you please suggest any architecture-level approach to take when creating the table? Once again, thanks for your input.
Regards,
Pavan.
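For reference, one commonly suggested workaround is to bypass the GUI grid entirely and spool the table straight to CSV from SQL*Plus, which Excel opens directly. This is a sketch only: SET MARKUP CSV ON requires SQL*Plus 12.2 or later, and my_table stands in for the real table name.
SET MARKUP CSV ON
SET TERMOUT OFF FEEDBACK OFF PAGESIZE 0
SET ARRAYSIZE 5000      -- fetch rows in large batches to cut network round trips
SPOOL table_export.csv
SELECT * FROM my_table;
SPOOL OFF
The same idea applies to the Java front end: raising the JDBC fetch size (Oracle's default is only 10 rows per round trip) usually helps bulk extracts far more than any change to the table structure.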
Similar Messages
-
Getting data from table BSEG is taking too long ... any solutions?
Hello people. I am currently trying to get data from table BSEG for one particular G/L account number, with restrictions, using FOR ALL ENTRIES.
The problem is that even with such tight restrictions it causes my report program to run far too slowly. I added an option that skips table BSEG entirely, and with it the report runs just fine (all of this is on the PRD server).
My questions are:
1.) How come BSEG makes the report slow, even though I put tight restrictions on it? I am using FOR ALL ENTRIES where ZUONR EQ i_tab-zuonr (it seems to work fine in DEV) and HKONT EQ '0020103101' (Customer Deposits).
2.) Is there a way for me to do the same thing as in #1, only much faster?
Thanks guys and take care.
Hi,
It is better not to read the BSEG table unless you have the keys BUKRS and BELNR, because the read can take a long time if there are many hits.
If you want to find the records of a G/L account, it is better to read the index tables BSIS (for open items) and BSAS (for cleared items); there the field HKONT is a key (and ZUONR too). So you can improve the performance:
DATA: T_ITEMS LIKE STANDARD TABLE OF BSIS.
SELECT * FROM BSAS INTO TABLE T_ITEMS
  FOR ALL ENTRIES IN I_ITAB
  WHERE BUKRS = <BUKRS>
    AND HKONT = '0020103101'
    AND ZUONR = I_ITAB-ZUONR.
SELECT * FROM BSIS APPENDING TABLE T_ITEMS
  FOR ALL ENTRIES IN I_ITAB
  WHERE BUKRS = <BUKRS>
    AND HKONT = '0020103101'
    AND ZUONR = I_ITAB-ZUONR.
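(One caveat worth adding: FOR ALL ENTRIES ignores the entire WHERE clause when the driver table is empty, so check that I_ITAB is not initial before running these SELECTs.)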
Remember, every kind of item has its own index tables:
- BSIS/BSAS for G/L Accounts
- BSIK/BSAK for Vendors
- BSID/BSAD for Customers
These tables contain the same information you can get from BSEG and BKPF.
Max -
Create table query taking too long..
Hello experts...
I am taking a backup of table A, which consists of 135 million records.
For this I am using the query below:
create table tableA_bkup as select * from tableA;
It has been running for more than an hour and is still going.
Is there any faster way to do this?
Thanks in advance.
CTAS is one of the fastest ways to do such a thing.
Remember that you are duplicating data. This means that if your table holds 50 GB of data, those 50 GB have to be copied.
A different way could be to use EXPDP to create a dump file from this table's data. However, I'm not sure there is a big performance difference.
Both approaches could benefit from parallel execution.
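For example, a minimal sketch of a parallel, NOLOGGING CTAS (assuming Enterprise Edition with parallel query available; the degree of 8 is arbitrary):
ALTER SESSION ENABLE PARALLEL DDL;
CREATE TABLE tableA_bkup NOLOGGING PARALLEL 8
AS SELECT /*+ PARALLEL(t 8) */ * FROM tableA t;
With NOLOGGING, remember to take a backup afterwards, since the new segment is not recoverable from the redo stream. -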
In boot camp assistance download the latest windows support from apple is taking too long
I am trying to download the latest windows support software from apple in Boot Camp Assistance and is taking for ever. Any suggestions?
In this thread https://discussions.apple.com/message/16605226#16605226 there's a link to download the Boot Camp 4.0 driver package.
-
Hello Friends,
The background: I work as a conversion manager. We move data from Oracle to SQL Server using SSMA, apply the conversion logic, and then move the data on to system test, UAT, and production.
Scenario:
Moving 80 million records from the conversion database to the system test database (just one transaction table) is taking too long. Both databases are on the same server.
My questions are:
What is the best option?
If we use SSIS it is very slow, taking 17 hours (sometimes it gets stuck and won't let us do anything else).
Using my own script (a stored procedure) it takes only 1 hour 40 minutes. I would like to know whether there is a better process to speed this up, and why SSIS takes so long.
When we move data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing to the transaction log?
Thanks
Karthikeyan Jothi
http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
Processing hundreds of millions of records can be done in less than an hour.
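On the commit question: by default each T-SQL statement commits on its own, and SSIS commit behavior depends on the destination's batch and commit-size settings. As a rough sketch (table and column names here are invented for illustration), a batched copy that commits per chunk and keeps the transaction log small could look like this:
DECLARE @batch INT = 100000, @moved INT = 1;
WHILE @moved > 0
BEGIN
    INSERT INTO dbo.txn_target (id_txn, payload)
    SELECT TOP (@batch) s.id_txn, s.payload
    FROM dbo.txn_source AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.txn_target AS t
                      WHERE t.id_txn = s.id_txn);
    SET @moved = @@ROWCOUNT;  -- in autocommit mode each INSERT is its own transaction
END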
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/ -
Importing a table with a BLOB column is taking too long
I am importing a user schema from a 9i (9.2.0.6) database into a 10g (10.2.1.0) database. One of the large tables (millions of records), which has a BLOB column, is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
1 - set buffer to 500 Mb
2 - pre-created the table and turned off logging
3 - set indexes=N
4 - set constraints=N
5 - I have 10 online redo logs with 200 MB each
6 - Even turned off logging at the database level with disablelogging = true
It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
For your info:
Computer: Sun v490 with 16 CPUs, solaris 10
memory: 10 Gigabytes
SGA: 4 Gigabytes
Legatti,
I have feedback=10000. However, by monitoring the import I can see that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
Thanks for your reply.
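One detail worth checking when the table is pre-created: NOLOGGING on the table does not cover the LOB segment, which has its own storage clause. A sketch with hypothetical names:
CREATE TABLE pdf_docs (
  id  NUMBER,
  doc BLOB
)
NOLOGGING
LOB (doc) STORE AS (NOCACHE NOLOGGING CHUNK 32768);
Without the LOB-level clause, every PDF is still written through the buffer cache and redo, which could explain a slow, steady load rate like the one described above. -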
Data Archive Script is taking too long to delete a large table
Hi All,
We have data archive scripts that move data for a date range to a different table. Each script has two parts: first it copies data from the original table to the archive table, then it deletes the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate is the primary key itself. Please help... More info below
CREATE TABLE "APP"."MON_TXNS"
( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
"BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_PAYER" NUMBER(12,0),
"ID_PAYER_PI" NUMBER(12,0),
"ID_PAYEE" NUMBER(12,0),
"ID_PAYEE_PI" NUMBER(12,0),
"ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
"STR_TEXT" VARCHAR2(60 CHAR),
"DAT_MERCHANT_TIMESTAMP" DATE,
"STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
"DAT_EXPIRATION" DATE,
"DAT_CREATION" DATE,
"STR_USER_CREATION" VARCHAR2(30 CHAR),
"DAT_LAST_UPDATE" DATE,
"STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
"STR_OTP" CHAR(6 BYTE),
"ID_AUTH_METHOD_PAYER" NUMBER(1,0),
"AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
"BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
"ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ENABLE,
CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
Data is first moved to a table schema3.OTW, and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
2 delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2798378986
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 2520 | 233K| 87 (2)| 00:00:02 |
| 1 | DELETE | MON_TXNS | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 2520 | 233K| 87 (2)| 00:00:02 |
| 3 | INDEX FAST FULL SCAN| OTW_ID_TXN | 2520 | 15120 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | MON_TXNS | 14260 | 1239K| 83 (0)| 00:00:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
Please help,
thanks,
Banka Ravi
'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
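As a rough sketch of the partitioning idea (names and dates are hypothetical, and interval partitioning requires 11g or later), aging out a month of data then becomes a metadata operation instead of a DELETE:
CREATE TABLE mon_txns_part
PARTITION BY RANGE (dat_creation)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01') )
AS SELECT * FROM app.mon_txns;

-- dropping a month of old data is a quick metadata operation:
ALTER TABLE mon_txns_part DROP PARTITION FOR (DATE '2012-03-15');
-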
I am using a 3015 network printer. While printing from Word it is taking too long to print
Respected sir,
I am using a 3015 printer. Whenever I print, it takes too long, nearly 1 minute. There is no problem with the network.
Let's set a static IP address for the printer:
- Print a Network Config Page from the front of the printer. Note the printer's IP address.
- Type that IP address into a browser to reveal the printer's internal settings.
- Choose the Networking tab, then Wireless along the left side, then the IPv4 tab.
- On this screen you want to set a Manual IP. You need to set an IP address outside the range that the router automatically sets (called the DHCP range). You can find the DHCP range of the router using its internal settings page or in its manual. Use the CD that came with your router or type the router's IP address (ends in .1) into a browser.
- Use 255.255.255.0 for the subnet (unless you know it is different, if so, use that)
- Enter your router's IP (on the Network Config Page) for the gateway and first DNS. Leave the second one blank.
- Click 'Apply'.
Now, shut down the router and printer, start the router, wait, then start the printer.
After this you may need to redo 'Add a Printer' using the new IP address.
I am employed by HP. -
I tried to erase all content and reset settings from the device option. It is taking too long; it has been 15 hours and it is not yet done. It is showing a skeleton sign with buffering and sometimes flipping to the Apple sign.
nabidhadi wrote:
It's showing a skeleton sign with buffering and sometimes flipping to the Apple sign.
Your iPod has been jailbroken. Unfortunately you are on your own here. I can offer that you need to find out which method and software were used to jailbreak it.
Good luck... you're gonna need it... -
Export PDF taking too long to upload to Adobe export online. Any suggestions?
I just recently purchased Adobe ExportPDF and I am trying to convert a PDF to Word, but it is taking too long even after opening it multiple times.
Hi there Huapopicante,
How large/complex is the file that you're trying to convert? It may be that the file is too complex for the ExportPDF service to convert before timing out. Here are a few things that you can try, however:
Clear the browser cache and try again.
Make sure that you're using a supported web browser (http://www.adobe.com/acom/systemreqs/)
Try a different browser.
Make sure that there are no firewall/proxy settings that are limiting your access to the Internet.
Convert the PDF file from within Reader with OCR disabled, as described in this document: How to disable Optical Character Recognition (OCR) when converting PDF to Word or Excel.
Please let us know how it goes.
Best,
Sara -
iOS 7 taking too long to download from iTunes
I am an iPad mini user. I am trying to download and update to the iOS 7 software, but it is taking too long (10 hours) and shows no sign of finishing. So what is the solution to this?
It should not take so long.
Try resetting the iPad and work from there.
Hold down the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears
Note: Data will not be affected. -
Recently I noticed that no sound was coming from my iPhone 4S, so I needed to restore it to how it was earlier. I decided to do it by restoring without iTunes. Now it is taking too long to restore; it has been going on for 3 hours. Can anybody tell me what problem I am facing?
Which software did you use to restore it, if you didn't go with iTunes?
-
I can't view my website at www.artisancandies.com, even though it's working and everyone else seems to see it. No, I don't have a firewall, and it's not because of my internet provider - I have AT&T at work, and Comcast at home. My husband can see the site on his laptop. I tried dumping my cache in both Firefox and Safari, but it didn't work. I looked at it through proxify.com, and can see it that way, so I know it works. This is so frustrating, because I used to only see it when I typed in artisancandies.com - it would never work for me if I typed in www.artisancandies.com. Now it doesn't work at all. This is the message I get in Firefox:
"The connection has timed out. The server at www.artisancandies.com is taking too long to respond."
Please help!!!
Kristen Scott
Linc, here's what I've got from what you asked me to do. I hope you don't mind, but it was simple enough to leave everything in, so you could see the progression:
Kristen-Scotts-Computer:~ kristenscott$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
Kristen-Scotts-Computer:~ kristenscott$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
WARNING: Improper use of the sudo command could lead to data loss
or the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.
To proceed, enter your password, or type Ctrl-C to abort.
Password:
com.microsoft.office.licensing.helper
com.google.keystone.daemon
com.adobe.versioncueCS3
Kristen-Scotts-Computer:~ kristenscott$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
com.google.keystone.root.agent
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
Kristen-Scotts-Computer:~ kristenscott$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
/Library/Components:
/Library/Extensions:
/Library/Frameworks:
Adobe AIR.framework
NyxAudioAnalysis.framework
PluginManager.framework
iLifeFaceRecognition.framework
iLifeKit.framework
iLifePageLayout.framework
iLifeSQLAccess.framework
iLifeSlideshow.framework
/Library/Input Methods:
/Library/Internet Plug-Ins:
AdobePDFViewer.plugin
Disabled Plug-Ins
Flash Player.plugin
Flip4Mac WMV Plugin.plugin
Flip4Mac WMV Plugin.webplugin
Google Earth Web Plug-in.plugin
JavaPlugin2_NPAPI.plugin
JavaPluginCocoa.bundle
Musicnotes.plugin
NP-PPC-Dir-Shockwave
Quartz Composer.webplugin
QuickTime Plugin.plugin
Scorch.plugin
SharePointBrowserPlugin.plugin
SharePointWebKitPlugin.webplugin
flashplayer.xpt
googletalkbrowserplugin.plugin
iPhotoPhotocast.plugin
npgtpo3dautoplugin.plugin
nsIQTScriptablePlugin.xpt
/Library/LaunchAgents:
com.google.keystone.agent.plist
/Library/LaunchDaemons:
com.adobe.versioncueCS3.plist
com.apple.third_party_32b_kext_logger.plist
com.google.keystone.daemon.plist
com.microsoft.office.licensing.helper.plist
/Library/PreferencePanes:
Flash Player.prefPane
Flip4Mac WMV.prefPane
VersionCue.prefPane
VersionCueCS3.prefPane
/Library/PrivilegedHelperTools:
com.microsoft.office.licensing.helper
/Library/QuickLook:
GBQLGenerator.qlgenerator
iWork.qlgenerator
/Library/QuickTime:
AppleIntermediateCodec.component
AppleMPEG2Codec.component
Flip4Mac WMV Export.component
Flip4Mac WMV Import.component
Google Camera Adapter 0.component
Google Camera Adapter 1.component
/Library/ScriptingAdditions:
Adobe Unit Types
Adobe Unit Types.osax
/Library/StartupItems:
AdobeVersionCue
HP Trap Monitor
Library/Address Book Plug-Ins:
SkypeABDialer.bundle
SkypeABSMS.bundle
Library/Internet Plug-Ins:
Move_Media_Player.plugin
fbplugin_1_0_1.plugin
Library/LaunchAgents:
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
com.apple.FolderActions.enabled.plist
com.apple.FolderActions.folders.plist
Library/PreferencePanes:
A Better Finder Preferences.prefPane
Kristen-Scotts-Computer:~ kristenscott$ -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_rs_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.
By the looks of it the update is going to take 6 hrs!!!
ext_names_csv is an external table that has the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)
Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM
Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the query.
We started with Etbin's idea to create a regular table from the external table, so that we could index and reference it more easily, and did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
PL/SQL procedure successfully completed.
We had a look at the plan with the original SQL query:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXTT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.
As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (MERGE); we explained the plan for the new query and here are the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.
As you can see, the update now takes 00:00:17 to run (need to say more?) :)
The difference is that the correlated subquery probed EXT_PERSON once per row of TEMP_PERSON, while the MERGE reads each table once and joins them with a single hash join.
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
Query is taking too long to execute - contd
I am unable to post the entire explain plan in one post as it exceeds maximum length.
Please advise on how to post this.
Previous post link: Query is taking too long to execute
Regards,
Sreekanth Munagala.
Edited by: Sreekanth Munagala on Oct 27, 2009 8:31 AM
Edited by: Sreekanth Munagala on Oct 27, 2009 8:34 AM
Hi Tubby,
Today I executed only the first query in the view and it took almost 2.5 hrs.
Here is the explain plan for this query
SQL> SET SERVEROUTPUT ON
SQL> set linesize 200
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 766 | 2448 |
| 1 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 2 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 3 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 29 | 3 |
|* 4 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 5 | VIEW | POC_ASN_PICKUP_LOCATIONS_V | 2 | 2426 | 17 |
| 6 | UNION-ALL | | | | |
| 7 | NESTED LOOPS | | 1 | 85 | 4 |
| 8 | NESTED LOOPS | | 1 | 78 | 4 |
|* 9 | TABLE ACCESS BY INDEX ROWID | PO_VENDOR_SITES_ALL | 1 | 73 | 3 |
|* 10 | INDEX UNIQUE SCAN | PO_VENDOR_SITES_U2 | 1 | | 2 |
|* 11 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | 5 | 1 |
|* 12 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
| 13 | NESTED LOOPS | | 1 | 91 | 13 |
| 14 | NESTED LOOPS | | 1 | 84 | 13 |
| 15 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 16 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
PLAN_TABLE_OUTPUT
|* 17 | TABLE ACCESS BY INDEX ROWID | FND_LOOKUP_VALUES | 1 | 71 | 10 |
|* 18 | INDEX RANGE SCAN | FND_LOOKUP_VALUES_U2 | 13 | | 2 |
|* 19 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
|* 20 | COUNT STOPKEY | | | | |
| 21 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 136 | 12 |
|* 22 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
|* 23 | COUNT STOPKEY | | | | |
| 24 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 288 | 12 |
|* 25 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
| 26 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 27 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 28 | NESTED LOOPS | | 1 | 40 | 5 |
| 29 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 11 | 3 |
|* 30 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 2 |
| 31 | TABLE ACCESS BY INDEX ROWID | HZ_PARTIES | 1 | 29 | 2 |
|* 32 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | | 1 |
| 33 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 34 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 35 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 36 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 37 | COUNT STOPKEY | | | | |
PLAN_TABLE_OUTPUT
|* 38 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 39 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 40 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 41 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 42 | COUNT STOPKEY | | | | |
|* 43 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 44 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 45 | SORT AGGREGATE | | 1 | 39 | |
| 46 | NESTED LOOPS OUTER | | 2 | 78 | 1828 |
|* 47 | TABLE ACCESS FULL | ONTC_MTC_PROFORMA_HEADERS | 1 | 24 | 1825 |
| 48 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_LINES | 5 | 75 | 3 |
|* 49 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_LINES_PK | 11 | | 2 |
| 50 | NESTED LOOPS | | 1 | 766 | 2448 |
| 51 | NESTED LOOPS | | 1 | 761 | 2447 |
| 52 | NESTED LOOPS | | 1 | 746 | 2445 |
| 53 | NESTED LOOPS | | 1 | 694 | 2443 |
| 54 | NESTED LOOPS | | 1 | 682 | 2441 |
| 55 | NESTED LOOPS | | 1 | 671 | 2439 |
| 56 | NESTED LOOPS | | 1 | 612 | 2437 |
| 57 | NESTED LOOPS | | 1 | 600 | 2435 |
| 58 | NESTED LOOPS | | 1 | 575 | 2433 |
PLAN_TABLE_OUTPUT
| 59 | NESTED LOOPS | | 1 | 552 | 2431 |
| 60 | NESTED LOOPS | | 1 | 533 | 2429 |
| 61 | NESTED LOOPS | | 1 | 524 | 2428 |
| 62 | NESTED LOOPS | | 1 | 455 | 2427 |
| 63 | NESTED LOOPS | | 1 | 429 | 2426 |
| 64 | NESTED LOOPS | | 1 | 389 | 2424 |
| 65 | NESTED LOOPS | | 1 | 368 | 2422 |
| 66 | NESTED LOOPS | | 1 | 308 | 2421 |
| 67 | NESTED LOOPS | | 1 | 281 | 2419 |
| 68 | NESTED LOOPS | | 1 | 253 | 2418 |
| 69 | NESTED LOOPS | | 1 | 214 | 2416 |
| 70 | NESTED LOOPS | | 39 | 7371 | 2338 |
|* 71 | TABLE ACCESS FULL | RCV_SHIPMENT_HEADERS | 39 | 5070 | 2221 |
|* 72 | TABLE ACCESS BY INDEX ROWID| RCV_SHIPMENT_LINES | 1 | 59 | 3 |
|* 73 | INDEX RANGE SCAN | RCV_SHIPMENT_LINES_U2 | 1 | | 2 |
|* 74 | TABLE ACCESS BY INDEX ROWID | PO_LINES_ALL | 1 | 25 | 2 |
|* 75 | INDEX UNIQUE SCAN | PO_LINES_U1 | 1 | | 1 |
|* 76 | TABLE ACCESS BY INDEX ROWID | PO_LINE_LOCATIONS_ALL | 1 | 39 | 2 |
|* 77 | INDEX UNIQUE SCAN | PO_LINE_LOCATIONS_U1 | 1 | | 1 |
|* 78 | TABLE ACCESS BY INDEX ROWID | PO_HEADERS_ALL | 1 | 28 | 1 |
|* 79 | INDEX UNIQUE SCAN | PO_HEADERS_U1 | 1 | | |
PLAN_TABLE_OUTPUT
|* 80 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_LINES_ALL | 1 | 27 | 2 |
|* 81 | INDEX UNIQUE SCAN | OE_ORDER_LINES_U1 | 1 | | 1 |
| 82 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_HEADERS_ALL | 1 | 60 | 1 |
|* 83 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | |
|* 84 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 21 | 2 |
|* 85 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
|* 86 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 40 | 2 |
|* 87 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
| 88 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIERS | 1 | 26 | 1 |
|* 89 | INDEX UNIQUE SCAN | WSH_CARRIERS_U2 | 1 | | |
|* 90 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIER_SERVICES | 1 | 69 | 1 |
|* 91 | INDEX RANGE SCAN | WSH_CARRIER_SERVICES_N1 | 2 | | |
|* 92 | TABLE ACCESS BY INDEX ROWID | WSH_ORG_CARRIER_SERVICES | 1 | 9 | 1 |
|* 93 | INDEX RANGE SCAN | WSH_ORG_CARRIER_SERVICES_N1 | 1 | | |
| 94 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 19 | 2 |
|* 95 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 1 |
|* 96 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 23 | 2 |
|* 97 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
|* 98 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 25 | 2 |
|* 99 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
| 100 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
PLAN_TABLE_OUTPUT
|*101 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 102 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 59 | 2 |
|*103 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*104 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 11 | 2 |
| 105 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
|*106 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 107 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 52 | 2 |
|*108 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*109 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 15 | 2 |
|*110 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | 5 | 1 |
I will put the predicate information in another post.
193 rows selected.
SQL> spool off
Please suggest how we can improve the performance.
Regards,
Sreekanth Munagala.
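One way to see where the time actually goes (a sketch; dbms_xplan.display_cursor requires 10g or later, and the query must be re-run with the hint) is to capture actual row-source statistics instead of optimizer estimates:
SELECT /*+ GATHER_PLAN_STATISTICS */ ...the first query of the view... ;

SELECT *
FROM   TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
Comparing the E-Rows and A-Rows columns in that output usually points to the cardinality misestimate or join order responsible for the runtime.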
Maybe you are looking for
-
Problems with Adobe creative cloud offline packages ( Windows )
Hi. When I try to install packages that are created with Adobe Creative Cloud Packager for Enterprise, they start installing and then roll back in the middle of the process without returning any error. Not really sure what can cause this as there
-
Ipad mini retina ios 8 update - useless wifi connection
Hi, I updated my iPad mini with Retina display to the latest version of iOS 8 yesterday. What a disaster that turned out to be: since updating, the wifi connection to my BT Home Hub 5 router has been terrible. It will work for a few minutes then the connection jus
-
How do you find out if the transmitter and/or receiver leaves the session ?
So, how do you find out if the transmitter and/or a receiver leaves the session? Is there an event when this happens?
-
I do not know what I did. AFAIK, I was typing normally, and then, whoosh! Half an hour's work just vanished off to the right-hand side of the window. HELP! How do I get it back?
-
Seeburger AS2: Could not create deploy file
Hello, I am using the Seeburger BIC mapper to do my EDI to XML conversion. When I try to generate the SDA file I get an error, as shown. Errors of Mapping See_E2X_ORDERS_UN_D97A : com.seeburger.bicmd.compile.JavaGenerationFailedException: Precompiler Error