Data Archive Script is taking too long to delete a large table
Hi All,
We have data archive scripts that move data for a date range to a different table. Each script has two parts: first it copies the rows from the original table to an archive table, then it deletes the copied rows from the original table. The copy runs very fast, but the delete takes too long, around 2-3 hours. The customer analysed the delete query and says it is not using an index and is doing a full table scan, yet the predicate is the primary key. Please help... More info below.
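For context, the script follows this general two-step pattern (a simplified sketch only; the real script's column list and date predicate may differ):
-- step 1: copy the date range to the archive table
INSERT INTO schema3.otw
SELECT * FROM schema1.mon_txns
WHERE dat_creation BETWEEN :start_date AND :end_date;
-- step 2: delete the copied rows from the original table
DELETE FROM schema1.mon_txns
WHERE id_txn IN (SELECT id_txn FROM schema3.otw);
COMMIT;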
CREATE TABLE "APP"."MON_TXNS"
( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
"BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_PAYER" NUMBER(12,0),
"ID_PAYER_PI" NUMBER(12,0),
"ID_PAYEE" NUMBER(12,0),
"ID_PAYEE_PI" NUMBER(12,0),
"ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
"STR_TEXT" VARCHAR2(60 CHAR),
"DAT_MERCHANT_TIMESTAMP" DATE,
"STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
"DAT_EXPIRATION" DATE,
"DAT_CREATION" DATE,
"STR_USER_CREATION" VARCHAR2(30 CHAR),
"DAT_LAST_UPDATE" DATE,
"STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
"STR_OTP" CHAR(6 BYTE),
"ID_AUTH_METHOD_PAYER" NUMBER(1,0),
"AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
"BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
"ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ENABLE,
CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
Data is first moved to the table schema3.OTW, and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
2 delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2798378986
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 2520 | 233K| 87 (2)| 00:00:02 |
| 1 | DELETE | MON_TXNS | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 2520 | 233K| 87 (2)| 00:00:02 |
| 3 | INDEX FAST FULL SCAN| OTW_ID_TXN | 2520 | 15120 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | MON_TXNS | 14260 | 1239K| 83 (0)| 00:00:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
Please help,
thanks,
Banka Ravi
'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
Your use case is why many orgs elect to use partitioning and use that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
The other solution used is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. The number of rows being deleted will then be much smaller and, if the stats are kept current, Oracle may decide to use the index.
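A rough sketch of the partitioning approach, assuming Oracle 11g or later with the (separately licensed) Partitioning option, and assuming DAT_CREATION is the column that drives the archive window:
-- interval-partitioned copy of the table (sketch only; constraints,
-- grants and indexes would still need to be carried over)
CREATE TABLE app.mon_txns_part
PARTITION BY RANGE (dat_creation)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'))
AS SELECT * FROM app.mon_txns;
-- retiring a month of data is then a metadata operation,
-- near-instant compared to a large DELETE
ALTER TABLE app.mon_txns_part
DROP PARTITION FOR (DATE '2012-12-01')
UPDATE GLOBAL INDEXES;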
Similar Messages
-
Data activation in ODS taking too long
Hi gurus,
I have loaded data into the PSA and from there into the ODS. However, the data is not being activated automatically, so I have to activate it manually.
The problem is that I have to wait a very long time: I started the data activation about an hour ago, and my load has 1.5 million records.
In SM37 the log shows the following messages:
05.02.2007 12:13:37 Job started 00 516 S
05.02.2007 12:13:37 Step 001 started (program RSODSACT1, variant &0000000000061, user ID XXXXX) 00 550 S
05.02.2007 12:13:49 Activation is running: Data target ZFIGL_O2, from 1,553 to 1,553 RSM 744 S
05.02.2007 12:13:49 Data to be activated successfully checked against archiving objects RSMPC 153 S
05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940 DBMAN 099 I
05.02.2007 12:13:49 ANALYZE TABLE "/BIC/AZFIGL_O240" DELETE DBMAN 099 I
05.02.2007 12:13:49 STATISTICS DBMAN 099 I
05.02.2007 12:13:49 SQL-END: 05.02.2007 12:13:49 00:00:00 DBMAN 099 I
05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940 DBMAN 099 I
05.02.2007 12:13:49 BEGIN DBMS_STATS.GATHER_TABLE_STATS ( OWNNAME => DBMAN 099 I
05.02.2007 12:13:49 'SAPBWP', TABNAME => '"/BIC/AZFIGL_O240"', DBMAN 099 I
05.02.2007 12:13:49 ESTIMATE_PERCENT => 1 , METHOD_OPT => 'FOR ALL DBMAN 099 I
05.02.2007 12:13:49 INDEXED COLUMNS SIZE 75', DEGREE => 1 , DBMAN 099 I
05.02.2007 12:13:49 GRANULARITY => 'ALL', CASCADE => TRUE ); END; DBMAN 099 I
05.02.2007 12:13:56 SQL-END: 05.02.2007 12:13:56 00:00:07 DBMAN 099 I
It gets stuck there at SQL-END for some time...
Is this normal? How can I improve the performance of my data loading and activation?
Thank you very much.Hi Ken.
Many thanks for your input. I think I will follow what has been suggested in the note, as follows:
4. Activation of data in an ODS object
To improve system performance when activating data in the ODS object, you can make the following entries in Customizing under Business Information Warehouse → General BW Settings → ODS Object Settings:
the maximum number of parallel processes when activating data in the ODS object, as when moving SIDs
the minimum number of data records for each data package when activating data in the ODS object, meaning you define the size of the data packages that are activated
the maximum wait time in seconds when activating data in the ODS object. This is the time the main process (batch process) waits for the dialog process that is split off before it classifies it as having failed.
However, can someone advise me what would be optimum / normal values for
1. the maximum number of parallel processes
2. the minimum number of data records for each data package
3.the maximum wait time in seconds?
Many thanks. -
Data manager jobs taking too long or hanging
Hoping someone here can provide some assistance with regard to the 4.2 version. We are specifically using BPC/OutlookSoft 4.2SP4 (and in process of upgrading to BPC7.5). Three server environment - SQL, OLAP and Web.
Problem: Data manager jobs in each application of a production appset with five applications are either taking too long to complete for very small jobs (single entity/single period data copy/clear, under 1000 records) or hanging completely for larger jobs. This has been an issue for the last 7 days. During normal operation, small DM jobs ran in under a minute and large ones took only a few minutes.
Failed attempts at resolution thus far:
1. Processed all applications from the OLAP server
2. Confirmed issue is specific to our appset and is not present in ApShell
3. Copied packages from ApShell to application to eliminate package corruption
4. Windows security updates were applied to all three servers but I assume this would also impact ApShell.
5. Cleared tblDTSLog history
6. Rebooted all three servers
7. Suspected antivirus; however, the problem persists with antivirus disabled on all three servers.
Other Observations
There are several tables in the SQL database named k2import# and several stored procedures named DMU_k2import#. My guess is these did not get removed because I killed the hung jobs. I'm not sure if their existence is causing any issues.
To make a long story short, how can I narrow down at which point the jobs are hanging, or what is taking the longest time? I have turned on Debug Script but I don't have documentation to make sense of all this info. What exactly happens when I run a Clear package? At this point, my next step is to run SQL Profiler to get a look at what is going on behind the scenes on the SQL server. I also want to rule out the COM+ objects on the web server, but I am not sure where to start.
Any help is greatly appreciated!!
Thank you,
Hitesh
Hi,
The problem seems to be related to the database. Do you have any maintenance plan for the database?
It is specific to your appset because each appset has its own database.
I suspect you have to run sp_updatestats (Update Statistics) for your database, and I think the issue with your hanging jobs will be solved.
The DMU_k2importXXX tables come from hung imports; you can delete these tables because they just grow the size of the database and are certainly not used anymore.
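A minimal sketch of both suggestions (SQL Server syntax; the database name and the exact leftover object names below are illustrative):
USE AppSetDB;  -- hypothetical appset database name
GO
EXEC sp_updatestats;  -- refresh statistics on all user tables
GO
-- drop each leftover import table/procedure by name, e.g.:
DROP TABLE dbo.k2import1;
DROP PROCEDURE dbo.DMU_k2import1;
GO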
Regards
Sorin Radulescu -
I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy files, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
Also, tell me about the performance of the iPhone 4 with iOS 7.1.1.
rabidrabbit wrote:
Can I back up my iPhone 4S to my ipad 3 (64 gb)?
no
rabidrabbit wrote:
However, now I don't have enough space in iCloud to backup either device. Why not?
iCloud only give so much space for free storage, then if you exceed the limit of 5gb you have to pay for additional storage. -
R/3 Extraction taking too long to load data into BW
Hi there,
I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking endless time.
Even the extractor checker RSA3 is taking too long to execute. I don't know why it's taking so long,
since there is not much data.
I enhanced the datasource with three fields from BSEG using user exits.
Is that the reason why it's taking so long? Does a user exit slow down the extraction process?
What measures should I take to quicken the process?
Thanks for your time
Vandana
Thanks for all your replies.
Please go through the steps I've gone through :
- Installed the Business Content and its in version 3.5
- Changed the update rules, Transfer rules and migrated the datasource to BI 7
- Enhanced 0FI_AP_4 to include three fields from the BSEG table
- Ran RSA3 and the new fields are showing but the loading is quite slow.
- Commented out the code and ran RSA3; with little difference, data is showing up
- Removed the comments and ran again; it's fine, though it takes a little more time than the previous step, but data is showing up
- Replicated the datasource into BW
- Created the info package and started the init process (before this deleted the previous stored init process)
- Data isn't loading and please see the error message below.
Diagnosis
The data request was a full update. In this case, the corresponding table in the source system does not contain any data.
System Response
Info IDoc received with status 8.
Procedure
Check the data basis in the source system.
- Checked the transformation between datasource 0FI_AP_4 and Infosource ZFI_AP_4
and I DID NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
- Replicated the datasource 0FI_AP_4 again, but no change.
Now I don't know what's happening here.
When i check the datasource 0FI_AP_4 in RSA6, i can see the three new fields from BSEG.
When i check RSA3, i can see the data getting populated with the three new fields from BSEG,
When I check the fields in the datasource 0FI_AP_4 in BW, I can see the three new fields. That shows the connection between BW and R/3 is fine, doesn't it?
Can anyone please suggest how to go forward from here?
Thanks for your time
Vandana -
SQL Statement taking too long to get the data
Hi,
There are over 2500 records in a table, and retrieving them all using 'SELECT * FROM table' takes too long to get the data, i.e. 4.3 secs.
Is there any possible way to shorten the processing time?
Thanks
Hi Patrick,
Here is the sql statement and table desc.
ID Number
SN Varchar2(12)
FN Varchar2(30)
LN Varchar2(30)
By Varchar(255)
Dt Date(7)
Add Varchar2(50)
Add1 Varchar2(30)
Cty Varchar2(30)
Stt Varchar2(2)
Zip Varchar2(12)
Ph Varchar2(15)
Email Varchar2(30)
ORgId Number
Act Varchar2(3)
select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
A."By", A."Dt",
A."Add" || ''
'' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
A."Ph", A."Email", A."ORgId", A."ID",
A."SN" "OSN", A."Act"
from "TBL_OPTRS" A where A."ID" <> 0 ';
I'm displaying all rows in a report.
If I use 'select * from TBL_OPTRS', this also takes 4.3 to 4.6 secs.
Thanks. -
I can't view my website at www.artisancandies.com, even though it's working and everyone else seems to see it. No, I don't have a firewall, and it's not because of my internet provider - I have AT&T at work, and Comcast at home. My husband can see the site on his laptop. I tried dumping my cache in both Firefox and Safari, but it didn't work. I looked at it through proxify.com, and can see it that way, so I know it works. This is so frustrating, because I used to only see it when I typed in artisancandies.com - it would never work for me if I typed in www.artisancandies.com. Now it doesn't work at all. This is the message I get in Firefox:
"The connection has timed out. The server at www.artisancandies.com is taking too long to respond."
Please help!!!
Kristen Scott
Linc, here's what I've got from what you asked me to do. I hope you don't mind, but it was simple enough to leave everything in, so you could see the progression:
Kristen-Scotts-Computer:~ kristenscott$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
Kristen-Scotts-Computer:~ kristenscott$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
WARNING: Improper use of the sudo command could lead to data loss
or the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.
To proceed, enter your password, or type Ctrl-C to abort.
Password:
com.microsoft.office.licensing.helper
com.google.keystone.daemon
com.adobe.versioncueCS3
Kristen-Scotts-Computer:~ kristenscott$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
com.google.keystone.root.agent
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
Kristen-Scotts-Computer:~ kristenscott$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
/Library/Components:
/Library/Extensions:
/Library/Frameworks:
Adobe AIR.framework
NyxAudioAnalysis.framework
PluginManager.framework
iLifeFaceRecognition.framework
iLifeKit.framework
iLifePageLayout.framework
iLifeSQLAccess.framework
iLifeSlideshow.framework
/Library/Input Methods:
/Library/Internet Plug-Ins:
AdobePDFViewer.plugin
Disabled Plug-Ins
Flash Player.plugin
Flip4Mac WMV Plugin.plugin
Flip4Mac WMV Plugin.webplugin
Google Earth Web Plug-in.plugin
JavaPlugin2_NPAPI.plugin
JavaPluginCocoa.bundle
Musicnotes.plugin
NP-PPC-Dir-Shockwave
Quartz Composer.webplugin
QuickTime Plugin.plugin
Scorch.plugin
SharePointBrowserPlugin.plugin
SharePointWebKitPlugin.webplugin
flashplayer.xpt
googletalkbrowserplugin.plugin
iPhotoPhotocast.plugin
npgtpo3dautoplugin.plugin
nsIQTScriptablePlugin.xpt
/Library/LaunchAgents:
com.google.keystone.agent.plist
/Library/LaunchDaemons:
com.adobe.versioncueCS3.plist
com.apple.third_party_32b_kext_logger.plist
com.google.keystone.daemon.plist
com.microsoft.office.licensing.helper.plist
/Library/PreferencePanes:
Flash Player.prefPane
Flip4Mac WMV.prefPane
VersionCue.prefPane
VersionCueCS3.prefPane
/Library/PrivilegedHelperTools:
com.microsoft.office.licensing.helper
/Library/QuickLook:
GBQLGenerator.qlgenerator
iWork.qlgenerator
/Library/QuickTime:
AppleIntermediateCodec.component
AppleMPEG2Codec.component
Flip4Mac WMV Export.component
Flip4Mac WMV Import.component
Google Camera Adapter 0.component
Google Camera Adapter 1.component
/Library/ScriptingAdditions:
Adobe Unit Types
Adobe Unit Types.osax
/Library/StartupItems:
AdobeVersionCue
HP Trap Monitor
Library/Address Book Plug-Ins:
SkypeABDialer.bundle
SkypeABSMS.bundle
Library/Internet Plug-Ins:
Move_Media_Player.plugin
fbplugin_1_0_1.plugin
Library/LaunchAgents:
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
com.apple.FolderActions.enabled.plist
com.apple.FolderActions.folders.plist
Library/PreferencePanes:
A Better Finder Preferences.prefPane
Kristen-Scotts-Computer:~ kristenscott$ -
Hello Friends,
The background: I am working as a conversion manager. We move the data from Oracle to SQL Server using SSMA, then apply the conversion logic, and then move the data to System Test, UAT and Production.
Scenario:
Moving 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
Questions are…
What is the best option?
If we use SSIS it's very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do anything).
Using my own script (a stored procedure) takes only 1 hour 40 minutes. I would like to know whether there is a better process to speed this up, and why SSIS is taking so long.
When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing to the transaction log?
Thanks
Karthikeyan Jothi
http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
Processing hundreds of millions of records can be done in less than an hour.
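One way to keep each transaction (and the log growth) small during such a copy is a keyset-batched INSERT...SELECT; a rough sketch, assuming an indexed ascending numeric key (all object names here are hypothetical):
DECLARE @last BIGINT = 0, @batch INT = 500000;
WHILE 1 = 1
BEGIN
    -- TABLOCK enables minimal logging under simple/bulk-logged recovery
    INSERT INTO SystemTest.dbo.txn_target WITH (TABLOCK)
    SELECT TOP (@batch) *
    FROM Conversion.dbo.txn_source
    WHERE id > @last
    ORDER BY id;
    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
    SELECT @last = MAX(id) FROM SystemTest.dbo.txn_target;
END;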
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/ -
Update statement taking too long to execute
Hi All,
I'm trying to run this update statement, but it's taking too long to execute.
UPDATE ops_forecast_extract b SET position_id = (SELECT a.row_id
FROM s_postn a
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
WHERE position_level = 7
AND b.am_id IS NULL;
SELECT COUNT(*) FROM S_POSTN;
214665
SELECT COUNT(*) FROM ops_forecast_extract;
49366
SELECT count(*)
FROM s_postn a, ops_forecast_extract b
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME));
575
What could be the reason for the update statement to execute so long?
Thanks
polasa wrote:
Hi All,
I'm trying to run this update statement. But its taking too long to execute.
What could be the reason for update statement to execute so long?
You haven't said what "too long" means, but a simple reason could be that the scalar subquery on "s_postn" is using a full table scan for each execution. Potentially this subquery gets executed for each row of the "ops_forecast_extract" table that satisfies your filter predicates. "Potentially" because of the cunning "filter/subquery optimization" of the Oracle runtime engine, which attempts to cache the results of already executed instances of the subquery. Since the in-memory hash table that holds these cached results is of limited size, and the optimization algorithm depends on the sort order of the data and could suffer from hash collisions, it's unpredictable how well this optimization works in your particular case.
You might want to check the execution plan, it should tell you at least how Oracle is going to execute the scalar subquery (it doesn't tell you anything about this "filter/subquery optimization" feature).
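For what it's worth, one common rewrite that replaces the per-row scalar subquery with a single join is a MERGE; a sketch based on the tables shown above, assuming UPPER(desc_text) identifies at most one row in s_postn (duplicates would raise ORA-30926):
MERGE INTO ops_forecast_extract b
USING (SELECT row_id, UPPER(desc_text) AS desc_up FROM s_postn) a
ON (UPPER(TRIM(b.position_name)) = a.desc_up)
WHEN MATCHED THEN
  UPDATE SET b.position_id = a.row_id
  WHERE b.position_level = 7
  AND b.am_id IS NULL;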
Generic instructions how to generate a useful explain plan output and how to post it here follow:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the [code] tag before and [/code] tag after, or the {code} tag before and after, to enhance readability of the output provided:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
In 9i and above, if the "Predicate Information" section is missing from the DBMS_XPLAN.DISPLAY output but you get instead the message "Plan table is old version" then you need to re-create your plan table using the server side script "$ORACLE_HOME/rdbms/admin/utlxplan.sql".
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
will also show the execution plan.
In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
When your query takes too long ...
and post the "tkprof" output here, too.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Attribute Change run taking too long time to complete.
Hi all,
The attribute change run has been taking too long to complete. It has to realign 50-odd aggregates, some by delta, some by reconstruction. In spite of all the aggregates, it used to finish quickly, but for the last 4-5 days it has been taking an indefinite time to finish.
Can anyone please suggest what could be causing this, and what the solution might be? It is becoming a big issue, so kindly help with your advice.
Promise to reward your answer liberally.
Regards,
Pradyut.
Hi,
Check with your functional owners in R/3 whether mass changes/realignments or classification changes are going on in master data, e.g. reassigning materials to other material groups. This causes a major realignment in BW for all the aggregates. Otherwise, check for parameter changes, patches, or missing DB stats with your SAP Basis team.
Kind regards, Patrick Rieken. -
WebI Report is taking too long time to opening
Hi All,
When I try to open a WebI report in InfoView, it takes a long time to open and refresh.
Please suggest me a solution.
Thanks in advance..
Regards,
Mahesh
Hi,
As the issue you are facing is that the WebI report is taking too long to open and refresh, I would recommend the steps below.
1. Check whether the WebI report is set to "Refresh on Open"; if yes, you probably need to uncheck it, save the report and open it again.
2. Try to run the same query against the backend database and see if it returns the data.
3. Try to refresh the report for a smaller data selection.
4. Make the report run on a specific WebI server, and when refreshing have your BOBJ admin monitor that process to see if it is going into a hung state, using high memory, etc.
5. Restart the WebI process and run again.
Thanks,
aKs -
Database open (recovery) taking too long
Hi,
I've been using your awesome Berkeley DB Java Edition for a couple of years and have been very happy with it.
I am currently facing an issue trying to open the database after a disk-full problem (which left the database unable to write, and hence not closed properly).
While recovery seems to be operating, it has been taking an inordinate amount of time - 16 hours so far. My database has data of around 200GB, which inflated to over 450GB during deletion of entries, hence gobbling up all free space on disk.
My questions are:
* Should I continue to wait for recovery?
* Is there any chance that recovery is looping?
* Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
Some other information that may help:
* The recovery has decreased the size of the last significant file, and created 3 new files since it started running.
* I have been monitoring the open files (using lsof), and they change every now and then to other files, though a good amount of its time is spent near the end of the database.
Thus, I feel like recovery is running normally, just taking too long. Please let me know your opinion.
A few other things I should mention regarding my issue:
* The database was, till yesterday, running on BDB JE 3.3.75. After running several hours of recovery, I upgraded to 4.1.10 (since I read about a possible recovery looping bug in one of the versions).
* Once 4.1.10 started recovery, it spat out errors regarding the last 2 files. Only on deleting those 2 files (the last being 0 bytes, the 2nd-last being about 5 KB) did the recovery start. Note that the older 3.3.75's recovery never complained about those files. I can post the errors here if relevant.
* Some of the jdb files (about 500 out of the 47,000 files that make up the database) are 100 MB files, since I had experimented with larger file sizes for a few days, then reverted the setting.
Would any of these above affect a successful recovery?
My setup is:
OS:Linux CentOS 5.2, 64-bit, kernel 2.6.18-92.el5
JVM: Sun Java 1.6.0_20, 64-bit
Memory: 16 GB RAM, of which 8 GB is allocated to the java process (-Xmx8000M -Xms8000M)
BDB cache set to use 6GB RAM (envconfig.setCacheSize(6000000000))
Only the BDB basic API is being used (Environment, database, cursors). We do not use DPL, or HA features.
Awaiting your kind response,
Sushant A
Hi Sushant,
* Should I continue to wait for recovery? * Is there any chance that recovery is looping?
I'm not aware of a bug that would cause recovery to loop, however, you may want to take thread dumps to see if it is progressing. It isn't easy to tell, however, since each phase of recovery is in fact a loop. What you can tell easily from the thread dumps is whether recovery is blocked (completely stopped) for some reason. I don't know of a bug that would cause this, but it's something I would check for.
Assuming it is not blocked, I suggest that you leave recovery running, and additionally (in parallel) try to obtain some information about your log. While recovery is running you can run the DbPrintLog utility, which does not itself run recovery. I suggest running the following command, which will tell us in general what your log looks like and in particular how far apart the checkpoints are:
java -jar je-x.y.z.jar DbPrintLog -h <envHome> -S > <output>
Please post the output.
If checkpoints are not running in your application for some reason, or they are running very infrequently, this can cause VERY long recoveries. Unfortunately, you may have such a problem in your app and not be aware of it, until you crash and have to recover. To guard against this sort of thing in the future, you should keep an eye on the checkpoint frequency. EnvironmentStats.getNCheckpoints and getEndOfLog can together be used to tell how much log is written between checkpoints. We will also be able to see this from the DbPrintLog -S output.
* Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
DbDump normally runs recovery. DbDump with the -r or -R option does not run recovery, but has other drawbacks. With -r, a large amount of memory may be necessary to dump an accurate representation of your data set. If this fails because you run out of memory, -R can be used, but this will dump multiple versions of each record and it will be up to you to interpret the output.
If regular recovery does not succeed, then DbDump -r is the next thing to try.
Would any of these above affect a successful recovery?
No, I don't believe so.
--mark -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95,000 rows and is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.
By the looks of it, the update is going to take 6 hrs!!!
ext_names_csv is an external table that have the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)
Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM
Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the query.
We started with Etbin's idea to create a table from the external table, so that we can index and reference it more easily, and did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
PL/SQL procedure successfully completed.
We had a look at the plan with the original SQL query that we had:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.As you can see the time has dropped to 37min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for teh new query and here is the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.As you can see, the update now takes 00:00:17 to run (need to say more?) :)
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
Oracle - Query taking too long (Materialized view)
Hi,
I am extracting billing information and storing it in 3 different tables... In order to show total billing (80 to 90 columns, 1 million rows per month), I've used a materialized view... I do not have indexes on the 3 billing tables, but I do have 3 indexes on the materialized view...
At the moment it's taking too long to query the data (running the query via Toad fails with an "Out of Memory" error message; running it via APEX provides results but takes way too long)...
Please advise how to make the query efficient...
tparvaiz,
Is it possible when building your materialized view to summarize and consolidate the data?
Out of a million rows, what would your typical user do with that amount of data if they could retrieve it readily? The answer to this question may indicate whether and how to summarize the data within the materialized view.
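To illustrate the suggestion, a pre-aggregated materialized view might look roughly like this (a sketch only; the table and column names are hypothetical, since the original DDL isn't shown):
CREATE MATERIALIZED VIEW mv_billing_monthly
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT TRUNC(billing_date, 'MM') AS billing_month,  -- one row per month/account
       account_id,
       SUM(amount) AS total_amount,
       COUNT(*) AS txn_count
FROM billing_detail
GROUP BY TRUNC(billing_date, 'MM'), account_id;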
Jeff
Edited by: jwellsnh on Mar 25, 2010 7:02 AM -
Query taking too long to finish
Hi,
I'm running a query which is
Delete from msg where ID IN (select ID from deletedtrans );
It's taking too long to complete; it had been running for 24 hours already without finishing, so I cancelled the query. I don't understand why it's taking so long; does anyone have any idea? I feel that this query should not take this long to complete.
That seems to be too little information to comment on anything.
1. How many records are there in "deletedtrans" table ?
2. How many records from "msg" table are expected to be deleted ?
3. Are statistics up-to-date on "msg" and "deletedtrans" tables ?
4. Is "ID" column defined as NULL or NOT NULL in both "msg" and "deletedtrans" tables ? (Not sure whether this will cause any problem, but...)
5. Is this statement being executed when other users/applications are accessing/updating "msg" table ?
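For points 1 and 3, a quick check against the standard Oracle dictionary views (a sketch; run as the owning schema):
-- row count of the driving table
SELECT COUNT(*) FROM deletedtrans;
-- when were the two tables last analyzed, and what does the optimizer think?
SELECT table_name, num_rows, last_analyzed
FROM user_tables
WHERE table_name IN ('MSG', 'DELETEDTRANS');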