Truncate performance on 10g vs. 8/9i
Has anyone experienced better truncate performance on 10g vs. earlier versions? We've had crippling episodes on 8i and are hoping for a noticeable improvement.
One thing a truncate does is require DBWR to write any cached dirty buffers of the table to disk before the object is truncated. At first this may not appear to make sense, but once you consider the requirement to support time-based forward recovery to the moment just before the truncate command was issued, it makes perfect sense: dirty blocks for the target table and its indexes have to be flushed to disk before the object header(s) can be marked as empty.
Then, under dictionary-managed extent management, uet$ and fet$ have to be updated for the extents. A single ST lock per database is used to single-thread access to these two tables, so multiple concurrent truncates, CREATE TABLEs, DROP TABLEs, and the extent allocations each of these tasks requires are potentially bad news, as every session attempts to grab the one ST lock.
Eliminating contention for the ST lock is why Oracle introduced temporary tablespaces (used with sort segments), and it is part of the reason behind locally managed tablespaces.
Under dictionary management you can help lessen contention for the ST lock by first defining your temporary tablespace to be of contents TEMPORARY (create tablespace temp ... temporary), or by using the newer form, create temporary tablespace temp tempfile 'xx', and by assigning extent sizes that minimize the number of extents allocated per unit of time. Converting tablespaces to locally managed is a big help here as well.
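The commands Mark describes can be sketched as follows; the tablespace names, file paths, and extent sizes are illustrative, not from the thread:

```sql
-- Dictionary-managed era: mark the temp tablespace as contents TEMPORARY
-- so sorts use sort segments instead of updating uet$/fet$ per extent.
ALTER TABLESPACE temp TEMPORARY;

-- Newer form: a true temporary tablespace backed by a tempfile.
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/db/temp2_01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- A locally managed permanent tablespace tracks extents in bitmaps
-- in the datafiles, removing uet$/fet$ (and ST lock) traffic entirely.
CREATE TABLESPACE users_lmt
  DATAFILE '/u01/oradata/db/users_lmt_01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;
```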
HTH -- Mark D Powell --
Similar Messages
-
Loosing performance in 10g against 8i
I've created a database under 10g from a dump file from an 8i database. I tried some SELECT statements on the newly created database, but they were about 2x slower, with much more disk access than on 8i. I increased the PGA_AGGREGATE_TARGET parameter and got performance back to the 8i level that way.
But because PGA_AGGREGATE_TARGET is the total memory available to all server processes attached to the instance, whereas e.g. SORT_AREA_SIZE in 8i applies to each user process separately, I tried again with more sessions, and performance in 10g was again lower than in 8i. The PGA_AGGREGATE_TARGET parameter is now much greater than the default value of 20% of the SGA. What should I do: keep growing that parameter, or could the massive disk usage in the newly created 10g database have some other cause? Thanks a lot for any advice,
P.

Make sure that you have updated statistics. This is very important. Also, I suggest you don't simply change parameters on the fly. First, understand where the bottleneck is. Have you enabled SQL tracing? Did you examine the trace file using TKProf? Have you looked at the v$session_wait and v$session_event views? Can you provide some more details on the queries that are slower, along with their TKProf output? Find the source of the problem before making any changes.
The massive disk activity could be related to paging to disk, or performing sort operations on disk.
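A minimal sketch of the tracing workflow Raj describes (the trace file name and the session id bind are illustrative):

```sql
-- In the session that runs the slow query:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... run the slow SELECT here ...

ALTER SESSION SET sql_trace = FALSE;

-- Then format the trace file from user_dump_dest at the OS prompt:
-- tkprof ora_12345.trc slow_query.txt sort=prsela,exeela,fchela

-- While the query runs, another session can watch its current wait:
SELECT event, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid_of_slow_session;
```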
-Raj Suchak
[email protected] -
Slow performance Designer 10g (on 10gR2 database)
We are busy testing Designer 10g version 10.1.2.5 (Windows XP) on a 10gR2 64-bit database (Sun Solaris). The repository was migrated from Designer 6.0.
But performance/response is rather slow. Example: at first, opening a server model diagram took 2 minutes or more. Searching the forum, we found the tip "alter system set OPTIMIZER_SECURE_VIEW_MERGING = false;". That made a big difference.
But we still have response problems: expanding the treeview for the list of tables, views or snapshots takes much longer than in Designer 6.0.
Also, opening for example the Design Editor takes longer than one normally expects, although some delay can be expected because we have a lot of applications (100) in the repository.
Is it because it is now written in Java, or are more database optimizations possible?
Paul.

Have you computed statistics using the Repository Administration Utility (RAU)? The default percentage of 20% is usually good enough, but you could go higher.
Do a View Objects in RAU and check for missing, disabled or invalid objects. If you find any, there are ways to correct the situation, mostly under the Recreate button.
Make sure that no-one else is using the repository, then press the Recreate button in RAU and use the selection labeled: Truncate Temporary Tables. Sometimes these tables get too full and can impact performance.
Under the Options menu in RAU, there is an item labeled: Enable Performance Enhancements. To be honest with you, I've never noticed this item before, and I don't know for sure what it does. Then again, I've never had any serious performance problems in Designer. It might be worth your while to back up your repository, then turn this on. -
Slow Performance Forms 10g !!!!
Hi,
I have migrated several forms from 6i to 10g, and I use an 11g database.
Most of the users, accessing from different countries, complain of slow performance, but the basic idea of the migration was to get better performance. My users mostly work on laptops.
I am unable to find where the problem is. Might this be due to network traffic, since they are accessing via the internet, or to some server-related issue? Please help me find and fix the problem. Are there any tips for getting better performance from Forms 10g?
Regards,
Suresh

So, were they running 6i via laptop and accessing via the internet before you migrated? (i.e. was it 6i client/server or 6i web forms)
Did you migrate the database too ?
Slow performance as in : it used to be seconds and now it's taking minutes ?
Steve -
Hi All,
I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is web based.
I have following queries with this respect:
- How should I start... Should I use tkprof or AWR.
- How to enable these tools.
- How to view its reports
- What should I check in these reports
- Will just increasing RAM improve performance, or should we also increase disk?
- What is CPU Cost and I/O?
Please help.
Thanks & Regards.

Here is something you might try as a starting point:
Capture the output of the following (to a table, send to Excel, or spool to a file):
SELECT
STAT_NAME,
VALUE
FROM
V$OSSTAT
ORDER BY
STAT_NAME;
SELECT
STAT_NAME,
VALUE
FROM
V$SYS_TIME_MODEL
ORDER BY
STAT_NAME;
SELECT
EVENT,
TOTAL_WAITS,
TOTAL_TIMEOUTS,
TIME_WAITED
FROM
V$SYSTEM_EVENT
WHERE
WAIT_CLASS != 'Idle'
ORDER BY
EVENT;

Wait a known amount of time (5 minutes or 10 minutes).
Execute the above SQL statements again.
Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
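The capture/wait/capture procedure above can be sketched like this (the snapshot table names are illustrative):

```sql
-- Snapshot 1: save the starting values
CREATE TABLE snap1 AS
SELECT stat_name, value FROM v$sys_time_model;

-- Wait a known interval, e.g. 5 minutes:
-- EXEC DBMS_LOCK.SLEEP(300);

-- Snapshot 2: save the ending values
CREATE TABLE snap2 AS
SELECT stat_name, value FROM v$sys_time_model;

-- Report only the statistics that moved during the interval
SELECT   s2.stat_name, s2.value - s1.value delta
FROM     snap1 s1, snap2 s2
WHERE    s1.stat_name = s2.stat_name
AND      s2.value - s1.value > 0
ORDER BY delta DESC;
```

The same pattern applies to the V$OSSTAT and V$SYSTEM_EVENT queries shown earlier.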
To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Oracle Performance on 10G with ECC 6.0
We recently upgrade from SAP 46c to ECC 6.0 with Oracle 10G.
I have implemented Note 830576 (including underscore parameters) and Note 838725 for creating statistics. We have horrible performance on SAP transactions.
Can someone share their Oracle 10G parameters on ECC 6.0. We have 12GB memory with 2 CPU on the DEV box.
Please help as this is causing issues with Development effort and unit testing.
Thanks
Tony

Tony, applying the specific settings for Oracle 10g is definitely a must. However, you will also need to make sure that the kernel settings for your hardware platform are in place. Check the storage subsystem (if you use EMC or a similar solution); at times the storage subsystem may be the culprit.
Then move to Oracle: check for indexes with poor storage quality and make sure you rebuild them regularly. Reorganize tables and make sure that the statistics are up to date.
Then move to application layer and start analyzing buffers and in general performance.
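The rebuild and statistics advice above might look like the following; the object names are illustrative, and on an SAP system you should normally drive this through the SAP-prescribed tools (BRCONNECT) rather than ad hoc:

```sql
-- Rebuild an index with poor storage quality (illustrative name)
ALTER INDEX sapr3.some_index REBUILD ONLINE;

-- Refresh optimizer statistics for one schema, only where stale
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'SAPR3',
    options => 'GATHER STALE',
    cascade => TRUE);
END;
/
```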
good luck -
Terrible Performance with 10g preview
I have serious performance problems with the 10g preview during basic operations.
Very frequently, it just locks up, using 100% CPU for 5-10 seconds.
It seems to occur every time I change window, for example
- double click on an error message in either the Compiler window or the Structure window and I have to wait 10 seconds before the cursor is active in the code editor.
- double click on a file in the navigator, 10 seconds before the opened file can be edited.
- click on an item in the HTML component palette
- close a code editor window. Can't use the next window for 10 seconds (this is less reliable than the others)
I am running NT4 SP6 on a PIII 500MHz with 512MB of memory. According to Task Manager, I am not using all the memory yet. I hit the virus-scanner problem when I first started using 903, but that is not the problem now.
Here is a dump produced as described.
Full thread dump OJVM Client VM (9.0.5.972 4dopv):
"AWT-EventQueue-0" prio: 6 state: runnable
"AWT-Shutdown" prio: 5 state: waiting for notification of monitor 0x1D1D7C24
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void sun.awt.AWTAutoShutdown.run()
[locked monitor 0x1D1D7C24]
AWTAutoShutdown.java:259
void java.lang.Thread.run()
Thread.java:536
"Java2D Disposer" prio: 10 state: waiting for notification of monitor 0x1D257D84
void java.lang.Object.wait(long)
native code
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove(long)
[locked monitor 0x1D257D84]
ReferenceQueue.java:111
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove()
ReferenceQueue.java:127
void sun.java2d.Disposer.run()
Disposer.java:97
void java.lang.Thread.run()
Thread.java:536
"WeakDataReference polling" prio: 1 state: waiting for notification of monitor 0x1E4FB124
void java.lang.Object.wait(long)
native code
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove(long)
[locked monitor 0x1E4FB124]
ReferenceQueue.java:111
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove()
ReferenceQueue.java:127
void oracle.ide.util.WeakDataReference$Cleaner.run()
WeakDataReference.java:88
void java.lang.Thread.run()
Thread.java:536
"AWT-Windows" prio: 6 state: runnable
void sun.awt.windows.WToolkit.eventLoop()
native code
void sun.awt.windows.WToolkit.run()
WToolkit.java:253
void java.lang.Thread.run()
Thread.java:536
"IdeMinPriorityTimer" prio: 1 state: waiting for notification of monitor 0x1F74216C
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void java.util.TimerThread.mainLoop()
[locked monitor 0x1F74216C]
Timer.java:403
void java.util.TimerThread.run()
Timer.java:382
"IconOverlayTrackerTimer" prio: 5 state: waiting for notification of monitor 0x1CFCA064
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void java.util.TimerThread.mainLoop()
[locked monitor 0x1CFCA064]
Timer.java:403
void java.util.TimerThread.run()
Timer.java:382
"IconOverlayTrackerTimer" prio: 5 state: waiting for notification of monitor 0x1E1D64E4
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void java.util.TimerThread.mainLoop()
[locked monitor 0x1E1D64E4]
Timer.java:403
void java.util.TimerThread.run()
Timer.java:382
"TimerQueue" prio: 5 state: waiting for notification of monitor 0x1D51E514
void java.lang.Object.wait(long)
native code
void javax.swing.TimerQueue.run()
[locked monitor 0x1D51E514]
TimerQueue.java:231
void java.lang.Thread.run()
Thread.java:536
"Finalizer" prio: 8 state: waiting for notification of monitor 0x1CDFC134
void java.lang.Object.wait(long)
native code
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove(long)
[locked monitor 0x1CDFC134]
ReferenceQueue.java:111
java.lang.ref.Reference java.lang.ref.ReferenceQueue.remove()
ReferenceQueue.java:127
void java.lang.ref.Finalizer$FinalizerThread.run()
Finalizer.java:159
"Reference Handler" prio: 10 state: waiting for notification of monitor 0x1CDFC124
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void java.lang.ref.Reference$ReferenceHandler.run()
[locked monitor 0x1CDFC124]
Reference.java:113
"WaitCursorTimer" prio: 5 state: waiting for notification of monitor 0x1F11E28C
void java.lang.Object.wait(long)
native code
void java.lang.Object.wait()
Object.java:426
void java.util.TimerThread.mainLoop()
[locked monitor 0x1F11E28C]
Timer.java:403
void java.util.TimerThread.run()
Timer.java:382
"main" prio: 5 state: idle
"VM Tasks" prio: 1 state: runnable
"Signal Dispatcher" prio: 1 state: idle -
Hi,
Will there be any performance gain in SQLs in 10g database (same server configuration) compared to 9i? If yes, what are the reasons?
Thanks
SA

As Sb said, the answer truly is "it depends", and to be more precise: you may find more hiccups along the way in 10g if you have used lots of hints in your 9i queries. The only supported optimizer mode in 10g is cost based: rather than working through a fixed set of rules, Oracle calculates the estimated cost of each step and then decides the plan. So if you were forcing an index in 9i, forcing the same plan in 10g may not really be good. That is not to say this will always be the case, but you need to do a detailed check rather than look for a generalized rule about whether your queries will be better in release X than release Y.
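One way to do the detailed check described above is to compare the 10g plan with and without the old hint; a sketch, with hypothetical table and index names:

```sql
-- Plan with the 9i-era index hint still in place
EXPLAIN PLAN FOR
SELECT /*+ INDEX(t my_index) */ * FROM my_table t WHERE col = :b1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Plan the cost-based optimizer chooses on its own in 10g
EXPLAIN PLAN FOR
SELECT * FROM my_table t WHERE col = :b1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the unhinted plan is cheaper and actually runs faster, the hint has outlived its usefulness.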
HTH
Aman.... -
Headstart performance in 10g database
A drawback of the Headstart business rules implementation is the overhead of the CAPI and TAPI code. We are going to upgrade our database from 9i to 10gR2, which according to Oracle should improve PL/SQL performance. What are the experiences with CAPI/TAPI performance in a 10g database compared to 9i, and with the effects of native compilation of PL/SQL?
We have found the cause. The CAPI performs a query on qms_rowstack very often, and in 10g this query used the wrong index. By changing the order of the columns in the qms_rws_uk1 index (package_name as first column) the query now uses that index, resulting in the same performance as in 9i.
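As DDL, the fix described above would look roughly like this; the trailing column list is only a placeholder, since the post does not give the full definition of qms_rws_uk1:

```sql
DROP INDEX qms_rws_uk1;

-- package_name moved to the leading position so the frequent
-- CAPI query against qms_rowstack can use the index again.
CREATE UNIQUE INDEX qms_rws_uk1
  ON qms_rowstack (package_name /*, remaining key columns here */);
```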
-
Horrible performance with 10g EM / Dataguard on Windows
Installed 10g EM with Data Guard a few weeks back. Just recently started to notice terrible performance, e.g. memory, paging. Nothing new has been added to the machine, so I'm pretty stumped as to why this is happening all of a sudden. Are there any unnecessary services within Oracle that can be shut down? I've been getting this message all day.
Also my Total IO/sec is 286.23
Name=OC4J_EM
Type=OC4J
Host=xx-db.xx.nt.xx.edu
Metric=UpDown Status
Timestamp=Mar 24, 2006 12:51:08 PM EST
Severity=Critical
Message=The OC4J instance is down
Rule Name=OC4J Availability and Critical States
Rule Owner=SYSMAN
thanks.
Message was edited by:
dbogesdorfer

I'm assuming you meant 10g Grid Control, because I see you talked about the Oracle Container for Java (OC4J). I do see a lot of paging in my case, and that's because I didn't have other disks to separate the software install from the repository; the installation document does talk about that.
The services critical to GC are Process Manager, ASControl and the agent. The other services in my installation are all set to manual so they didn't need any tweaking. Not sure what other services DataGuard will add...Did you DataGuard your GC repository then for redundancy?
Do you see any errors on the Management System tab? Does rebooting the server help at all? -
Slow performance after 10g upgrade
After upgrading from 9.2.0.3.0 to 10.2.0.1.0, the following query runs extremely slowly. Any suggestions on how to adjust the DB settings without rewriting this query?
SELECT SUM (ABS (ship_net_clc_amt))
FROM ship_dtl
WHERE order_section_number = '940007320686'
AND SUBSTR (misc_charge_code, 1, 2) IN ('72', '77', 'A9')
AND ics_status_flag2 IN ('W', 'S')
AND dsr_trans_date >= to_date('1/1/2008', 'MM/DD/YYYY')
AND dsr_trans_date < to_date('1/31/2008', 'MM/DD/YYYY')
AND product_base_number NOT IN (SELECT DISTINCT sku
FROM ap_smb_sku_based
WHERE fiscal_period = '200801'
AND country = 'TAIWAN')
AND quota_product_line_code IN (SELECT pl
FROM ap_pl_bu
WHERE bu = '3C')
The exec plan for 10g (optimizer mode = ALL ROWS) is
------------------------------------------------------------------------------------------------------------------------------|
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop ||
------------------------------------------------------------------------------------------------------------------------------|
| 0 | SELECT STATEMENT | | 1 | 49 | 7 (15)| 00:00:01 | | ||
| 1 | SORT AGGREGATE | | 1 | 49 | | | | ||
|* 2 | FILTER | | | | | | | ||
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SHIP_DTL | 1 | 42 | 1 (0)| 00:00:01 | 39 | 39 ||
| 4 | NESTED LOOPS | | 1 | 49 | 5 (20)| 00:00:01 | | ||
| 5 | SORT UNIQUE | | 1 | 7 | 3 (0)| 00:00:01 | | ||
|* 6 | TABLE ACCESS FULL | AP_PL_BU | 1 | 7 | 3 (0)| 00:00:01 | | ||
| 7 | PARTITION RANGE SINGLE | | 3 | | 0 (0)| 00:00:01 | 39 | 39 ||
|* 8 | INDEX RANGE SCAN | IX_SHIP_DTL_ORD_SEC_NM | 3 | | 0 (0)| 00:00:01 | 39 | 39 ||
| 9 | PARTITION RANGE SINGLE | | 1 | 39 | 2 (0)| 00:00:01 | 25 | 25 ||
|* 10 | TABLE ACCESS FULL | AP_SMB_SKU_BASED | 1 | 39 | 2 (0)| 00:00:01 | 25 | 25 ||
------------------------------------------------------------------------------------------------------------------------------|
The exec plan for 9i (optimizer mode=choose) for
----------------------------------------------------------------------------------------------------------------|
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop ||
----------------------------------------------------------------------------------------------------------------|
| 0 | SELECT STATEMENT | | 1 | 47 | 10 | | ||
| 1 | SORT AGGREGATE | | 1 | 47 | | | ||
| 2 | FILTER | | | | | | ||
| 3 | HASH JOIN SEMI | | 1 | 47 | 8 | | ||
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID| SHIP_DTL | 1 | 36 | 5 | 13 | 13 ||
| 5 | INDEX RANGE SCAN | IX_SHIP_DTL_ORD_SEC_NR | 9 | | 3 | 13 | 13 ||
| 6 | TABLE ACCESS FULL | AP_PL_BU | 2 | 22 | 2 | | ||
| 7 | TABLE ACCESS FULL | AP_SMB_SKU_BASED | 1 | 21 | 2 | 13 | 13 ||
Thanks.
Liz

ship_dtl has 94690531 records, ap_smb_sku_based has 19 records, and ap_pl_bu has 19 records.
The query returns a single row, because it calculates a sum. The query could be better written to get rid of IN and NOT IN, which are costly. But I just wonder why the plans and performance are so different?
I used ANALYZE on the other tables and it is still working fine in 10g. Now, based on the plans, do you see anything wrong?
10g -
-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib ||
-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | SELECT STATEMENT | | 1 | 50 | 10 (-550| 00:00:01 | | | | | ||
| 1 | SORT AGGREGATE | | 1 | 50 | | | | | | | ||
|* 2 | FILTER | | | | | | | | | | ||
|* 3 | PX COORDINATOR | | | | | | | | | | ||
| 4 | PX SEND QC (RANDOM) | :TQ10002 | 1 | 50 | 66 (5)| 00:00:01 | | | Q1,02 | P->S | QC (RAND) ||
|* 5 | FILTER | | | | | | | | Q1,02 | PCWC | ||
| 6 | MERGE JOIN | | 1 | 50 | 66 (5)| 00:00:01 | | | Q1,02 | PCWP | ||
| 7 | SORT JOIN | | 2 | 14 | 4 (50)| 00:00:01 | | | Q1,02 | PCWP | ||
| 8 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | ||
| 9 | PX RECEIVE | | 4 | 28 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | ||
| 10 | PX SEND HASH | :TQ10000 | 4 | 28 | 2 (0)| 00:00:01 | | | | S->P | HASH ||
| 11 | SORT UNIQUE | | 4 | 28 | 2 (0)| 00:00:01 | | | | | ||
|* 12 | TABLE ACCESS FULL | AP_PL_BU | 4 | 28 | 2 (0)| 00:00:01 | | | | | ||
|* 13 | SORT JOIN | | 1 | 43 | 62 (2)| 00:00:01 | | | Q1,02 | PCWP | ||
| 14 | PX RECEIVE | | 1 | 43 | 61 (0)| 00:00:01 | | | Q1,02 | PCWP | ||
| 15 | PX SEND HASH | :TQ10001 | 1 | 43 | 61 (0)| 00:00:01 | | | Q1,01 | P->P | HASH ||
| 16 | PX PARTITION RANGE ITERATOR | | 1 | 43 | 61 (0)| 00:00:01 | KEY | KEY | Q1,01 | PCWC | ||
|* 17 | TABLE ACCESS BY LOCAL INDEX ROWID| SHIP_DTL | 1 | 43 | 61 (0)| 00:00:01 | KEY | KEY | Q1,01 | PCWP | ||
|* 18 | INDEX RANGE SCAN | IX_SHIP_DTL_ORD_SEC_NR | 14 | | 49 (0)| 00:00:01 | KEY | KEY | Q1,01 | PCWP | ||
| 19 | PARTITION RANGE SINGLE | | 1 | 39 | 2 (0)| 00:00:01 | 14 | 14 | | | ||
|* 20 | TABLE ACCESS FULL | AP_SMB_SKU_BASED | 1 | 39 | 2 (0)| 00:00:01 | 14 | 14 | | | ||
-------------------------------------------------------------------------------------------------------------------------------------------------------------------|

Plan for 9i
-----------------------------------------------------------------------------------------------------------------------------------------------|
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop | TQ |IN-OUT| PQ Distrib ||
-----------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | SELECT STATEMENT | | 1 | 53 | 74 | | | | | ||
| 1 | SORT AGGREGATE | | 1 | 53 | | | | | | ||
| 2 | FILTER | | | | | | | | | ||
| 3 | FILTER | | | | | | | | | ||
| 4 | HASH JOIN SEMI | | 1 | 53 | 72 | | | 30,01 | P->S | QC (RAND) ||
| 5 | PARTITION RANGE ITERATOR | | | | | KEY | KEY | 30,01 | PCWP | ||
| 6 | TABLE ACCESS BY LOCAL INDEX ROWID| SHIP_DTL | 1 | 42 | 54 | KEY | KEY | 30,01 | PCWP | ||
| 7 | INDEX RANGE SCAN | IX_SHIP_DTL_ORD_SEC_NR | 21 | | 37 | KEY | KEY | 30,01 | PCWP | ||
| 8 | TABLE ACCESS FULL | AP_PL_BU | 2 | 22 | 2 | | | 30,00 | S->P | BROADCAST ||
| 9 | TABLE ACCESS FULL | AP_SMB_SKU_BASED | 1 | 39 | 2 | 14 | 14 | | | ||
-----------------------------------------------------------------------------------------------------------------------------------------------| -
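As an aside to the ANALYZE mentioned above: in 10g the cost-based optimizer relies on statistics gathered with DBMS_STATS rather than ANALYZE. A sketch for the three tables in this query (parameter choices are illustrative):

```sql
BEGIN
  -- Partitioned fact table: gather at all granularity levels
  DBMS_STATS.GATHER_TABLE_STATS(user, 'SHIP_DTL',
    granularity => 'ALL', cascade => TRUE);
  -- Small lookup tables
  DBMS_STATS.GATHER_TABLE_STATS(user, 'AP_SMB_SKU_BASED', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(user, 'AP_PL_BU', cascade => TRUE);
END;
/
```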
Performance Issue : 10g is faster and 9i slower
Hi,
Please find below the query which behaves differently in 10 g and 9i
SELECT
BMS_FACMAS_ACPMAS.FACILITY,
BMS_FACMAS_ACPMAS.ADDRESS,
SUBSTR(BMS.VENMAS.NAME, 1, 50),
BMS.ACPMAS.ACP_NO,
BMS.ASCCOD.VALUE,
BMS.ACPPAY.PAYMENT_AMOUNT,
BMS.ACPACC.CHARGES,
BMS.ACPPAY.START_DATE,
--TRUNC(BMS.ACPPAY.END_DATE),
ENDDATE.ENDDT,
BMS.ACPMAS.STATUS,
BMS.ACPMAS.STATUS_DATE,
BMS.ASVCOD.VALUE,
BMS_FACMAS_ACPMAS.GROSS_FEET,
BMS.AFRCOD.VALUE,
nvl(BMS.PROCOD.VALUE,'N/A'),
ACP_Correct.CORP_ID,
BMS_FACMAS_ACPMAS.GROSS_YARDS,
BMS_FACMAS_ACPMAS.CITY || ',' || BMS_FACMAS_ACPMAS.STATE
FROM
BMS.FACMAS BMS_FACMAS_ACPMAS,
BMS.ACPPAY,
BMS.VENMAS,
BMS.CONMAS,
BMS.ACPACC,
BMS.ACPMAS,
BMS.EMPMAS ACP_Correct,
BMS.EMPMAS ACP_AreaManager,
BMS.EMPMAS ACP_Director,
BMS.EMPMAS ACP_Reviewer,
BMS.PROCOD,
BMS.ASVCOD,
BMS.AFRCOD,
BMS.ASCCOD,
( Select Max(END_DATE) ENDDT, ACPMAS.ACP_ID from BMS.ACPPAY, BMS.ACPMAS where
BMS.ACPMAS.ACP_ID=BMS.ACPPAY.ACP_ID
GROUP BY ACPMAS.ACP_ID
) ENDDATE
WHERE
( BMS.VENMAS.STATUS='A' )
AND ( BMS.ACPMAS.STATUS <> 'X' )
AND (
BMS.ASVCOD.VALUE IN 'JANITORIAL'
AND SUBSTR(BMS.VENMAS.NAME, 1, 50) LIKE 'UNIVERSAL%'
AND BMS.ACPMAS.STATUS IN ('I', 'A', 'T')
AND ACP_Correct.CORP_ID LIKE 'ME5077'
AND ACP_AreaManager.CORP_ID LIKE '%'
AND ACP_Director.CORP_ID LIKE '%'
AND ACPPAY.START_DATE <= SYSDATE
AND ACPPAY.END_DATE >= SYSDATE
AND ENDDATE.ACP_ID=ACPMAS.ACP_ID
AND (( ACP_Correct.SUPERVISOR_ID=ACP_AreaManager.EMPLOYEE_ID )
OR ACP_Correct.EMPLOYEE_ID=ACP_AreaManager.EMPLOYEE_ID
AND (( ACP_Director.EMPLOYEE_ID=ACP_AreaManager.SUPERVISOR_ID )
OR ACP_Director.EMPLOYEE_ID=ACP_AreaManager.Employee_ID
AND ( BMS.ASVCOD.CODE_ID=BMS.ACPMAS.SERVICE_TYPE_ID )
AND ( BMS.ACPMAS.CORRECT_ID=ACP_Correct.EMPLOYEE_ID )
AND ( BMS.CONMAS.VENDOR_ID=BMS.VENMAS.VENDOR_ID )
AND ( BMS.ACPMAS.FREQUENCY_ID=BMS.AFRCOD.CODE_ID )
AND ( BMS.ACPACC.ACP_ID=BMS.ACPMAS.ACP_ID )
AND ( BMS.ACPMAS.ACP_ID=BMS.ACPPAY.ACP_ID )
AND ( BMS.ACPPAY.SCHEDULE_ID=BMS.ASCCOD.CODE_ID )
AND ( BMS.CONMAS.CONTRACT_ID=BMS.ACPMAS.CONTRACT_ID )
AND ( BMS_FACMAS_ACPMAS.FACILITY_ID=BMS.ACPMAS.FACILITY_ID )
AND ( BMS_FACMAS_ACPMAS.PROPERTY_ID=BMS.PROCOD.CODE_ID(+) )
AND BMS.ACPMAS.REVIEWER_ID = ACP_Reviewer.EMPLOYEE_ID(+)
In 10g the Query takes 5 secs while in 9i it takes more than 3 minutes.
Also find below the Explain Plan for both versions.

Hi and welcome to the forum,
Please post:
1) the output of this query, from both databases:
SQL> select name, value from v$parameter where name like '%optim%';

2) explain plans for the query, again from both databases
edit
And read:
[How to post a tuning request | http://forums.oracle.com/forums/thread.jspa?threadID=863295]
[When your query takes too long | http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]
edit2
And bookmark (for future reasons ;) )
http://tahiti.oracle.com
http://asktom.oracle.com
edit3
And always put your code between the codetags: in order to preserve indentation.
( See the OTN [FAQ | http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ] )
Edited by: hoek on Jun 19, 2009 1:26 PM -
Dear SQL Experts,
We are trying to delete 35 million rows from a table with 60 million rows. The table is non-partitioned, non-clustered, and has no LOBs.
Here's what I'm doing:
1. Creating a new table, using insert append to copy all the rows I want to retain from the original table.
2. Truncating the original table.
3. Importing the rows from the new table back into the original table.
My question is: the original table has indexes on it. Does dropping the indexes help the truncate run faster? I know it does for DELETE; I'm not sure whether indexes matter for TRUNCATE. Kindly share if you think any other method is more germane.
Thanks

Hi,
if the table is partitioned then it might be faster to insert rows using ALTER TABLE EXCHANGE PARTITION.
It would work even if there's only one partition (so one could partition the table just to make such operations faster,
and it won't affect anything else):
create table t1 (id number, y number, z number)
partition by range(id)(
partition p values less than(maxvalue)
);
insert into t1
(id, y, z)
select level id, dbms_random.value y, dbms_random.value z
from dual
connect by level <= 1e5;
create index i$t1 on t1(id);
create table t2
as
select *
from t1
where y<=0.5;
truncate table t1;
alter table t1 exchange partition p with table t2;
I wonder if it's possible to do the same thing with a non-partitioned table.
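For a non-partitioned table, a common alternative to the copy-truncate-copy-back approach in the question (not from this thread) is CTAS plus rename, so the retained rows are copied only once; note that indexes, constraints, grants, and triggers must be recreated on the new table. The table and column names here are illustrative:

```sql
-- Copy only the rows to keep; NOLOGGING minimizes redo for the bulk copy
CREATE TABLE big_table_new NOLOGGING AS
SELECT * FROM big_table WHERE keep_flag = 'Y';

-- Swap names instead of copying the rows back
DROP TABLE big_table;
RENAME big_table_new TO big_table;

-- Recreate indexes, constraints, grants, and triggers here
```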
Best regards,
Nikolay -
Hi,
This is output from a two-hour AWR report on EBS, Oracle version 10.2.0.3.
It shows a few ITL waits that should be fixed by increasing INITRANS to 20.
Currently INITRANS is set to 11.
FREELISTS is set to 4, and FREELIST GROUPS is also set to 4.
There are 8 indexes on this table (they all have the same INITRANS, FREELISTS, and FREELIST GROUPS).
The table contains 250 million records.
The table is not partitioned.
How can I find which of the 8 indexes I should deal with?
They all start with the name "RA_CUST_TRX_LINE_GL_".
Would you consider doing more things besides increasing INITRANS?
My second question is regarding the last section, which shows the same indexes waiting on buffer busy waits.
Is there an event I can use in order to find what causes the index to wait so many times?
Please note that the issue is not whether to partition the table; I am already working in a test environment on partitioning it.
I would like your advice regarding the current situation.
Thanks
tag

Segments by ITL Waits DB/Inst: xxx/xxx Snaps: 8311-8313
-> % of Capture shows % of ITL waits for each top segment compared
-> with total ITL waits for all segments captured by the Snapshot
Tablespace Subobject Obj. ITL % of
Owner Name Object Name Name Type Waits Capture
AR AR_INDEX1 RA_CUST_TRX_LINE_GL_ INDEX 10 18.18
AR AR_INDEX1 RA_CUSTOMER_TRX_LINE INDEX 9 16.36
AR AR_INDEX1 RA_CUST_TRX_LINE_GL_ INDEX 9 16.36
AR AR_INDEX1 AR_PAYMENT_SCHEDULES INDEX 5 9.09
AR AR_INDEX1 AR_PAYMENT_SCHEDULES INDEX 5 9.09
Segments by Buffer Busy Waits DB/Inst: xxx/xxx Snaps: 8311-8313
-> % of Capture shows % of Buffer Busy Waits for each top segment compared
-> with total Buffer Busy Waits for all segments captured by the Snapshot
Buffer
Tablespace Subobject Obj. Busy % of
Owner Name Object Name Name Type Waits Capture
AR AR_INDEX1 RA_CUST_TRX_LINE_GL_ INDEX 41,671 20.30
AR AR_INDEX1 RA_CUST_TRX_LINE_GL_ INDEX 22,248 10.84
AR AR_INDEX1 IL_RA_CUST_TRX_LINE_ INDEX 18,067 8.80
AR AR_INDEX1 RA_CUST_TRX_LINE_GL_ INDEX 15,571 7.58
AR AR_DATA RA_CUST_TRX_LINE_GL_ TABLE 15,075 7.34
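To tell the eight indexes apart (the AWR listing above truncates their names), the segment-level statistics carry the full object names; a sketch:

```sql
-- Rank the AR indexes by ITL waits since instance startup
SELECT   object_name, subobject_name, value itl_waits
FROM     v$segment_statistics
WHERE    owner = 'AR'
AND      statistic_name = 'ITL waits'
AND      object_name LIKE 'RA_CUST_TRX_LINE_GL_%'
ORDER BY value DESC;
```

The same view with statistic_name = 'buffer busy waits' answers the second question per segment.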
tag

Hi Mr Lewis,
You wrote :
Checking buffer busy waits - you need to know whether these are "read by other session"
or "real" buffer busy waits as this part of the report doesn't distinguish the classes.
So check the wait times again for "read by other session" and "buffer busy waits" to see what the spread is.

I rechecked the AWR report and found "read by other session" among the Top 5 events:
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 10,998 39.9
db file sequential read 2,545,797 8,793 3 31.9 User I/O
read by other session 1,081,643 2,852 3 10.4 User I/O
library cache pin 18,450 1,253 68 4.5 Concurrenc
db file scattered read 115,039 1,226 11 4.5 User I/O
-------------------------------------------------------------

And also:
 Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file sequential read 2,545,797 .0 8,793 3 142.3
read by other session 1,081,643 .0 2,852 3 60.4
library cache pin 18,450 .1 1,253 68 1.0
db file scattered read 115,039 .0 1,226 11 6.4
log file parallel write 111,366 .0 803 7 6.2
SQL*Net more data from clien 28,701 .0 661 23 1.6
enq: TX - index contention 25,492 .0 303 12 1.4
log file sync 14,065 .0 205 15 0.8
latch: cache buffers chains 670,408 .0 135 0 37.5
Log archive I/O 6,751 .0 120 18 0.4
library cache load lock 1,539 .0 97 63 0.1
buffer busy waits 234,058 .0 61 0 13.1

So there are five times more "read by other session" waits than "buffer busy waits".
What does that mean? What should I check?
Thanks Again -
Performance Problem between Oracle 9i to Oracle 10g using Crystal XI
We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
Our technical Setup:
Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
Database server is Oracle 10g
What we have concluded:
Reducing the number of tables to 1 cuts the execution time of the report from 180s to 13s. With 1 table plus the sub-report we get about 30 seconds.
We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
Oracle 10g no longer supports the /*+ RULE */ hint.
Verify DB Query:
select /*+ RULE */ *
from
(select /*+ RULE */ null table_qualifier, o1.owner table_owner,
o1.object_name table_name, decode(o1.owner,'SYS', decode(o1.object_type,
'TABLE','SYSTEM TABLE','VIEW', 'SYSTEM VIEW', o1.object_type), 'SYSTEM',
decode(o1.object_type,'TABLE','SYSTEM TABLE','VIEW', 'SYSTEM VIEW',
o1.object_type), o1.object_type) table_type, null remarks from all_objects
o1 where o1.object_type in ('TABLE', 'VIEW') union select /*+ RULE */ null
table_qualifier, s.owner table_owner, s.synonym_name table_name, 'SYNONYM'
table_type, null remarks from all_objects o3, all_synonyms s where
o3.object_type in ('TABLE','VIEW') and s.table_owner= o3.owner and
s.table_name = o3.object_name union select /*+ RULE */ null table_qualifier,
s1.owner table_owner, s1.synonym_name table_name, 'SYNONYM' table_type,
null remarks from all_synonyms s1 where s1.db_link is not null ) tables
WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM' ORDER BY 4,2,3
SQL From Main Report:
SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
SQL From Sub Report:
SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
Does anyone have any suggestions on how we can improve the report performance with 10g?
Hi Eric,
Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
While researching Metalink I came across a couple of documents describing performance problems with certain data-dictionary views in 10g. Apparently the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS changed in 10g, causing performance degradation when querying these views. These are the same views Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
Here are the Doc Ids, if you are interested:
Note 377037.1
Note 364822.1
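If the workaround in those notes is the usual one for slow dictionary-view queries in 10g, it amounts to refreshing the dictionary and fixed-object statistics so the cost-based optimizer prices the ALL_OBJECTS/ALL_SYNONYMS queries sensibly. A sketch, to be verified against the notes themselves and run as a suitably privileged user:

```sql
-- Hedged sketch of the commonly suggested fix: gather statistics on
-- the data dictionary and on the fixed (X$) objects. Both procedures
-- exist in DBMS_STATS as of 10g; this can take a while on a large
-- dictionary, so run it in a quiet window.
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
```

After gathering, re-run the Crystal "verify database" query and compare the elapsed time and plan against the 9i behaviour.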
Thanks again for your response.
Venu Boddu.