Large Uniform Extent Size = Slow TRUNCATE?
Here's the scenario...
We have a tablespace with the following storage parameter:
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32M
Users were complaining about slow TRUNCATE performance. I reproduced the problem by creating a table with 30,000 rows - the same volume the user was complaining about - in the same tablespace.
I proceeded to move the objects from the schema the user was referencing to a tablespace with:
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
... and the TRUNCATE executed in the expected time (less than a second) for the same amount of rows in the same table structure.
Why does a large UNIFORM extent size (such as 32M in this case) cause slow TRUNCATE performance? I haven't been able to find an exact cause in the forums or on Metalink thus far.
Version: Oracle DB 10.2.0.3
System Info:
Linux ilqaos01c 2.6.9-55.0.12.ELsmp #1 SMP Wed Oct 17 08:15:59 EDT 2007 x86_64
Thanks.
Robert Sislow wrote:
The Metalink article was helpful, however, the database we're on is version 10.2.0.3, and the article is referencing 9.2.0.4.
Additionally, the last few responses in this thread are referring to concurrent TRUNCATE operations. The TRUNCATE that we're running is a single-thread TRUNCATE on a very small table - about 8000 rows.
After executing a 10046 level 12 trace and using the Trace Analyzer tool, we've found that the "local write wait" event accounts for ~90% of the statement's activity on each run. Once again, the only thing we can find that might be causing this is that the extent size of the tablespace containing the slow table is set to a UNIFORM size of 32M.
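For reference, a 10046 level-12 trace of the kind described can be captured with session-level events along these lines (the tracefile identifier and table name here are illustrative, not the poster's actual objects):

```sql
ALTER SESSION SET tracefile_identifier = 'trunc_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

TRUNCATE TABLE small_table;   -- hypothetical table name

ALTER SESSION SET EVENTS '10046 trace name context off';
```

The resulting trace file in user_dump_dest can then be scanned for "local write wait" lines, or fed to Trace Analyzer as described above.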
You're using ASSM (automatic segment space management), which means you have a number of bitmap space management blocks scattered through the object.
If you're running with 32MB uniform extents, the first extent will be 4096 blocks, and there will be one level 2 bitmap, 64 level 1 bitmaps, and the segment header block at the start of the extent. With autoallocate, the first extent will start with one level 2 bitmap, one (or possibly 2) level 1 bitmap(s) and the segment header block.
When you truncate an object, all the space management blocks in the first extent (and any extents you keep) have to be reset to show 100% free space - this means they may all have to be read into the buffer cache before being updated and written back with local writes (i.e. writes by the process, not by dbwr).
So you have to wait for 66 reads and writes in one case and 3 (or 4) reads and writes in the other case. This helps to explain part of the difference. However, a local write wait should NOT take the best part of a second - so there must be a configuration problem somewhere in your setup. (e.g. issues with async I/O, or RAID configuration).
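A minimal sketch of the comparison being discussed - file paths, sizes, and table names are assumptions, not the poster's actual setup:

```sql
CREATE TABLESPACE ts_uni DATAFILE '/u01/oradata/ts_uni01.dbf' SIZE 256M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32M
  SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE ts_auto DATAFILE '/u01/oradata/ts_auto01.dbf' SIZE 256M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLE t_uni  TABLESPACE ts_uni  AS
  SELECT * FROM all_objects WHERE ROWNUM <= 30000;
CREATE TABLE t_auto TABLESPACE ts_auto AS
  SELECT * FROM all_objects WHERE ROWNUM <= 30000;

SET TIMING ON
TRUNCATE TABLE t_uni;   -- must reset ~66 bitmap blocks in the 32M first extent
TRUNCATE TABLE t_auto;  -- only 3-4 space management blocks to reset
```

If the explanation above holds, the timing difference between the two truncates should roughly track the difference in the number of bitmap blocks being reset.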
Regards
Jonathan Lewis
Similar Messages
-
Why use uniform extent allocation?
version- 11.2.0.2.0
Hello guys, I've been reading UNIFORM vs AUTOALLOCATE extent allocation.
I've read the following articles.
https://blogs.oracle.com/datawarehousing/entry/parallel_load_uniform_or_autoallocate
Ask Tom: On Loading and Extents
https://forums.oracle.com/thread/2518951
From what I understood, autoallocate trumps the uniform in all scenarios (unless I am missing something).
In the thread "AUTOALLOCATE vs UNIFORM SIZE"
for the benefits of autoallocate and uniform size allocation Kh$n wrote
Benefits of AUTOALLOCATE
* Prevents space fragmentation.
Benefits of UNIFORM extent sizes
* Prevents fragmentation.
(I don't understand the difference between those two kinds of fragmentation prevention - are those benefits one and the same?)
Even in scenarios where we know exactly how much data will be loaded, there is always a chance of extent wastage, and without extent trimming that space will be unusable.
Can someone please explain in which cases we use uniform extent allocation?
Suppose we use uniform extent allocation and have a lot of unused space from the extents allocated - can that space be reclaimed using the shrink space command for tables and indexes?
Thank You
Extent trimming, to the best of my knowledge, is something that only happens when you are using parallel query to do large loads, not something that happens during normal OLTP-type operations. As with anything called "automatic" in Oracle, though, the internals are subject to change across versions (and patchsets) and are not necessarily documented, so it is entirely possible for behaviors to change over time. Relying on specific internal behaviors is generally not a good idea.
The example I gave (assuming you reverse the truncating of A and the loading of C, as Hemant pointed out) produces "fragmentation" when you're using automatic extent management. It's not a particularly realistic scenario, but it is possible. If you never delete data, never truncate tables (and, presumably, never shrink tables), extents would never be deallocated and there would, therefore, never be holes. That is just as true of ancient dictionary-managed tablespaces as it is of locally managed tablespaces, whether you're using uniform or autoallocated extents.
Shrinking a table has nothing to do with defragmenting a tablespace. It is simply compacting the data in the table and then potentially deallocating extents. You can do that with any locally managed tablespace. There is still the possibility, of course, that you have just enough data in the table that you need to allocate 1 extra extent when you only need space for 1 row in 1 block. So there may be some number of MB of "wasted" space per segment (though, again, this is generally not something that is a practical concern since the data in tables generally changes over time and it's generally not worth the effort of worrying about a few MB).
Justin
For your third question: assuming both extents are part of the same segment, assuming that the space is actually usable based on things like the PCTUSED setting of the table, and assuming a nice, simple conventional-path insert by a single user, Oracle would use the free space in the extent for new inserts before allocating a new extent. Oracle generally doesn't allocate new extents unless it needs to (there are caveats to this - if the only blocks with free space have a relatively large fraction of their space used, such that a particular new insert only fits in 1 of the 1 million blocks in the currently allocated extents, Oracle will potentially give up before finding that one-in-a-million block and may allocate a new extent).
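To the shrink question above: compacting a table and releasing space in an ASSM tablespace looks roughly like this (table name is hypothetical; SHRINK SPACE requires 10g or later and automatic segment space management):

```sql
ALTER TABLE my_table ENABLE ROW MOVEMENT;   -- required before a shrink
ALTER TABLE my_table SHRINK SPACE CASCADE;  -- compacts table and dependent indexes

-- Or, to release only the unused space above the high water mark:
ALTER TABLE my_table DEALLOCATE UNUSED;
```

Note that shrinking compacts data within the segment and deallocates extents from the end; it does not defragment the tablespace itself, as discussed below.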
Message was edited by: JustinCave -
Any guidelines on uniform ext size?
Hi,
Are there any guidelines, to be followed for optimal performance of tablespaces, in setting the uniform extent size of locally managed tablespaces? I am particularly interested in the percentage of the total tablespace size, to be used as the extent size. Even any other guideline will be helpful.
Thanks
Yash
I don't think there is a "RULE" about that, just good sense I think.
If your biggest table is 2GB, make one tablespace (or more if you partition it) for that table with something like a 500MB extent size. Make a specific tablespace for its indexes too, with extents of around 200MB.
You will also have little tables, such as parameter tables, ranging from a few KB to 100KB => make a tablespace for these little tables with, for example, a 32KB extent size, and a tablespace for their indexes with a 16KB extent size.
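In DDL terms, that kind of size-banded layout might be sketched like this (datafile paths and names are illustrative only, and the exact sizes should come from your own segment sizing):

```sql
-- Tablespace for the one very large table: few, large, uniform extents
CREATE TABLESPACE big_data DATAFILE '/u01/oradata/big_data01.dbf' SIZE 2560M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500M;

-- Tablespace for many small parameter-style tables: small uniform extents
CREATE TABLESPACE small_data DATAFILE '/u01/oradata/small_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;
```

The point of the banding is that every extent in a given tablespace is interchangeable, so freed extents are always reusable by other segments in the same band.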
Fred -
AUTOALLOCATE or UNIFORM extents for LMTs?
Hi,
I checked metalink and other resources, but couldn't find a conclusive statement indicating as to which one of the following is the best and recommended option:
1) LMTs with AUTOALLOCATE or
2) LMTs with UNIFORM EXTENT sizes
Any pros and cons would be appreciated.
Thanks
SS
A classic example where AUTOALLOCATE is very good is the 3rd-party application, where you don't know which tables are going to be big, which ones small, and which ones won't be used at all. If you put everything into AUTOALLOCATE, it avoids wasting space but lets the big tables grow without producing a vast number of extents.
If you are in control of the application and have good information about the sizing of a few critical objects, you might choose to use UNIFORM sizing for administrative and monitoring reasons - it gives you the option of watching objects grow at a predictable rate.
If you are doing a lot of parallel work with scratch tables (CTAS, insert /*+ append */), then you might want to read up on the possible conflict between PX and AUTOALLOCATE at the following URL:
http://jonathanlewis.wordpress.com/2007/05/29/autoallocate-and-px/
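One way to see the two policies' behaviour side by side is to query the extent map of a grown segment (standard dictionary views; the segment name is hypothetical):

```sql
SELECT tablespace_name,
       bytes / 1024 AS extent_kb,
       COUNT(*)     AS extents
FROM   dba_extents
WHERE  segment_name = 'MY_BIG_TABLE'
GROUP  BY tablespace_name, bytes
ORDER  BY bytes;
```

Under AUTOALLOCATE the extent_kb column typically steps through 64K, 1M, 8M and 64M as the segment grows; under UNIFORM every row shows the one fixed size.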
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
In my first pass of the OATM (Oracle Apps Tablespace Model) utility in release 11.5.10.2, the database actually grew by almost 40G, to 285G. Not huge growth, but not quite what I expected. We migrated to uniform extents of either 128K or 1MB, depending on the tablespace.
After the migration, many of the empty/unused tables took a larger initial extent than they had before. The old initial extent was usually either 80K or 40K. I counted over 20,000 objects (tables, indexes, LOBs) that grew from their old initial allocation to either 128K or 1MB.
So I'm thinking of migrating by schema instead of all schemas at once, and migrating the unused schemas to a few tablespaces with smaller initial extents. Has anybody else pursued a similar strategy? It means future DBAs might have the task of migrating the unused tables to active tablespaces if those products ever do become licensed here, which seems to be one of the downsides of that strategy. I'm also considering migrating some of the larger objects to a 10MB initial extent.
Hi;
Please check below notes for extent strategy
Oracle Applications Tablespace Migration Utility User Documentation 269291.1
Oracle Applications Tablespace Model Release 11i - Tablespace Migration Utility 248857.1
Oracle Applications Tablespace Model FAQs 269293.1
New Oracle Applications Tablespace Model and Migration Utility 248173.1
Oracle® Applications Concepts Release 11i (11.5.10) Part No. B13892-01
Hope it helps
Regards
Helios -
Uniform SGA sizes or not?
Our shop is currently running Oracle Applications 11.5.10.2 on a single node database(10.2.0.2) on Sun Solaris10. We are planning to migrate to RAC(10.2.0.3) and have started work on our development servers. We are planning our production architecture and have a question related to SGA sizing between nodes.
We are trying to get a better understanding of what our SGA sizes on each of our nodes should be if we have one large-capacity node and several smaller-capacity nodes. By capacity I mean the amount of memory and number of CPUs; the biggest concern is memory. On our large box we will have 48G, and on each subsequent node we will have 32G. The SGA on the big node is currently about 12G and the PGA 8G. Combined with other memory requirements, we can consume 28-30G on that server. My questions are: What is Oracle's recommendation for sizing the SGAs on the smaller nodes? If we size them based on our large node, would we run into memory shortages on the smaller nodes? Does RAC require a uniform SGA size across all nodes? Thanks.
Oracle does not require the same SGA on all nodes. All machines that are intended to share the same workload will typically have similar SGA sizing. However, machines that are intended to carry different workloads will typically have different SGA configurations.
The SGA is node specific and is defined by the workload expected on that node. Realize that the SGA has a number of pools, the major ones being 'data buffer' and 'shared pool'.
In very crude terms, the shared pool is used to handle the SQL & PL/SQL, and the buffer is used to handle the data being affected by the SQL and PL/SQL.
If a node is designated to handle a specific workload ("this is the batch machine", "this is the backup machine", "these machines are for the customer service team's web screens"), the SQL thrown at the node will potentially be different from any other node's, and it should be sized for that workload. This is standard 'performance tuning'.
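Since each instance has its own parameter settings, differently sized SGAs across RAC nodes are just SID-qualified spfile entries - a sketch, with instance names and sizes as assumptions:

```sql
ALTER SYSTEM SET sga_target = 12G SCOPE=SPFILE SID='PROD1';  -- large node
ALTER SYSTEM SET sga_target = 8G  SCOPE=SPFILE SID='PROD2';  -- smaller node
ALTER SYSTEM SET sga_target = 8G  SCOPE=SPFILE SID='PROD3';  -- smaller node
```

Settings qualified with SID='*' act as the default for any instance without its own entry.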
Workload segregation is typically accomplished using the 'Service' capability of the database and listener.
IOW, RAC does not eliminate the DBA's need to tune based on workload. The tuning is done at the instance level. -
Edit next extent size of the Cluster table
Hi Guys
I need to change the next extent size of a table.
I ran se14 but i am not able to get into edit mode, because there is no button for edit mode.
Reason: Cluster table
Two questions:
1. Why is there no Edit button? Is it because this table does not exist at the DB level?
2. How can I change the next extent size for a cluster table from the SQL prompt, or from BRTOOLS if possible?
Information:
I am facing this issue only in the DEV and QAS boxes, whereas in Production it's fine.
Regards
Ricky
Edited by: Ricky kayshap on Dec 9, 2008 3:52 PM
Hi,
Cluster tables don't exist in the DB; because of that, you can't change their extents at the DB level.
If you are experiencing a space issue, I would suggest checking the underlying transparent tables and making changes to those.
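At the database level, changing the next extent of an underlying table is plain DDL (the schema and table names here are placeholders; in an SAP system such changes are normally made through BRTOOLS or SE14 so the tools stay consistent):

```sql
ALTER TABLE sapr3.some_table STORAGE (NEXT 10M);
```

Note that in a locally managed tablespace the NEXT setting is largely cosmetic: actual extent sizes are determined by the tablespace's UNIFORM or AUTOALLOCATE policy, which may be why the value cannot usefully be edited.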
hope this helps.
Kalyan. -
We are halfway through a book that comprises 100 single-page articles. However, it is already nearly 500 MB, and this isn't sustainable.
Does the following affect the file size:
Is the Folio file size affected by the number of individual articles? Would it be smaller if we had stacks of, say, 10 articles each with 10 pages, rather than 100 single pages?
Every page has a two-picture (JPG) object state: the first image is an extreme enlargement that is visible for only about a second before the full-frame image appears. Each page also has a caption, using a Pan overlay, that can be dragged onto the page with a small tab.
Does an Object State increase the file size over and above the images contained within it?
We have reduced the JPGs to the minimum acceptable quality and there is no video in the Folio.
Any ideas would be much appreciated.
800 MB worth of video sounds crazy.
Of course, a high number of videos can bring you to that.
I've seen bigger DPS apps. I think the Apple limit lies around 4 GB (remember,
that is more than 25% of a whole 16 GB iPad).
The mp4 video codec does a really good job while keeping the quality high.
And the human eye is more forgiving to quality when it comes to moving
images compared to still imagery.
I wrote a collection of tips and ideas on how to reduce your file size:
http://digitalpublishing.tumblr.com/post/11650748389/reducing-folio-filesize
—Johannes
(mobil gesendet. fat fingers. beware!)
Am 06.12.2011 18:32 schrieb "gnusart" <[email protected]>:
Large Block Chunk Size for LOB column
Oracle 10.2.0.4:
We have a table with 2 LOB columns. The average blob size of one of the columns is 122K, and the other column's is 1K, so I am planning to move the column with the big blob size to a 32K chunk size. Some of the questions I have are:
1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create the table with a 32K chunk size in the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
2. Currently db_cache_size is set to "0", do I need to adjust some parameters for large chunk/block size?
3. If I create a 32K chunk, is that chunk shared with other rows? E.g., if I insert a 2K blob, would the remaining 30K be available to other rows? The following link says the 30K would be wasted space:
[LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
Below is the output of v$db_cache_advice:
select
size_for_estimate c1,
buffers_for_estimate c2,
estd_physical_read_factor c3,
estd_physical_reads c4
from
v$db_cache_advice
where
name = 'DEFAULT'
and
block_size = (SELECT value FROM V$PARAMETER
WHERE name = 'db_block_size')
and
advice_status = 'ON';
C1 C2 C3 C4
2976 368094 1.2674 150044215
5952 736188 1.2187 144285802
8928 1104282 1.1708 138613622
11904 1472376 1.1299 133765577
14880 1840470 1.1055 130874818
17856 2208564 1.0727 126997426
20832 2576658 1.0443 123639740
23808 2944752 1.0293 121862048
26784 3312846 1.0152 120188605
29760 3680940 1.0007 118468561
29840 3690835 1 118389208
32736 4049034 0.9757 115507989
35712 4417128 0.93 110102568
38688 4785222 0.9062 107284008
41664 5153316 0.8956 106034369
44640 5521410 0.89 105369366
47616 5889504 0.8857 104854255
50592 6257598 0.8806 104258584
53568 6625692 0.8717 103198830
56544 6993786 0.8545 101157883
59520 7361880 0.8293 98180125
With only a 1K LOB you are going to want to use an 8K chunk size; as per the reference above to the Oracle document on LOBs, the chunk size is the allocation unit.
Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
The LOB data type is not known for being space efficient.
There are major changes available on 11g with Secure Files being available to replace traditional LOBs now called Basic Files. The differences appear to be mostly in how the LOB data, segments, are managed by Oracle.
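For question 1, a 32K chunk does require a tablespace with a 32K block size, which in turn needs a non-default buffer cache configured. A sketch, with all names and sizes as illustrative assumptions:

```sql
-- A buffer cache for the non-default block size must exist first
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;

CREATE TABLESPACE lob32k DATAFILE '/u01/oradata/lob32k01.dbf' SIZE 1G
  BLOCKSIZE 32K;

CREATE TABLE docs (
  id    NUMBER PRIMARY KEY,
  big   BLOB,
  small BLOB
)
LOB (big)   STORE AS (TABLESPACE lob32k CHUNK 32768)  -- ~122K average: few chunks per LOB
LOB (small) STORE AS (CHUNK 8192);                    -- ~1K average: one block-sized chunk
```

The chunk size must be a multiple of the block size of the tablespace holding that LOB segment, which is why the 122K column goes to the 32K tablespace while the 1K column stays at the default 8K.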
HTH -- Mark D Powell -- -
Can we change the initial extent size of tablespaces
Hi All,
Can we change the initial extent size of tablespaces.
Oracle version-11.2.0.1
OS - IBM AIX
Please suggest.
Thanks and Regards,
There is no way to redefine the initial extent other than by dropping and recreating the object. But you can try to deallocate the unused space beyond the high water mark (alter table table_name deallocate unused keep 0;), as shown in the next demo:
Madrid @ Re: Resizing initial extent
Regards
Girish Sharma -
One of my Keynote files has grown to 704 MB and now takes forever to save. How can I reduce the size of this file? I suspect some photos in the file are larger in MB size then they need to be.
Thanks
You'd need to try exporting your images from iPhoto as smaller images before using them in Keynote. I'm not sure if there's a simple way to compress the images that are already in Keynote.
-
How to display graphics larger than canvas size?
How do I display graphics larger than canvas size in Java AWT?
I tried setting the canvas size to a value larger than my monitor size, and then adding scroll bars to the canvas, but the scroll bars and the canvas won't go beyond the monitor size in pixels, which is only 800, so the large graphic I try to display gets cut off at the bottom.
How can I overcome this problem? Has anybody encountered a similar dilemma before?
import java.awt.*;
import java.awt.event.*;

public class AWTSizing {
    public static void main(String[] args) {
        LargeCanvas canvas = new LargeCanvas();
        ScrollPane scrollPane = new ScrollPane();
        scrollPane.add(canvas);
        Frame f = new Frame();
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });
        f.add(scrollPane);
        f.setSize(400, 400);
        f.setLocation(200, 200);
        f.setVisible(true);
    }
}

class LargeCanvas extends Canvas {
    int w, h;
    final int PAD = 10;
    Rectangle r1, r2, r3;
    Rectangle[] rects;
    boolean firstTime;

    public LargeCanvas() {
        w = 360;
        h = 360;
        firstTime = true;
    }

    public void paint(Graphics g) {
        super.paint(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);
        if (firstTime)
            initShapes();
        g2.setPaint(Color.red);
        g2.draw(r1);
        g2.draw(r2);
        g2.draw(r3);
    }

    private void initShapes() {
        r1 = new Rectangle(w/4, h/4, w/2, h*3/4);
        r2 = new Rectangle(w/2, h/2, w/2, h/2);
        r3 = new Rectangle(w*5/8, h/6, w*3/5, h*2/3);
        rects = new Rectangle[] { r1, r2, r3 };
        firstTime = false;
        // The preferred size now exceeds the canvas bounds, so ask the
        // enclosing ScrollPane to re-lay out and show its scroll bars.
        invalidate();
        getScrollPane().validate();
    }

    private ScrollPane getScrollPane() {
        ScrollPane scrollPane = null;
        Component c = this;
        while ((c = c.getParent()) != null) {
            if (c instanceof ScrollPane) {
                scrollPane = (ScrollPane) c;
                break;
            }
        }
        return scrollPane;
    }

    public Dimension getPreferredSize() {
        Dimension d = new Dimension(w, h);
        if (rects == null)   // before initShapes has run
            return d;
        // Grow the preferred size to enclose every rectangle plus padding.
        for (int j = 0; j < rects.length; j++) {
            Rectangle r = rects[j];
            if (r.x + r.width + PAD > w)
                d.width += r.x + r.width + PAD - w;
            if (r.y + r.height + PAD > h)
                d.height += r.y + r.height + PAD - h;
        }
        return d;
    }
}
Migrating LONG RAW to BLOB and optimizing extent size
Hi all,
I got a quite fragmented table with a LONG RAW column I want to migrate to BLOB and defragment.
DB version is Oracle9i Release 9.2.0.4.0 and this is a production environment.
I know MOVE and/or CTAS are not possible with LONG RAW columns
So, how can I do that? Is ALTER TABLE MODIFY the only way to migrate from LONG RAW to BLOB?
Since ALTER TABLE MODIFY will lock the whole table preventing any DML operation, I need at least a rough estimate of the time needed for this operation. How can I do that?
Since this table is quite fragmented, I also want to rebuild it using a different extent size.
I think I should issue a ALTER TABLE MOVE... after having performed the "ALTER TABLE MODIFY".
Can I do something better to minimize unavailability to DML operations?
thanks,
andrea
Hi,
Is this an OCCI question?
I don't see that "to_blob" is documented anywhere. The "to_lob" function can be used to convert long raw columns, but its use is pretty specific and certainly not for general query use.
Regards,
Mark
EDIT1: Well, my local documentation set does not have "to_blob" in it at all. However, it is in the 11.1 SQL Language Reference on OTN:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/functions186.htm#sthref2358
Despite the fact that the documentation mentions "long raw" the function appears to only work with "raw" data in my 11.1 tests.
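On 9.2 the usual conversion path is the SQL TO_LOB function in a CTAS (or INSERT ... SELECT) rather than TO_BLOB - a sketch with hypothetical table and column names:

```sql
-- TO_LOB on a LONG RAW column yields a BLOB (on a LONG column it yields a CLOB)
CREATE TABLE docs_new
  TABLESPACE new_ts
AS
  SELECT id,
         TO_LOB(payload) AS payload
  FROM   docs_old;
-- Then drop/rename, and rebuild indexes, constraints and grants.

-- Alternatively, in place (locks the table for the duration):
-- ALTER TABLE docs_old MODIFY (payload BLOB);
```

The CTAS route has the advantage that the new table can be created in a differently sized tablespace in the same pass, addressing the fragmentation concern, at the cost of temporarily doubling the storage.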
What's your goal here?
Edited by: Mark Williams on Jun 8, 2009 7:15 PM -
Large .bpel file size vs performance
How does a large .bpel file size affect performance? Say I have a process of 0.9 MB with around 10,000 lines - how does this affect instance creation, fetching, and message creation during the process life cycle?
Edited by: arababah on Mar 8, 2010 7:23 AM
Johnk93 wrote:
MacDLS,
I recently did a little house-cleaning on my startup drive (only 60GB) and now have about 20GB free, so I don't think that is the problem.
It's probably not a very fast drive in the first place...
I know that 5MB isn't very big, but for some reason it takes a lot longer to open these scanned files in photoshop (from aperture) than the 5MB files from my camera. Any idea why this is?
Have a look at the file size of one of those externally edited files for a clue - it won't be 5MB. When Aperture sends a file out for editing, it creates either a PSD or an uncompressed TIFF after applying any image adjustments that you've applied in Aperture, and sends that out. Depending on the settings in Aperture's preferences this will be in either 8-bit or 16-bit.
As a 16-bit uncompressed TIFF, a 44 megapixel image weighs in at a touch over 150MB...
Ian -
My wrist is 225mm. Will I be able to find a watch band to fit my larger-than-average size?
There are more third party accessories available for iPhones than any other phone, best I can tell. It seems not unreasonable to expect that there will be a similar market for Apple Watch accessories.