Large Block Chunk Size for LOB column
Oracle 10.2.0.4:
We have a table with 2 LOB columns. The average blob size of one column is 122K and of the other is 1K, so I am planning to move the column with the big blobs to a 32K chunk size. Some questions I have are:
1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create the table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages and disadvantages of each approach?
2. Currently db_cache_size is set to "0"; do I need to adjust some parameters for the larger chunk/block size?
3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K LOB, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
[LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
Below is the output of v$db_cache_advice:
select
size_for_estimate c1,
buffers_for_estimate c2,
estd_physical_read_factor c3,
estd_physical_reads c4
from
v$db_cache_advice
where
name = 'DEFAULT'
and
block_size = (SELECT value FROM V$PARAMETER
WHERE name = 'db_block_size')
and
advice_status = 'ON';
C1 C2 C3 C4
2976 368094 1.2674 150044215
5952 736188 1.2187 144285802
8928 1104282 1.1708 138613622
11904 1472376 1.1299 133765577
14880 1840470 1.1055 130874818
17856 2208564 1.0727 126997426
20832 2576658 1.0443 123639740
23808 2944752 1.0293 121862048
26784 3312846 1.0152 120188605
29760 3680940 1.0007 118468561
29840 3690835 1 118389208
32736 4049034 0.9757 115507989
35712 4417128 0.93 110102568
38688 4785222 0.9062 107284008
41664 5153316 0.8956 106034369
44640 5521410 0.89 105369366
47616 5889504 0.8857 104854255
50592 6257598 0.8806 104258584
53568 6625692 0.8717 103198830
56544 6993786 0.8545 101157883
59520 7361880 0.8293 98180125
With only a 1K LOB you are going to want an 8K chunk size; per the reference in the thread above to the Oracle document on LOBs, the chunk size is the allocation unit.
Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
The LOB data type is not known for being space efficient.
There are major changes in 11g, with SecureFiles available to replace traditional LOBs, now called BasicFiles. The differences appear to be mostly in how the LOB data segments are managed by Oracle.
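As an illustration of the two approaches discussed above, the DDL might look like this (table, column, and tablespace names are hypothetical; a sketch only, not tested on 10.2.0.4):

```sql
-- Option 1: 32K chunk in the existing 8K-block tablespace.
-- CHUNK must be a multiple of the block size; 32768 = 4 x 8K blocks.
ALTER TABLE doc_store MOVE
  LOB (big_doc) STORE AS (CHUNK 32768);

-- Option 2: a dedicated 32K-block tablespace.
-- A buffer cache for the non-default block size must exist first.
ALTER SYSTEM SET db_32k_cache_size = 256M;
CREATE TABLESPACE lob_ts_32k
  DATAFILE '/u01/oradata/lob_ts_32k_01.dbf' SIZE 4G
  BLOCKSIZE 32K;
ALTER TABLE doc_store MOVE
  LOB (big_doc) STORE AS (CHUNK 32768 TABLESPACE lob_ts_32k);
```

Option 2 gives the large-LOB I/O its own buffer pool but adds a tablespace and a cache parameter to manage; option 1 is simpler and still gets you the 32K allocation unit.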
HTH -- Mark D Powell --
Similar Messages
-
Adjusting chunk size for virtual harddisks
My data partition with VHD images is constantly running out of space, so I decided to repartition the drive in the next few days. While doing that, I will redo most or all of the image files to sparsify the data inside the guests. There is one issue:
To reduce the amount of space needed on the host I want to reduce the "allocation chunk size" for (dynamically expanding/sparse) virtual hard disks. It's my understanding that when a guest writes to a filesystem block, the host actually allocates more than the "guest block size". For example, if a guest ext3 filesystem has a 4K block size and it writes to block #123, the host will not just allocate space for this single 4K block at offset #123; it may allocate a much larger chunk in case the guest also attempts to write to #124, and so on.
A few months ago I read somewhere that this "allocation chunk size" can be adjusted, either when the VHD image is created or globally for all images. It was done with some PowerShell cmdlet, AFAIK.
For my purpose I want to reduce it to a minimum, even if it comes with some performance cost.
How can this property be adjusted?
Thanks.
Olaf
Hi Olaf,
It seems that this is beyond my ability to explain fully.
But I have read this article, which mentions the interaction between a VHD and the physical disk:
http://support.microsoft.com/kb/2515143
If I understand correctly, the physical disk and virtual disk only have two "sector" sizes (512 bytes and 4K; VHD only supports 512). As for the "allocation chunk size" you mentioned, I think it is determined by the file system (such as NTFS or ext3).
Maybe the PowerShell cmdlet you mentioned is Set-VHD; it has a parameter PhysicalSectorSizeType with only two values, 512 and 4096.
For details please refer to the following link:
http://technet.microsoft.com/en-us/library/hh848561.aspx
Hope this helps. Best Regards
Elton Ji
-
Change Font Size for Table Column
Hi,
How can we set different font sizes for table columns?
Thanks,
Uma.A
Hi,
Set the Design property of the table column.
Also check this link:
http://help.sap.com/erp2005_ehp_03/helpdata/EN/66/18b44145143831e10000000a155106/frameset.htm
Thanks -
Set custom font size for one column
hey guys! I am trying to decrease the font size for one column based on an LOV and used
style="font-size:8pt"
It does not work though! Do you have a hint?
brgds,
seb
Hello Seb,
>> Might it be important whether the column is of type "...based on lov"?
If by "...based on lov" you mean “Display as Text (based on LOV, does not save state)” the answer is yes. This type of column is not treated as an updatable column (the implementation is by using a regular <td> tag and not the <input> tag) so the Apex engine ignores the Element Attributes field.
As far as I can tell, this column can only be styled in the query itself (and you need to implement the LOV yourself).
Another option is to use a select list and disable it, but the column will appear in gray, and with a small font size it doesn't look that good.
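As a sketch of styling the column in the query itself (table and column names are hypothetical), you can wrap the value in HTML and set the column's "Display As" to Standard Report Column so the markup is not escaped:

```sql
select e.empno,
       e.ename,
       '<span style="font-size:8pt">' || d.dname || '</span>' as dept_display
from   emp e,
       dept d
where  e.deptno = d.deptno
```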
Regards,
Arie.
♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
♦ Forthcoming book about APEX: Oracle Application Express 3.2 – The Essentials and More -
Avoiding the LOB chunk updates as LOB columns are not propagated.
Hi,
We have a setup where we are using Streams along with the Messaging Gateway to propagate changes to WebSphere MQ. We are deleting the LOB columns at the capture process itself using the delete_column function, but in the case of LOB tables, insert statements propagate in two parts (one insert and one update, which is for the LOB column). We need to eliminate this second (update) statement, as we do not require it. Any help would be greatly appreciated.
Regards,
Ankit
This amounts to a transformation on the capture. You have no choice but to abandon the built-in function and create a transform function, attaching it to the capture using an action context. Action contexts are processes that fire automatically.
I wrote a note on this; it is not an easy matter, but there is enough information on how to do it.
http://sourceforge.net/apps/mediawiki/smenu/index.php?title=How_to_Transform_capture
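For reference, attaching a custom transform function to a capture rule can be sketched as follows (the rule and function names are hypothetical; the DBMS_STREAMS_ADM procedure sets up the action context for you):

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.capture_lob_tab_rule',
    transform_function => 'strmadmin.strip_lob_piece_update');
END;
/
```

The transform function itself must accept and return ANYDATA; the note linked above covers how to write one.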
If you nevertheless find a way to do this at the capture site using a built-in function, please let us know. -
Optimizing chunks size for filesending with NetStream.send() method
Hi,
I would like to implement a p2p filesending application. Unfortunately object replication is not the best solution for this so I need to implement it myself.
The process is the following:
Once the user has selected the file for sending, I load it into memory. Once that completes, I cut the file's data into chunks and send these chunks with the NetStream.send() method, so that I can track the progress of the transfer.
How can I optimize the chunk size for the fastest transfer, and what is the best size for a chunk?
Hi there
Please submit a Wish Form to ask for some future version of Captivate to offer such a feature! (Link is in my sig)
Cheers... Rick
Helpful and Handy Links
Captivate Wish Form/Bug Reporting Form
Adobe Certified Captivate Training
SorcerStone Blog
Captivate eBooks -
2014 In-Memory Table Bucket Size for varchar column
I'm trying to create an in-memory table with a hash index on a varchar column. For a numeric field, the bucket count is supposed to be twice the number of unique values, but how do you calculate the BUCKET_COUNT for a varchar(20) column? I have an Email column and I want to create an index on it, but I don't know what the BUCKET_COUNT should be. I cannot find any help about it; every tutorial or help page explains hash indexes with numeric values only.
thanks!
I do have a question, though. What happens if there is a hash collision? Yes, things slow down a little, but I wonder how much. A hash table twice the size (row count) of the data can be pretty large, especially if the data is not unique and is going to produce collisions anyway! I assume it will still work even if the hash table bucket count is half the number of data rows. Remember, Hekaton is said to be 100x faster, so what if a few collisions slow it down to 97x?
Well, it is not 100x; I think they have achieved something like 30x in their demos.
I am not sure that you can have non-unique hash indexes, but in any case it only makes sense if duplicates are occasional.
An occasional hash collision is not going to cost you a lot, but if you started with a bucket count of one million and now have five million rows, you are certainly losing performance.
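For reference, a hash index on a varchar column might be declared like this (SQL Server 2014 syntax; the table and index names are hypothetical, and note that 2014 requires a BIN2 collation on character index key columns in memory-optimized tables):

```sql
CREATE TABLE dbo.Users (
    UserId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    Email  VARCHAR(20) COLLATE Latin1_General_100_BIN2 NOT NULL
        INDEX ix_users_email HASH WITH (BUCKET_COUNT = 2000000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The engine rounds BUCKET_COUNT up to the next power of two, so 2,000,000 becomes 2,097,152; the usual guidance is one to two times the expected number of distinct key values.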
Erland Sommarskog, SQL Server MVP, [email protected] -
Maximum size for document column data
Good afternoon:
I am trying to upload a new document to my search index that has a field containing PDF searchable text data at a size of 247 KB. I am getting the following in a 207 response:
One or more document fields are too large to process.
What is the maximum size of data for upload? I checked this link (http://msdn.microsoft.com/en-us/library/azure/dn798934.aspx) but I'm not sure it is defined there. Thanks!
Hi Pablo,
I recently encountered the same symptom (using the 2014-07-31-Preview API). I looked at the data we are trying to send, and one of the fields has about 78 KiB of text. That field is marked as "searchable" and "suggestions". Would the presence of "suggestions" imply a similar limit on field size?
I can see now that the "suggestions" attribute has been removed from the latest API. If I switch to the latest and use only "searchable", I assume the issue should go away?
Thanks. -
Font Size for Favorites column too small in Leopard
I just installed Leopard and find that the font used in the Favorites column is smaller than that used by Tiger and is too small for me to read. I see that the View Options allow font size to be changed for the file and folder names in finder windows, but I can't find anywhere to set the font of the Favorites column. Can anyone help me with a hack?
Guess not. Thanks anyway.
-
Large iCloud backup size for some apps
Was poking around trying to get my backup down to size and noticed a few apps are major offenders:
Facebook - 28 MB
Words for Friends - 3.6 MB
Yelp 8.7 MB
NYC Mate - 68.3 MB
I'm not a programmer, but it seems to me the only thing that should be getting backed up is perhaps my login credentials. This shouldn't be an enormous file that gets uploaded each night. One of those apps, NYC Mate, doesn't even have a login, so I wonder what it's sending to the cloud.
So, this is more of a curious gripe than a complaint. Should we be telling app developers to slim down what gets backed up each night in the interest of preserving bandwidth? Or is this a non-issue?
This needs to be addressed because you will not be able to use an incomplete backup. Not being able to back up to iCloud can be caused by a corrupt existing backup that needs to be deleted, or by data on your device that is causing the backup to fail. To troubleshoot, try deleting your last iCloud backup (if you have one) by turning off iCloud Backup in Settings>iCloud>Storage & Backup, then tap Manage Storage, tap your device under Backups, then tap Delete Backup. Then go back, turn iCloud Backup back on, and try backing up again.
If it still won't back up, you may have an app or something in your camera roll that is causing the backup to fail. To locate which one, go to Settings>iCloud>Storage & Backup>Manage Storage, tap the name of your device under Backups, under Backup Options tap Show All Apps, then turn them all Off (including the camera roll) and try backing up again. If the backup is successful, then the camera roll and/or one of your apps is causing the failure, and you'll have to locate the culprit by process of elimination: turn the camera roll On and try backing up again; if it succeeds, turn some of your apps On and try again; repeat until it fails. Eventually you'll locate the problem app and can exclude it from your backup. -
Header1 size is required for Table Column header
Hi Friends,
I wanted to use the Header1 size for table column headers. Please help me with how to do this.
Now I am using an external label with the Header1 size above the table, but the labels are not aligned properly with the table columns.
Regards,
Lakshmi Prasad.
Hi,
For headers, the design property is not available, so you can't change the font. The other option is to change the theme (not tried personally).
Regards
Ayyapparaj -
What is block size for dma transfers? Can it be set?
Can't find Application Note 011, "DMA Fundamental on Various PC Platforms" on the NI website.
Can someone please send me the link?
What I'm trying to figure out is the packet size (chunk size) for DMA transfers on NI M series boards, i.e. how many samples are collected into the board's FIFO buffer before a DMA transfer takes place. (How many samples, or bytes, are transferred at a time?)
Is this packet size (chunk size) configurable? If so, what is the minimum value it can take?
Thanks,
Maurice
See the post at http://forums.ni.com/ni/board/message?board.id=170&message.id=162527.
-
Conflict resolution for a table with LOB column ...
Hi,
I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
I see, however, that these prebuilt conflict handlers do not support LOB columns. I assume therefore that I need to code a custom handler to do this for me. I'm not sure exactly what my custom handler needs to do, though! Any guidance or links to similar examples would be very much appreciated.
Hi,
I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems
before but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions
which I hope make sense and are relevant:
1. Does an apply process detect update conflicts on LOB columns?
2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns in the table and then a separate one for the LOB columns, OR is it best just to code a single custom handler for ALL columns?
3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR, but how do I compare the new value in the LCR with that in the destination database? I mean, how do I access the current value in the destination database from the custom handler?
4. Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB columns compared to non-LOB columns?
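For reference, this is roughly how I have registered the prebuilt handler before for non-LOB columns (table, column, and resolution-column names here are hypothetical):

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- list only the non-LOB columns, including the resolution column
  cols(1) := 'order_id';
  cols(2) := 'status';
  cols(3) := 'update_ts';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'app.orders',
    method_name       => 'MAXIMUM',
    resolution_column => 'update_ts',
    column_list       => cols);
END;
/
```

The LOB columns would presumably still need a custom handler alongside this, which is the part I am stuck on.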
Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
Thanks again. -
Standard column sizes for DB design
Hi,
I have failed to find a standard size for columns (like First name, Postal Id, Phone no...) for the tables of a new application. My client is US based. Is there any standards document, recommendations, or best practices available to follow?
Thanks and Regards
Hesh.
Edited by: user10734616 on Feb 11, 2009 1:36 AM
user10734616 wrote:
Is there any standards document or recommendations or best practices available to follow?
It's entirely dependent on the data. In the US, a Social Security Number is always 9 characters (11 if you store the hyphens), phone numbers should be 10 to allow for the area code (12 if you include the hyphens), and zip codes should be 9 (10 if you include the hyphen). Names are your best guess as to what seems reasonable, but if you make them varchar2 (as you should) there is no harm in over-specifying (first_name varchar2(50)).
Please be sure you use the correct data type. Even though SSNs and phone numbers are represented with numeric characters, they are not numbers. If you doubt that, see what happens if you put an SSN that begins with the character '0' (zero) into a number field, then try to select on that SSN.
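Putting the sizes above together, the DDL might be sketched like this (the table and column names are illustrative only):

```sql
CREATE TABLE customer (
    first_name VARCHAR2(50),
    last_name  VARCHAR2(50),
    ssn        VARCHAR2(11),  -- 9 digits, 11 with hyphens; stored as text, not NUMBER
    phone      VARCHAR2(12),  -- 10 digits with area code, 12 with hyphens
    zip        VARCHAR2(10)   -- ZIP+4: 9 digits, 10 with the hyphen
);
```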