Lob Chunk size defaulting to 8192

10.2.0.3 Enterprise on Linux x86-64
DB is a warehouse; we truncate, then insert and/or update, 90% of all records daily.
I have a table with almost 8 million records, and I ran the query below to get the max length of the LOB.
select max(dbms_lob.getlength(C500170300)) as T356_C500170300 from T356 where C500170300 is not null;
The max for the LOB was 404 bytes and the chunk size is 8192, and I was thinking that is causing the LOB to have around 50GB of wasted space that is being read during the updates. I tried creating a duplicate table called T356TMP with a LOB chunk size of 512, and then of 1024; both times it changed back to 8192.
I thought it had to be a size that is a multiple or divisor of the tablespace block size.
Based on what is happening, the smallest chunk size I can make is the tablespace block size. Is that a true statement, or am I doing something wrong?
Thanks
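For what it's worth, the LOB's current storage settings can be confirmed from the data dictionary (a sketch; the table and column names are the ones quoted above):

```sql
-- Shows the chunk size Oracle actually used, plus caching and in-row settings.
SELECT table_name, column_name, chunk, cache, in_row, segment_name
FROM   user_lobs
WHERE  table_name  = 'T356'
AND    column_name = 'C500170300';
```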

SaveMeorYouDBA wrote:
Thanks for the replies. I should have included that I have looked at the size of the LOB.
BYTES            MAX Data Length   Chunk Size   Cache   Storage in Row
65,336,770,560   404               8192         NO      DISABLED
Record count: 8,253,158
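Taking those figures at face value, the waste is easy to estimate: with the LOB stored out of row, each value occupies at least one 8K chunk while holding at most 404 bytes of data (a back-of-envelope sketch, ignoring the LOB index and extent overheads):

```sql
-- Upper-bound estimate: (chunk size - max LOB length) per row, over all rows.
SELECT 8253158 * (8192 - 404) / POWER(1024, 3) AS est_wasted_gb
FROM   dual;
-- roughly 60 GB, which is consistent with the ~65 GB segment size above
```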
Per the tool you helped design, Statspack Analyzer:
You have 594,741 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch, and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefinition utility and re-set your PCTFREE parameters to prevent future row chaining.
I didn't help to design Statspack Analyzer. If I had, it wouldn't churn out so much rubbish. Have you seen anyone claim that I helped in the design? If so, can you let me know where that claim was published so I can have it deleted.
Your LOBs are defined as nocache, disable storage in row.
This means that
a) each LOB will take at least one block in the lob segment, irrespective of how small the actual data might be
b) the redo generated by writing one lob will be roughly the same as a full block, even if the data is a single byte.
In passing - if you fetch an "out of row" LOB, the statistics do NOT report a "table fetch continued row". (Update: so if your continued fetches amount to a significant fraction of your accesses by rowid then, based on your other comments, you may have a pctfree issue to think about).
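If it helps, the statistic Jonathan refers to can be watched at session level with something like this (a sketch; it assumes SELECT privilege on the v$ views):

```sql
-- Current session's count of chained/migrated row fetches.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'table fetch continued row';
```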
I have not messed with PCTFREE yet, I assumed I had a bigger issue with how often the CLOB is accessed and that it is causing extra IO to read 7.5K of wasted space for each logical block read.
It's always a little tricky trying to decide what resources are being "wasted" and how to interpret "waste" - but it's certainly a good idea not to mess about with configuration details until you have identified exactly where the problem is.
For about 85% of the rows in this table the LOB length is 384 bytes, so I don't think ENABLE STORAGE IN ROW would be the best fit either.
Reason:
1. The block size is 8K; it would cause chaining, since with a 4K row plus 384 bytes only one complete row would fit in each block. The second row would have about 90% in that block and the rest in another block.
2. The LOB is only accessed during the main update/insert, and in only about 15% of the read SQL.
Where did the 4K come from? Is this the avg_row_len when the LOB is stored out of line? If that's the case then you're really quite unlucky; it's one of those awkward sizes that makes it harder to pick a best block size. If you got it from somewhere else, then what is the avg_row_len currently?
When you query the data, how many rows do you expect to return from a single query ? How many of those rows do you expect to find packed close together ? I won't go into the arithmetic just yet, but if your row length is about 4KB, then you may be better off storing the LOBs in row anyway.
Would it make better sense to create a tablespace for this smaller lob with a block size of 1024?
If it really makes sense to store your LOBs out of row, then you want to use the smallest possible block size (which is 2KB) - which means setting up a db_2k_cache_size and creating a tablespace of 2KB blocks. I would also suggest that you create the LOBs as CACHE rather than NOCACHE to reduce the I/O costs - particularly the redo costs.
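A sketch of that suggested setup, for reference (the tablespace name, datafile path, and sizes here are illustrative assumptions, not taken from the thread):

```sql
-- A 2K buffer cache must exist before a 2K-block tablespace can be used.
ALTER SYSTEM SET db_2k_cache_size = 64M;

-- Hypothetical tablespace and file name.
CREATE TABLESPACE ts_lob2k
  DATAFILE '/u01/oradata/lob2k01.dbf' SIZE 4G
  BLOCKSIZE 2K;

-- Rebuild the LOB in the 2K tablespace, cached, with the minimum chunk.
ALTER TABLE t356 MOVE LOB (c500170300)
  STORE AS (TABLESPACE ts_lob2k CHUNK 2048 CACHE);
```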
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
Edited by: Jonathan Lewis on Oct 4, 2009 7:27 PM

Similar Messages

  • Lob Chunk size defaulting to 8192

    (Duplicate of the question at the top of this page.)

    Hmm, I think you might be on the wrong forum. This is the Berkeley DB Java Edition forum.

  • Large Block Chunk Size for LOB column

    Oracle 10.2.0.4:
    We have a table with 2 LOB columns. The avg BLOB size of one of the columns is 122K and the other column's is 1K, so I am planning to move the column with the big BLOBs to a 32K chunk size. Some of the questions I have are:
    1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or just create a table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
    2. Currently db_cache_size is set to "0", do I need to adjust some parameters for large chunk/block size?
    3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K LOB, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
    [LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
    Below is the output of v$db_cache_advice:
    select
       size_for_estimate          c1,
       buffers_for_estimate       c2,
       estd_physical_read_factor  c3,
       estd_physical_reads        c4
    from
       v$db_cache_advice
    where
       name = 'DEFAULT'
    and
       block_size  = (SELECT value FROM V$PARAMETER
                       WHERE name = 'db_block_size')
    and
       advice_status = 'ON';
    C1 (size, MB)   C2 (buffers)   C3 (phys read factor)   C4 (est. phys reads)
    2976     368094     1.2674     150044215     
    5952     736188     1.2187     144285802     
    8928     1104282     1.1708     138613622     
    11904     1472376     1.1299     133765577     
    14880     1840470     1.1055     130874818     
    17856     2208564     1.0727     126997426     
    20832     2576658     1.0443     123639740     
    23808     2944752     1.0293     121862048     
    26784     3312846     1.0152     120188605     
    29760     3680940     1.0007     118468561     
    29840     3690835     1     118389208     
    32736     4049034     0.9757     115507989     
    35712     4417128     0.93     110102568     
    38688     4785222     0.9062     107284008     
    41664     5153316     0.8956     106034369     
    44640     5521410     0.89     105369366     
    47616     5889504     0.8857     104854255     
    50592     6257598     0.8806     104258584     
    53568     6625692     0.8717     103198830     
    56544     6993786     0.8545     101157883     
    59520     7361880     0.8293     98180125     

    With only a 1K LOB you are going to want to use an 8K chunk size; as per the reference in the thread above to the Oracle document on LOBs, the chunk size is the allocation unit.
    Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
    The LOB data type is not known for being space efficient.
    There are major changes available in 11g, with SecureFiles being available to replace traditional LOBs, now called BasicFiles. The differences appear to be mostly in how the LOB data and segments are managed by Oracle.
    HTH -- Mark D Powell --
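    A sketch of the first approach discussed above (the tablespace, datafile path, and object names are invented for illustration):

    ```sql
    -- A 32K buffer cache must exist before a 32K-block tablespace can be used.
    ALTER SYSTEM SET db_32k_cache_size = 256M;

    CREATE TABLESPACE ts_lob32k
      DATAFILE '/u01/oradata/lob32k01.dbf' SIZE 10G
      BLOCKSIZE 32K;

    -- Move only the large-BLOB column; the 1K column can stay at an 8K chunk.
    ALTER TABLE blob_tab MOVE LOB (big_blob)
      STORE AS (TABLESPACE ts_lob32k CHUNK 32768);
    ```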

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
    in our project we have planned some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file is 2 to 3KB) through the automatic import process.
    Below is the current setting in production:
    Chunk Size=2000
    No. Of Chunks Processed In Parallel=40
    file aggregation-5
    Records Per Minute processed-37
    and we made the below setting in the Development system:
    Chunk Size=70000
    No. Of Chunks Processed In Parallel=40
    file aggregation-25
    Records Per Minute processed-111
    After making the above changes the import process improved, but we want to get an expert opinion before making these changes in production, because there is a huge difference between what is in prod and the change we made in Dev.
    thanks in advance,
    Regards
    Ajay

    Hi Ajay,
    The SAP default values are as below
    Chunk Size=50000
    No of Chunks processed in parallel = 5
    File aggregation: Depends largely on the data. If you have one or two records being sent at a time then it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per minute Processed - Same as above
    Regards,
    Vag Vignesh Shenoy

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    Hi, we've just got a 6140 and I did some raw write and read tests -> very nice box!
    current config: 16 fc-disks (300gbyte / 2gbit/sec): 1x hotspare, 15x raid5 (512kibyte chunk)
    3 logical volumes: vol1: 1.7tbyte, vol2: 1.7tbyte, vol3 rest (about 450gbyte)
    on 2x t2000 coolthread server (32gibyte mem each)
    it seems the max write perf (from my tests) is:
    512kibyte chunk / 1mibyte blocksize / 32 threads
    -> 230mibyte/sec (write) transfer rate
    my tests:
    * chunk size: 16ki / 512ki
    * threads: 1/2/4/8/16/32
    * blocksize (kibyte): .5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Did anyone out there run some other tests with other chunk sizes?
    How about tuning the Veritas FS and Sun Cluster?
    veritas fs: i've read so far about write_pref_io, write_nstream...
    i guess, setting them to: write_pref_io=1048576, write_nstream=32 would be the best in this scenario, right?

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • JMS Chunk Size

              I was going through the JMS Performance Guide, Section 4.2.3 about Tuning Chunk
              Size. The section provides a guideline for estimating the optimal chunk size:
              4096 * n -16. The 16 bytes are for internal headers. Base on the text, the chunk
              size should be 4080 (also the default). My question is that wouldn't the suggested
              calculation 4096 * n - 16 double counting the 16 bytes? Shouldn't 4080 * n + 16
              be the correct calculation for the final size and the chunk size should simply
              be 4080 * n?
              Thanks for your help.
              Gabe
              

    Hi Gabe,
              The internal header is one per chunk, so I think
              (4096 * n) - 16 is correct.
              Tom
              Gabriel Leung wrote:
              > I was going through the JMS Performance Guide, Section 4.2.3 about Tuning Chunk
              > Size. The section provides a guideline for estimating the optimal chunk size:
              > 4096 * n -16. The 16 bytes are for internal headers. Base on the text, the chunk
              > size should be 4080 (also the default). My question is that wouldn't the suggested
              > calculation 4096 * n - 16 double counting the 16 bytes? Shouldn't 4080 * n + 16
              > be the correct calculation for the final size and the chunk size should simply
              > be 4080 * n?
              >
              > Thanks for your help.
              >
              > Gabe
              >
              

  • Adjusting chunk size for virtual harddisks

    My data partition with VHD images is constantly running out of space, so I decided to repartition the drive in the next few days. While doing that, I will redo most or all image files to sparsify the data inside the guests. There is one issue:
    To reduce the amount of space needed on the host I want to reduce the "allocation chunk size" for (dynamically expanding/sparse) virtual hard disks. It's my understanding that when a guest writes to a filesystem block, the host actually allocates more than the "guest block size". For example, if a guest ext3 filesystem has a 4K block size and it writes to block #123, the host will not just allocate space for this single 4K block at offset #123; it may allocate a much larger chunk in case the guest also attempts to write to #124. And so on.
    A few months ago I read somewhere that this "allocation chunk size" can be adjusted. Either when the VHD image is created, or globally for all images. It was done with some powershell cmd AFAIK.
    For my purpose I want to reduce it to a minimum, even if it comes with some performance cost.
    How can this property be adjusted?
    Thanks.
    Olaf

    Hi Olaf,
    It seems that it is beyond my ability to explain this.
    But I have read this article, which mentions the relationship between VHDs and physical disks:
    http://support.microsoft.com/kb/2515143
    If I understand correctly, physical and virtual disks only have two "sector" sizes (512 bytes and 4K; VHD only supports 512). As for the "allocation chunk size" that you mentioned, I think it is determined by the file system (such as NTFS or ext3).
    Maybe the PowerShell cmdlet you mentioned is "Set-VHD"; it has a parameter "PhysicalSectorSizeType" with only two values, 512 and 4096.
    For details please refer to the following link:
    http://technet.microsoft.com/en-us/library/hh848561.aspx
    Hope this helps.
    Best Regards
    Elton Ji

  • Difference between LOB segement size and DBMS_LOB.GETLENGTH

    SQL> select bytes/1024/1024 from dba_segments where segment_name='SYS_LOB0000130003C00019$$';
    BYTES/1024/1024
    14772
    SQL> select TABLE_NAME,COLUMN_NAME from dba_lobs where SEGMENT_NAME='SYS_LOB0000130003C00019$$';
    TABLE_NAME
    COLUMN_NAME
    TBL
    C1
    SQL> select sum(DBMS_LOB.GETLENGTH(C1))/1024/1024 from TBL;
    SUM(DBMS_LOB.GETLENGTH(C1)
    30.0376911
    Why is there a discrepancy between the two sizes (14GB and 30MB)?
    Here are the storage characteristics from the TBL ddl for the C1 LOB column:
    TABLESPACE TBLSPC ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
    NOCACHE LOGGING
    STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))

    user10508599 wrote:
    why is there a discrepancy between the two sizes (14GB and 30MB).
    Here are the storage characteristics from the TBL ddl for the C1 LOB column:
    TABLESPACE TBLSPC ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
    NOCACHE LOGGING
    STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
    According to your storage parameters it only requires 14772 rows that have LOB values stored out of row, i.e. larger than approx. 4K. For each LOB segment at least 1M will be allocated, so that might be a reasonable explanation.
    Not sure where I had my mind when writing this, but this is certainly wrong (and no one complaining...).
    You could have a lot of deleted rows in the table, or LOBs that were larger but have shrunk now, or you could be hitting a storage allocation bug.
    What is the block size of the tablespace for the LOBs?
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Jun 16, 2009 7:20 AM
    Corrected plain wrong statement
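    One way to see how much of that LOB segment is actually in use versus formatted but free (a sketch; it assumes an ASSM tablespace, and reuses the segment name from the thread):

    ```sql
    SET SERVEROUTPUT ON
    DECLARE
      l_unf   NUMBER; l_unf_b  NUMBER;
      l_fs1   NUMBER; l_fs1_b  NUMBER;
      l_fs2   NUMBER; l_fs2_b  NUMBER;
      l_fs3   NUMBER; l_fs3_b  NUMBER;
      l_fs4   NUMBER; l_fs4_b  NUMBER;
      l_full  NUMBER; l_full_b NUMBER;
    BEGIN
      -- Reports block counts by fullness for an ASSM-managed segment.
      DBMS_SPACE.SPACE_USAGE(
        segment_owner      => USER,
        segment_name       => 'SYS_LOB0000130003C00019$$',
        segment_type       => 'LOB',
        unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
        fs1_blocks => l_fs1, fs1_bytes => l_fs1_b,
        fs2_blocks => l_fs2, fs2_bytes => l_fs2_b,
        fs3_blocks => l_fs3, fs3_bytes => l_fs3_b,
        fs4_blocks => l_fs4, fs4_bytes => l_fs4_b,
        full_blocks => l_full, full_bytes => l_full_b);
      DBMS_OUTPUT.PUT_LINE('full blocks: '        || l_full);
      DBMS_OUTPUT.PUT_LINE('unformatted blocks: ' || l_unf);
    END;
    /
    ```

    A large gap between the "full" block total and the SUM(DBMS_LOB.GETLENGTH(...)) figure would point at deleted LOBs or retained old versions rather than live data.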

  • How do I set the Page size default in Adobe Reader using Windows 8?

    I need to set the page size every time I go to print from Adobe Reader in Windows 8. Is there a way to set the page size I want, letter (8.5 by 11), as a default, so I do not have to do this every time I print something?

    I can't help but wonder why Adobe doesn't respond to this - at this point there are 464 views of this item and still no answer. I have the same problem, and it's a big irritation to have to set the duplex mode every time I want to print!

  • First time user questions (managing library, file size defaults, cropping,)

    I'm on my first Mac, which I am enjoying, and am also using iPhoto '08 for the first time, which is a great tool. It has really increased my efficiency in editing a lot of photos at one time. However, the ease of use makes a couple of custom things I want to do difficult. I'd appreciate any feedback:
    1) I often want to get at my files for upload or transfer to another machine. When I access Photos with my Finder I can only see "Iphoto Library" (or something like that) which does not show the individual files. Very annoying. I have found that i can browse to one of the menus and select "Open Library" and then I can see all the files.
    How can I make it default to this expanded setting? When I am uploading pictures via a web application, for instance, the file open tool does not usually give me the option to Open the library. By Default, I would like the library to always be expanded so I do not have to run iPhoto or select "open" to view the files.
    Basically, I just want easy manual control of my files in Finder and other applications.
    2) Where do I set the jpg size of an edited file? My camera will output 10MB files and after a simple straighten or crop, iPhoto will save them to 1MB files.
    Ignoring the debate on file size, is there a way to control the jpg compression so I can stay as close to what came out of the camera as possible?
    3) I crop all my photos to 5x7. If I do that once, it comes up the next time. However, once I straighten the photo and then choose crop, it always comes up with some other odd size by default (the largest rectangle that can fit in the new photo's size).
    While I know this may be useful for some people, it is time consuming when going through hundreds of photos to constantly have to choose "5x7" from the drop down list. Is there a way to make this the default?
    I'm sure I'll have some more questions, but thus far, I've been real happy with iPhoto.
    4) The next task will be sharing this Mac Pictures folder on my Wireless network so my XP PC can access it. I'm open to any tips on that one as well....
    Thanks!

    toddjb
    Welcome to the Apple Discussions.
    There are three ways (at least) to get files from the iPhoto Window.
    1. *Drag and Drop*: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. *File -> Export*: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. *Show File*: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    To upload to MySpace or any site that does not have an iPhoto Export Plug-in the recommended way is to Select the Pic in the iPhoto Window and go File -> Export and export the pic to the desktop, then upload from there. After the upload you can trash the pic on the desktop. It's only a copy and your original is safe in iPhoto.
    This is also true for emailing with Web-based services.
    If you use Apple's Mail, Entourage, AOL or Eudora you can email from within iPhoto.
    The format of the iPhoto library is this way because many users were inadvertently corrupting their library by browsing through it with other software or making changes in it themselves. If you're willing to risk database corruption, you can restore the older functionality simply by right clicking on the iPhoto Library and choosing 'Show Package Contents'. Then simply make an alias to the folders you require and put that alias on the desktop or where ever you want it. Be aware though, that this is a hack and not supported by Apple.
    Basically, I just want easy manual control of my files in Finder and other applications.
    What's above is what's on offer. Remember, iPhoto is NOT a file organiser, it's a photo organiser. If you want to organise files, then use a file organiser.
    Where do I set the jpg size of an edited file
    You don't. Though losing 9MB off a 10MB file is excessive. Where are you getting these file sizes?
    I crop all my photos to 5x7. If I do that once, it comes up the next time. However, once I straighten the photo and then choose crop, it always comes up with some other odd size by default (the largest rectangle that can fit in the new photo's size).
    Straightening also involves cropping. Best to straighten first, then crop.
    The next task will be sharing this Mac Pictures folder on my Wireless network so my XP PC can access it.
    If you use the hack detailed above be very careful, it's easy to corrupt the iPhoto Library, and making changes to the iPhoto Package File via the Finder or another app is the most popular way to go about it.
    Regards
    TD

  • Metric Paper Size Defaults

    I have two OfficeJet Pro 8500s and an OfficeJet 7000 that I purchased in the UK (two of them are less than 6 months old). I am frustrated that I cannot get the printers to default to metric paper sizes. I can create metric shortcuts, but I cannot remove the American 8.5" x 11" defaults. Every time the printer starts, it defaults to the USA size. I have spoken with HP (India), who didn't have a clue (of course, after making me reload the software). I have the latest Microsoft Version 7 software, but this problem existed on Vista too. Does anyone know how to get these UK products to recognise that they are European and not American?

Paper size can be modified in print settings:
    Change print settings
You can change print settings (such as paper size or type) from an application or the printer driver. Changes made from an application take precedence over changes made from the printer driver. However, after the application is closed, the settings return to the defaults configured in the driver.
NOTE: To set print settings for all print jobs, make the changes in the printer driver. For more information about the features of the Windows printer driver, see the online help for the driver. For more information about printing from a specific application, see the documentation that came with the application.
Change settings from an application for current jobs (Windows)
To change the settings:
1. Open the document that you want to print.
2. On the File menu, click Print, and then click Setup, Properties, or Preferences. (Specific options may vary depending on the application that you are using.)
3. Select the appropriate printing shortcut, and then click OK, Print, or a similar command.
Change default settings for all future jobs (Windows)
To change the settings:
1. Click Start, point to Settings, and then click Printers or Printers and Faxes.
- Or -
Click Start, click Control Panel, and then double-click Printers.
NOTE: If prompted, enter your administrator password.
2. Right-click the printer icon, and then click Properties, General Tab, or Printing Preferences.
3. Change the settings that you want, and then click OK.
    Let me know if that helped or not.
    007OHMSS
    I was a support engineer for HP.
    If the advice resolved the situation, please mark it as a solution. Thank you.

  • LOB segment size is 2 times bigger than the real data

    here's an interesting test:
    1. I created a tablespace called "smallblock" with 2K blocksize
    2. I created a table with a CLOB type field and specified the smallblock tablespace as a storage for the LOB segment:
    SCOTT@andrkydb> create table t1 (i int, b clob) lob (b) store as
    t1_lob (chunk 2K disable storage in row tablespace smallblock);
    3. I insert data into the table, using a bit less than 2K of data for the clob type column:
SCOTT@andrkydb> begin
  for i in 1..1000 loop
    insert into t1 values (mod(i,5), rpad('*',2000,'*'));
  end loop;
end;
/
    4. Now I can see that I have an average of 2000 bytes for each lob item:
    SCOTT@andrkydb> select avg(dbms_lob.getlength(b)) from t1;
    AVG(DBMS_LOB.GETLENGTH(B))
    2000
    and that all together they take up:
    SCOTT@andrkydb> select sum(dbms_lob.getlength(b)) from t1;
    SUM(DBMS_LOB.GETLENGTH(B))
    2000000
But when I take a look at how much space the LOB segment is actually taking, I get a result that is a total mystery to me:
    SCOTT@andrkydb> select bytes from dba_segments where segment_name = 'T1_LOB';
    BYTES
    5242880
What am I missing? Why is the LOB segment ~2 times bigger than the data requires?
    I am on 10.2.0.3 EE, Solaris 5.10 sparc 64bit.
    Message was edited by:
    Andrei Kübar
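The arithmetic behind the observed segment size can be sketched roughly. An out-of-line LOB is stored in whole chunks, and each block in a chunk loses some bytes to the block header, so a 2,000-byte value no longer fits in a single 2K chunk and spills into a second one. A minimal model (the 1,800-byte usable-space figure is an assumption, and the function name is illustrative):

```python
import math

def lob_segment_min_bytes(n_rows, avg_lob_bytes, chunk_bytes, usable_per_chunk):
    # An out-of-line LOB occupies whole chunks: a chunk that is only
    # partly filled still consumes chunk_bytes on disk.
    chunks_per_lob = math.ceil(avg_lob_bytes / usable_per_chunk)
    return n_rows * chunks_per_lob * chunk_bytes

# Figures from the test above: 1000 rows of 2000-byte CLOBs, 2K chunks.
# With ~1800 usable bytes per 2K chunk, every 2000-byte LOB needs 2 chunks.
print(lob_segment_min_bytes(1000, 2000, 2048, 1800))  # 4096000
```

That ~4MB minimum, plus extent allocation and segment overhead, is consistent with the 5,242,880 bytes reported by dba_segments.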

thanks for the link, it is good to know such a thing is possible, although I don't really see how it can help me...
    But you know, you were right regarding the smaller data amounts. I have tested with 1800 bytes of data and in this case it does fit just right.
But this means that there are 248 bytes wasted (from my point of view as a developer) per block! If there is such an overhead, then I must be able to estimate it when designing the data structures, and I don't see a single word anywhere in the docs about such a thing.
Moreover, if you use the NCLOB type, then only 990 bytes fit into a single 2K chunk. So the overhead might become really huge when you get up to gigabyte amounts...
I have a LOB segment for an NCLOB field in a production database which is 5GB large and contains only 2.2GB of real data. There are no "deleted" rows in it; I know because I have rebuilt it. So this looks like a total waste of disk space... I must say, I'm quite disappointed with this.
    - Andrei
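The 5GB-vs-2.2GB gap can be largely accounted for if dbms_lob.getlength is returning characters rather than bytes for the NCLOB: NCLOBs are stored in a fixed-width two-byte national character set, so the on-disk size roughly doubles before any chunk overhead is added. A back-of-envelope check under that assumption (the 990-characters-per-2K-chunk figure is taken from the post above):

```python
# Rough accounting for a 5 GB NCLOB segment holding "2.2 GB" of data,
# assuming getlength() counts characters and storage is 2 bytes/character.
chars = 2.2e9
bytes_on_disk = chars * 2                 # fixed-width 2-byte charset
chunk_bytes = 2048
usable_bytes = 990 * 2                    # ~68 bytes of per-chunk overhead
with_overhead = bytes_on_disk * chunk_bytes / usable_bytes
print(round(with_overhead / 1e9, 2))      # ~4.55, close to the observed 5 GB
```

The remaining difference would come from extent allocation and segment-level overhead, so the segment is not necessarily wasting half its space.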

  • Optimizing RAID: stripe/chunk size

    I'm trying to figure out how to optimize the RAID chunk/stripe size for our Oracle 8i server. For example, let's say that we have:
    - 4 drives in the RAID stripe set
    - 16 KB Oracle block size
- DB_FILE_MULTIBLOCK_READ_COUNT=16
    Now the big question is what the optimal setting is for the chunk/stripe size. As far as I can see, we would have two alternatives:
    - case 1: stripe size = 256 KB
    - case 2: stripe size = 64 KB
    In case 1, all i/o would be spread out over all 4 drives. In case 2, we'd be able to isolate a lot of i/o to separate drives, so that each drive serves different i/o calls. My guess is that case 1 would work better where there's a lot of random disk i/o.
    Does anyone have any thoughts or experience to share on this topic?
    Thanks,
    Alex Algard
    WhitePages.com
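One way to make the trade-off concrete is to count how many drives a single 256 KB multiblock read (16 blocks x 16 KB) touches under each stripe size. A rough sketch, assuming "stripe size" means the per-disk chunk of a round-robin RAID-0 layout and the read is stripe-aligned (the function name is illustrative):

```python
def drives_touched(read_bytes, stripe_kb, n_drives, start_offset_kb=0):
    # Count the distinct drives hit by one contiguous read on a simple
    # round-robin stripe layout with the given per-disk chunk size.
    read_kb = read_bytes // 1024
    first_unit = start_offset_kb // stripe_kb
    last_unit = (start_offset_kb + read_kb - 1) // stripe_kb
    return min(n_drives, last_unit - first_unit + 1)

read = 16 * 16 * 1024  # one multiblock read: 256 KB
print(drives_touched(read, 256, 4))  # 1 -> each read stays on one drive
print(drives_touched(read, 64, 4))   # 4 -> each read spans all four drives
```

Under this model, 256 KB chunks let concurrent random reads be served by different drives in parallel, while 64 KB chunks make every read occupy all four spindles, which favors a single large sequential stream instead.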

It does not matter. Do not mix up soft-RAID and hard-RAID. One OS I/O operation can read from one disk or from several disks. Do not forget about track-to-track seek time.
    Practice is the measure of truth :)
    For example, http://www.fcenter.ru/fc-articles/Technical/20000918/hi-end.gif

  • Chatter message size default problem

I generally have my Chatter initial message intake set to 4000 bytes. When traveling overseas I reduce it to 275 (to reduce charges). Some messages come in at 275, others at my previous 4000 default. How can I get them all at 275? Thx.
    Post relates to: Treo 680 (Cingular)

Thanks for any help you can provide. In Chatter under Box -> Edit Mailbox -> Deliver, there is a place where you can change the amount of a message that downloads initially. The range you can set is from 250 bytes to 48,000 bytes. This is the amount that downloads initially. If the entire message is bigger than the size you have selected, you get a red "More" box at the lower left of the message. Clicking this will download the rest of the message.
I usually have this value set at 4000 bytes, which is bigger than most messages, at least the personal ones. When I go overseas and have to pay for the data I download (I have unlimited data domestically), I set this value at 275 so that I can "peek" at a message and decide whether to download it entirely (and pay for it) there, delete it as trash without paying for the entire thing, or wait until I get home to access it for free. (I could set it at the 250 minimum, but when I initially had problems, I thought I would avoid the absolute minimum to see if that solved the problem. It didn't, but I stuck with 275.)
What happens at the 275 level is that some messages truncate at 275 bytes as they should, others at 4000 bytes. None are in between or greater than 4000. All initial downloads are either 275 or 4000 bytes. So the question (and problem) is why don't all messages initially download 275 bytes, and how can I make them do that as they should? Is it a coincidence that the level that downloads when the 275 threshold is exceeded is always 4000 bytes - the same value that I have been using as my default (vs. downloading 5000 or 8000 or 6394 bytes initially, for example)? Hard to believe it is a coincidence.
I did a little more analysis. Of the 43 messages downloaded since I changed the value to 275 bytes, 21 initially downloaded 275 bytes and 22 initially downloaded 4000. (There were also a few with a total size of less than 275 bytes, which downloaded in their entirety.)
I had noticed that it is the larger messages that tend to download 4000 bytes instead of stopping at 275. This led me to hypothesize that perhaps all messages of total size between 275 and 3999 bytes initially download 275 bytes, and all those of 4000 bytes or greater initially download 4000 bytes. The hypothesis holds with three exceptions: there were 3 messages of greater than 4000 bytes (4782 bytes, 12,905 bytes, and 33,448 bytes) where only 275 bytes were initially downloaded. Obviously, there couldn't be any messages the other way (i.e., less than 4000 bytes total that downloaded 4000 bytes initially). Interesting, but could be a fluke.
    So that's what I know and what I am trying to do.  Thanks again.
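The observed pattern can be stated as a small rule. This is only a sketch of the hypothesis above (it deliberately leaves out the three large-message exceptions, and the function name is illustrative):

```python
def initial_download(total_bytes, new_limit=275, stale_limit=4000):
    # Hypothesis: messages below the old 4000-byte setting honor the new
    # 275-byte limit, while larger ones still truncate at the stale value,
    # as if the old limit were cached somewhere for big messages.
    if total_bytes <= new_limit:
        return total_bytes          # small messages arrive whole
    if total_bytes < stale_limit:
        return new_limit            # truncated at the new 275-byte limit
    return stale_limit              # big messages ignore the new limit

print(initial_download(200))    # 200  (entire message)
print(initial_download(1000))   # 275
print(initial_download(12905))  # 4000 (though three such messages got 275)
```

If the rule is right, the stale 4000-byte value is being applied somewhere upstream of the per-mailbox setting for messages at or above that size.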

  • Set finder window size default

How can I set the default Finder window size? I've got a new iMac and the Finder window always opens small. I want it a lot bigger. Interestingly, on my wife's old iMac and my MBA (both running OS X 10.8.3, same as the new iMac) the Finder windows open to a much bigger size.

It seems Finder opens to Applications. Sometimes the folder is the same size as the last one I closed. Other times it is tiny. I can't seem to spy a pattern.
