Lob Chunk size defaulting to 8192

10.2.0.3 Enterprise on Linux x86-64
DB is a warehouse and we truncate and insert and/or update 90% of all records daily.
I have a table with almost 8 million records, and I ran the query below to get the max length for the LOB.
select max(dbms_lob.getlength(C500170300)) as T356_C500170300 from T356 where C500170300 is not null;
The max for the LOB was 404 bytes and the chunk size is 8192, and I was thinking that is causing the LOB to have around 50GB of wasted space that is being read during the updates. I tried creating a duplicate table called T356TMP with a LOB chunk size of 512, and then 1024; both times it changed back to 8192.
I thought it had to be a size that was a multiple or divisor of the tablespace block size.
Based on what is happening, the smallest chunk size I can make is the tablespace block size. Is this a true statement, or am I doing something wrong?
Thanks

Hmm, I think you might be on the wrong forum. This is the Berkeley DB Java Edition forum.
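For reference, the LOB's current storage attributes can be checked in the data dictionary. A minimal diagnostic sketch, using the table name from the post (run as the owning user):
select table_name, column_name, chunk, cache, in_row
from user_lobs
where table_name = 'T356';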

Similar Messages

  • Lob Chunk size defaulting to 8192


    SaveMeorYouDBA wrote:
    Thanks for the replies. I should have included that I have looked at the size of the LOB.
    BYTES             MAX DATA LENGTH   CHUNK SIZE   CACHE   STORAGE IN ROW
    65,336,770,560    404               8192         NO      DISABLED
    Record count: 8,253,158
    Per the tool you helped design, Statspack Analyzer:
    You have 594,741 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefinition utility and reset your PCTFREE parameters to prevent future row chaining.
    I didn't help to design Statspack Analyzer - if I had, it wouldn't churn out so much rubbish. Have you seen anyone claim that I helped in the design? If so, can you let me know where that claim was published so I can have it deleted?
    Your LOBs are defined as nocache, disable storage in row.
    This means that
    a) each LOB will take at least one block in the lob segment, irrespective of how small the actual data might be
    b) the redo generated by writing one lob will be roughly the same as a full block, even if the data is a single byte.
    In passing - if you fetch an "out of row" LOB, the statistics do NOT report a "table fetch continued row". (Update: so if your continued fetches amount to a significant fraction of your accesses by rowid then, based on your other comments, you may have a pctfree issue to think about).
    I have not messed with PCTFREE yet; I assumed I had a bigger issue with how often the CLOB is accessed, and that it is causing extra I/O to read 7.5K of wasted space for each logical block read.
    It's always a little tricky trying to decide what resources are being "wasted" and how to interpret "waste" - but it's certainly a good idea not to mess about with configuration details until you have identified exactly where the problem is.
    About 85% of the row lengths for this table are 384 bytes, so I don't think ENABLE STORAGE IN ROW would be the best fit either.
    Reasons:
    1. The block size is 8K; with rows of 4K plus 384 bytes, only one complete row would fit in each block - the second row would have about 90% in that block and the rest chained into another block.
    2. The LOB is only accessed during the main update/insert, and only about 15% of the time during read SQL.
    Where did the 4K come from? Is this the avg_row_len when the LOB is stored out of line? If that's the case, then you're really quite unlucky; it's one of those awkward sizes that makes it harder to pick a best block size. If you got it from somewhere else, then what is the avg_row_len currently?
    When you query the data, how many rows do you expect to return from a single query ? How many of those rows do you expect to find packed close together ? I won't go into the arithmetic just yet, but if your row length is about 4KB, then you may be better off storing the LOBs in row anyway.
    Would it make better sense to create a tablespace for this smaller LOB with a block size of 1024?
    If it really makes sense to store your LOBs out of row, then you want to use the smallest possible block size (which is 2KB) - which means setting up a db_2k_cache_size and creating a tablespace of 2KB blocks. I would also suggest that you create the LOBs as CACHE rather than NOCACHE to reduce the I/O costs - particularly the redo costs. (A sketch of this setup follows at the end of this post.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
    Edited by: Jonathan Lewis on Oct 4, 2009 7:27 PM
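    A minimal sketch of the setup described above, assuming out-of-row storage really is the right choice; every object name, size and file path below is illustrative rather than taken from the thread:
    alter system set db_2k_cache_size = 64M;  -- a 2KB-block tablespace needs its own buffer cache
    create tablespace lob_2k
      datafile '/u01/oradata/lob_2k_01.dbf' size 10G
      blocksize 2K;
    create table t356tmp (
      c500170300 clob
    )
    lob (c500170300) store as (
      tablespace lob_2k
      chunk 2048               -- cannot be smaller than the block size of the LOB tablespace
      disable storage in row
      cache                    -- CACHE rather than NOCACHE, per the advice above
    );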

  • Large Block Chunk Size for LOB column

    Oracle 10.2.0.4:
    We have a table with 2 LOB columns. The average BLOB size of one of the columns is 122K and of the other is 1K, so I am planning to move the column with the big BLOBs to a 32K chunk size. Some of the questions I have are:
    1. Do I need to create a new tablespace with a 32K block size and then create the table with a chunk size of 32K for that LOB column, or can I just create the table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
    2. Currently db_cache_size is set to "0"; do I need to adjust some parameters for the large chunk/block size?
    3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K BLOB, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
    [LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
    Below is the output of v$db_cache_advice:
    select
       size_for_estimate          c1,
       buffers_for_estimate       c2,
       estd_physical_read_factor  c3,
       estd_physical_reads        c4
    from
       v$db_cache_advice
    where
       name = 'DEFAULT'
    and
       block_size  = (SELECT value FROM V$PARAMETER
                       WHERE name = 'db_block_size')
    and
       advice_status = 'ON';
    C1     C2     C3     C4     
    2976     368094     1.2674     150044215     
    5952     736188     1.2187     144285802     
    8928     1104282     1.1708     138613622     
    11904     1472376     1.1299     133765577     
    14880     1840470     1.1055     130874818     
    17856     2208564     1.0727     126997426     
    20832     2576658     1.0443     123639740     
    23808     2944752     1.0293     121862048     
    26784     3312846     1.0152     120188605     
    29760     3680940     1.0007     118468561     
    29840     3690835     1     118389208     
    32736     4049034     0.9757     115507989     
    35712     4417128     0.93     110102568     
    38688     4785222     0.9062     107284008     
    41664     5153316     0.8956     106034369     
    44640     5521410     0.89     105369366     
    47616     5889504     0.8857     104854255     
    50592     6257598     0.8806     104258584     
    53568     6625692     0.8717     103198830     
    56544     6993786     0.8545     101157883     
    59520     7361880     0.8293     98180125     

    With only a 1K LOB you are going to want to use an 8K chunk size; per the reference in the thread above to the Oracle document on LOBs, the chunk size is the allocation unit.
    Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
    The LOB data type is not known for being space efficient.
    There are major changes available in 11g, with SecureFiles available to replace traditional LOBs (now called BasicFiles). The differences appear to be mostly in how the LOB data and segments are managed by Oracle.
    HTH -- Mark D Powell --
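    A hedged sketch of option 1 (a separate 32K-block tablespace for the large column only); all names, sizes and paths are illustrative, and db_32k_cache_size must be set because 32K blocks cannot use the default buffer cache:
    alter system set db_32k_cache_size = 256M;
    create tablespace lob_32k
      datafile '/u01/oradata/lob_32k_01.dbf' size 20G
      blocksize 32K;
    create table docs (        -- hypothetical table standing in for the poster's
      id        number,
      big_lob   blob,          -- avg ~122K: fewer, larger chunks
      small_lob blob           -- avg ~1K: stays at an 8K chunk in the default tablespace
    )
    lob (big_lob) store as (tablespace lob_32k chunk 32768)
    lob (small_lob) store as (chunk 8192);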

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    Hi, we've just got a 6140 and I did some raw write and read tests -> very nice box!
    current config: 16 fc-disks (300gbyte / 2gbit/sec): 1x hotspare, 15x raid5 (512kibyte chunk)
    3 logical volumes: vol1: 1.7tbyte, vol2: 1.7tbyte, vol3 rest (about 450gbyte)
    on 2x t2000 coolthread server (32gibyte mem each)
    it seems the max write perf (from my tests) is:
    512kibyte chunk / 1mibyte blocksize / 32 threads
    -> 230mibyte/sec (write) transfer rate
    my tests:
    * chunk size: 16ki / 512ki
    * threads: 1/2/4/8/16/32
    * blocksize (kibyte): .5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Has anyone out there done other tests with different chunk sizes?
    How about tuning Veritas FS and Sun Cluster?
    Veritas FS: I've read so far about write_pref_io, write_nstream...
    I guess setting them to write_pref_io=1048576, write_nstream=32 would be best in this scenario, right?

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
    In our project we have planned some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file is about 2 to 3KB) through the automatic import process.
    Below are the current settings in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File aggregation = 5
    Records per minute processed = 37
    And we made the settings below in the development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File aggregation = 25
    Records per minute processed = 111
    After making the above changes the import process improved, but we want an expert opinion before making these changes in production, because there is a huge difference between what is in prod and what we changed in dev.
    Thanks in advance,
    Regards
    Ajay

    Hi Ajay,
    The SAP default values are as below:
    Chunk Size = 50000
    No. of Chunks Processed in Parallel = 5
    File aggregation: depends largely on the data; if you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per minute processed: same as above.
    Regards,
    Vag Vignesh Shenoy

  • Adjusting chunk size for virtual harddisks

    My data partition with VHD images is constantly running out of space, so I decided to repartition the drive in the next few days. While doing that, I will redo most or all image files to sparsify the data inside the guest. There is one issue:
    To reduce the amount of space needed on the host, I want to reduce the "allocation chunk size" for (dynamically expanding/sparse) virtual hard disks. It's my understanding that if a guest writes to a filesystem block, the host actually allocates more than the "guest block size". For example, if a guest ext3 filesystem has a 4K block size and it writes to block #123, the host will not just allocate space for this single 4K block at offset #123; it may allocate a much larger chunk in case the guest also attempts to write to #124, and so on.
    A few months ago I read somewhere that this "allocation chunk size" can be adjusted, either when the VHD image is created or globally for all images. It was done with some PowerShell cmd AFAIK.
    For my purpose I want to reduce it to a minimum, even if it comes with some performance cost.
    How can this property be adjusted?
    Thanks.
    Olaf

    Hi Olaf,
    It seems that this is beyond my ability to explain.
    But I have read this article, which mentions the relationship between a VHD and the physical disk:
    http://support.microsoft.com/kb/2515143
    If I understand correctly, physical and virtual disks only have two "sector" sizes (512 bytes and 4K; VHD only supports 512). As for the "allocation chunk size" that you mentioned, I think it is determined by the file system (such as NTFS or ext3).
    Maybe the PowerShell cmd you mentioned is "Set-VHD"; it has a parameter "PhysicalSectorSizeBytes" with only two values, 512 and 4096.
    For details please refer to the following link:
    http://technet.microsoft.com/en-us/library/hh848561.aspx
    Hope this helps. Best Regards
    Elton Ji

  • JMS Chunk Size

    I was going through the JMS Performance Guide, Section 4.2.3, about tuning chunk size. The section provides a guideline for estimating the optimal chunk size: 4096 * n - 16. The 16 bytes are for internal headers. Based on the text, the chunk size should be 4080 (also the default). My question is: wouldn't the suggested calculation 4096 * n - 16 double-count the 16 bytes? Shouldn't 4080 * n + 16 be the correct calculation for the final size, and the chunk size simply be 4080 * n?
    Thanks for your help.
    Gabe

    Hi Gabe,
    The internal header is one per chunk, so I think (4096 * n) - 16 is correct.
    Tom
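    A quick, purely illustrative check of the arithmetic in any Oracle session: since the 16-byte internal header is added once per chunk, a chunk size of 4096 * n - 16 plus its single header fills exactly n * 4096 bytes (n = 1 gives the 4080-byte default):
    select n,
           4096 * n - 16 as chunk_size,
           (4096 * n - 16) + 16 as chunk_plus_header  -- exactly n * 4096
    from (select level as n from dual connect by level <= 3);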

  • Difference between LOB segement size and DBMS_LOB.GETLENGTH

    SQL> select bytes/1024/1024 from dba_segments where segment_name='SYS_LOB0000130003C00019$$';
    BYTES/1024/1024
    14772
    SQL> select TABLE_NAME,COLUMN_NAME from dba_lobs where SEGMENT_NAME='SYS_LOB0000130003C00019$$';
    TABLE_NAME
    COLUMN_NAME
    TBL
    C1
    SQL> select sum(DBMS_LOB.GETLENGTH(C1))/1024/1024 from TBL;
    SUM(DBMS_LOB.GETLENGTH(C1)
    30.0376911
    Why is there a discrepancy between the two sizes (14GB and 30MB)?
    Here are the storage characteristics from the TBL ddl for the C1 LOB column:
    TABLESPACE TBLSPC ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
    NOCACHE LOGGING
    STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))

    user10508599 wrote:
    Why is there a discrepancy between the two sizes (14GB and 30MB)?
    Here are the storage characteristics from the TBL ddl for the C1 LOB column:
    TABLESPACE TBLSPC ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
    NOCACHE LOGGING
    STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
    According to your storage parameters it would only require 14772 rows that have LOB values stored out of row, i.e. larger than approx. 4K. For each LOB segment at least 1M will be allocated, so that might be a reasonable explanation.
    Not sure where I had my mind when writing this, but it is certainly wrong (and no one complained...).
    You could have a lot of deleted rows in the table, or LOBs that were larger but have since shrunk, or you could be hitting a storage allocation bug.
    What is the block size of the tablespace for the LOBs?
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Jun 16, 2009 7:20 AM
    Corrected plain wrong statement
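    If the unused space needs to be reclaimed, two common approaches are sketched below; they are assumptions about a likely fix, not the thread's confirmed resolution (SHRINK SPACE requires an ASSM tablespace, and MOVE rebuilds the segment, leaving indexes to be rebuilt):
    -- reclaim space released by deleted or shrunken LOBs in place:
    alter table tbl modify lob (c1) (shrink space);
    -- or rebuild the LOB segment, optionally changing tablespace or chunk:
    alter table tbl move lob (c1) store as (tablespace tblspc);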

  • LOB segment size is 2 times bigger than the real data

    Here's an interesting test:
    1. I created a tablespace called "smallblock" with a 2K blocksize.
    2. I created a table with a CLOB type field and specified the smallblock tablespace as storage for the LOB segment:
    SCOTT@andrkydb> create table t1 (i int, b clob) lob (b) store as
    t1_lob (chunk 2K disable storage in row tablespace smallblock);
    3. I insert data into the table, using a bit less than 2K of data for the CLOB column:
    SCOTT@andrkydb> begin
      for i in 1..1000 loop
        insert into t1 values (mod(i,5), rpad('*',2000,'*'));
      end loop;
    end;
    /
    4. Now I can see that I have an average of 2000 bytes for each lob item:
    SCOTT@andrkydb> select avg(dbms_lob.getlength(b)) from t1;
    AVG(DBMS_LOB.GETLENGTH(B))
    2000
    and that all together they take up:
    SCOTT@andrkydb> select sum(dbms_lob.getlength(b)) from t1;
    SUM(DBMS_LOB.GETLENGTH(B))
    2000000
    But when I take a look at how much space the LOB segment is actually taking, I get a result which is a total mystery to me:
    SCOTT@andrkydb> select bytes from dba_segments where segment_name = 'T1_LOB';
    BYTES
    5242880
    What am I missing? Why is the LOB segment ~2 times bigger than the data requires?
    I am on 10.2.0.3 EE, Solaris 5.10 sparc 64bit.
    Message was edited by:
    Andrei Kübar

    Thanks for the link; it is good to know such a thing is possible, although I don't really see how it can help me.
    But you know, you were right regarding the smaller data amounts. I have tested with 1800 bytes of data, and in this case it does fit just right.
    But this means that there are 248 bytes wasted (from my point of view as a developer) per block! If there is such an overhead, then I must be able to estimate it when designing the data structures, and I don't see a single word in the docs about such a thing.
    Moreover, if you use the NCLOB type, then only 990 bytes fit into a single 2K chunk. So the overhead might become really huge when you go up to gigabyte amounts...
    I have a LOB segment for an NCLOB field in a production database which is 5GB large and contains only 2.2GB of real data. There are no "deleted" rows in it; I know because I have rebuilt it. So this looks like a total waste of disk space... I must say, I'm quite disappointed with this.
    - Andrei
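    A back-of-envelope check of these numbers, assuming (as the 1800-byte test suggests) that only about 1800 of each 2048-byte chunk is usable for data: every 2000-byte value then needs two chunks, which already accounts for most of the observed 5MB:
    -- 1000 rows * ceil(2000/1800) chunks * 2048 bytes per chunk, in MB:
    select 1000 * ceil(2000 / 1800) * 2048 / 1024 / 1024 as est_mb from dual;
    -- ~3.9 MB of chunks; extent-level allocation rounds the segment up further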

  • Optimizing RAID: stripe/chunk size

    I'm trying to figure out how to optimize the RAID chunk/stripe size for our Oracle 8i server. For example, let's say that we have:
    - 4 drives in the RAID stripe set
    - 16 KB Oracle block size
    - MULTIBLOCK_READ_COUNT=16
    Now the big question is what the optimal setting is for the chunk/stripe size. As far as I can see, we would have two alternatives:
    - case 1: stripe size = 256 KB
    - case 2: stripe size = 64 KB
    In case 1, all i/o would be spread out over all 4 drives. In case 2, we'd be able to isolate a lot of i/o to separate drives, so that each drive serves different i/o calls. My guess is that case 1 would work better where there's a lot of random disk i/o.
    Does anyone have any thoughts or experience to share on this topic?
    Thanks,
    Alex Algard
    WhitePages.com

    It does not matter. Do not mix soft RAID and hard RAID. One OS I/O operation can read from one disk or from a number of disks. Do not forget about track-to-track seek time.
    Practice is the measure of truth :)
    For example, http://www.fcenter.ru/fc-articles/Technical/20000918/hi-end.gif

  • DB Size is reduced (LOB segment size) after Migration from Win To Linux

    Hi Friends,
    We have migrated an Oracle 11.2.0.1 database (64-bit) from Windows (64-bit) to Linux (64-bit) using RMAN convert database at the target.
    After the migration I could see that the size of the LOB segment is much smaller in the target, and so is the DB size.
    Is it because the conversion extracts only the used segments from the source datafiles, or am I losing some data?
    SQL> DECLARE
      db_ready BOOLEAN;
    BEGIN
      db_ready :=
           DBMS_TDB.CHECK_DB('Linux IA (64-bit)',DBMS_TDB.SKIP_READONLY);
    END;
    /
    PL/SQL procedure successfully completed.
    SQL> DECLARE
      external BOOLEAN;
    BEGIN
      /* value of external is ignored, but with SERVEROUTPUT set to ON
       * dbms_tdb.check_external displays report of external objects
       * on console */
      external := DBMS_TDB.CHECK_EXTERNAL;
    END;
    /
    PL/SQL procedure successfully completed.
    Regards,
    DB

    >Is it because the conversion extracts only the used segments from the source datafiles, or am I losing some data?
    I suspect that the source DB has many LOB rows deleted.
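    One hedged way to verify that no data was lost (table and column names are placeholders): compare the summed LOB data length on source and target. Segment sizes may legitimately differ after the conversion, but the data length must match:
    -- run on both source and target and compare the results:
    select sum(dbms_lob.getlength(lob_col)) as total_lob_bytes from owner.lob_table;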

  • Optimizing chunks size for filesending with NetStream.send() method

    Hi,
    I would like to implement a P2P file-sending application. Unfortunately object replication is not the best solution for this, so I need to implement it myself.
    The process is the following:
    Once the user has selected the file for sending, I load it into memory. After that is complete, I cut the file's data into chunks and send these chunks with the NetStream.send() method - to get the progress of the file sending.
    How can I optimize the size of the chunks for the fastest file sending, and what is the best size for a chunk?

    Hi there
    Please submit a Wish Form to ask for some future version of Captivate to offer such a feature! (Link is in my sig)
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks

  • Avoiding the LOB chunk updates as LOB columns are not propagated.

    Hi,
    We have a setup where we are using Streams along with the Messaging Gateway to propagate changes to WebSphere MQ. We are deleting the LOB columns at the capture process itself using the delete_column function, but in the case of LOB tables, insert statements propagate in two parts (1 insert and 1 update, the update being for the LOB column). We need to eliminate this second statement (the update), as we do not require it. Any help would be greatly appreciated.
    Regards,
    Ankit

    This amounts to a transformation on the capture. You have no choice but to abandon the built-in function and create a transform function, then attach this transform function to the capture using an action context. Action contexts are processes that fire automatically.
    I wrote a note on this; it is not an easy matter, but there is enough information on how to do it:
    http://sourceforge.net/apps/mediawiki/smenu/index.php?title=How_to_Transform_capture
    If nevertheless you find a way to do this at the capture site using a built-in function, please let us know.
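    For reference, a minimal sketch of the custom-transformation route; it assumes DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION (which attaches the function through an action context under the hood), and the rule and function names are hypothetical:
    begin
      dbms_streams_adm.set_rule_transform_function(
        rule_name          => 'strmadmin.lob_tab_capture_rule',  -- hypothetical capture rule
        transform_function => 'strmadmin.drop_lob_update');      -- user-written function returning the modified LCR
    end;
    /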

  • Performance impact in Oracle 8i - BLOB vs BFILE

    Hi Guys,
    We are evaluating interMedia to store multimedia objects.
    Does anyone know if storing and retrieving documents in the Oracle database has an impact on standard data stored in the database?
    Is it worth having a separate database instance for storing tables with interMedia objects?
    Pal

    Part 2:
    Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips comprising a total size of 250MB (average size 512K bytes). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes or 250 k bytes for the index and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
    create table video_items (
      video_id   number,
      video_clip ordsys.ordvideo
    )
    -- storage parameters for table in general
    tablespace video1 storage (initial 1M next 10M)
    -- special storage parameters for the video content
    lob (video_clip.source.localdata) store as (
      tablespace video2 storage (initial 260k next 270M)
      disable storage in row nocache nologging chunk 32768
    );
    Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56K bytes. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space efficient to choose a smaller chunk size, say 8K, to store the data in the lob. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
    Estimating retrieval costs
    Performance testing has shown that Oracle can achieve equivalent and even higher throughput performance for media content retrieval than a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. For the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQLNet performance. These tests were performed on Windows NT 4 SP5.
    Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million bytes/sec to 7 million bytes/sec as the number of clients ranged from 1 to 5) One reason for the very high relative CPU cost at the higher end of performance is that as the 100 Mbs network approaches saturation, the system used more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
    NOTE WELL: The extra CPU cost factors pertain only to media retrieval aspect of the workload. They do not apply to the entire system workload. See example.
    Example: A file based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.
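    A sketch of the chunk arithmetic behind Example 1's estimate (illustrative only): each 512K clip needs ceil(512K / 32K) = 16 chunks of 32K, so 500 clips occupy about 250MB of chunks before the roughly 6.5% overhead that brings the model's figure to 266MB:
    select 500 * ceil(524288 / 32768) * 32768 / 1024 / 1024 as est_mb from dual;  -- 250 MB of 32K chunks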

  • Replicat  : GGS ERROR    171  Unknown data type received 0x54 49 .

    I am trying to run an initial load as shown in the tutorial at http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/11g/GGS_Sect_Config_WinUX_ORA_to_WinUX_ORA.pdf and I am seeing this error in the REPLICAT process.
    I have tried using the SOURCEDEFS clause as well (even though the source and target structures are exactly the same), but I run into the same issue.
    ** Run Time Messages **
    *2011-04-14 12:02:15 GGS ERROR 171 Unknown data type received <0x54 49>.*
    The only other Indication that I see in the report file is the following Warning message
    *2011-04-14 12:02:15 GGS WARNING 201 Rounding up LOBWRITESIZE 32528 to be a multiple of LOB chunk size (16324).*
    LOBWRITESIZE = 32648 bytes.
    Here are the other details. If you need any more information, please let me know
    GoldenGate Delivery for Oracle
    Version v9.5.1.31 Build 003
    HP-UX 11.23 (optimized 64-bit), Oracle 10g on Jun 24 2008 13:43:23
    Copyright GoldenGate Software, Inc. 1995-2008
    Starting at 2011-04-14 12:02:09
    Database Version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    I'd appreciate any advice on fixing this issue.
    Thanks,
    Rajesh.

    Looks like the error is because we have ASM set up at our site, and I did not pay enough attention to this part of the tutorial guide:
    Note: When Oracle Automatic Storage Management (ASM) is in use, the TRANLOGOPTIONS ASMUSER and ASMPASSWORD must be set in the Extract parameter file. For more information refer to the GoldenGate for Windows & UNIX Administrator and Reference manuals.
    Posting the reply here so that It'd be useful for anyone running into the same issue later.
    Cheers!
    Rajesh.
