Multi Block Size

Oracle 10.2.0.4:
We are creating a tablespace with a 32K block size to hold a BLOB table. I need some suggestions on the following:
1. Currently our DB_CACHE_SIZE is set to 0. Do I need to set DB_nK_CACHE_SIZE to 0 as well? Would Oracle auto-tune this parameter if I set it to 0? What's preferable?
2. If I enable CACHE on the LOB column, would it affect other online users? I am assuming there will be a separate buffer pool for the 32K block size, so it probably will not affect online users.

1. Say your DB_BLOCK_SIZE (for the LOB Segment Tablespace) is 8KB. Say your CHUNKSIZE is 32KB.
Say you insert a LOB of 10KB. Oracle's write will be 32KB.
Say you insert a LOB of 100KB. Oracle's writes will be 4 32KB chunks.
2. For a normal table, multiple rows will fit into a block. The "free" space is initially reserved based on PCTFREE. However, actual usage will vary with the pattern of INSERTs and DELETEs. ASSM manages the candidacy of a block for new rows based on its free space.
3. Before you create a 32KB tablespace, you have to allocate a 32K CACHE (see the sketch after this list). Note that the DB_nK_CACHE_SIZE parameters are never auto-tuned, even when the DEFAULT cache is; setting one may require a restart if there is no free SGA memory to take it from!
4. If you use an SPFILE, the parameter can be "unset" with the command
"ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';"
TEST TEST TEST !!
You must test the impact of 32KB blocks on your LOBs and on everything else. Note that this means you'd be setting up a separate cache: whatever space the LOB was using in the DEFAULT cache is now "released" to other tables/indexes etc. However, if the new 32KB cache is much smaller than the space the LOB used to occupy in the DEFAULT cache, then LOB operations may be slower!
Also, obviously, your I/Os are now larger with a larger CHUNK size.
TEST TEST TEST !!
Test for the impact of unsetting db_file_multiblock_read_count and relying on system statistics.
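
To make the moving parts concrete, here is a minimal sketch of the whole sequence (the names, paths and sizes are illustrative, not from the thread):

-- allocate a 32K buffer cache; this may need SCOPE=SPFILE and a restart
-- if there is no spare SGA memory to take it from
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;

-- create the 32K-block tablespace for the LOB segment
CREATE TABLESPACE lob32k_ts
  DATAFILE '/u01/oradata/ORCL/lob32k_ts01.dbf' SIZE 4G
  BLOCKSIZE 32K
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- the LOB lives in the 32K tablespace and is cached in the separate
-- 32K pool, so CACHE on the LOB does not compete with the DEFAULT cache
CREATE TABLE blob_docs (
  id  NUMBER PRIMARY KEY,
  doc BLOB
)
LOB (doc) STORE AS (
  TABLESPACE lob32k_ts
  CHUNK 32768
  CACHE
);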

Similar Messages

  • Cluster multi-block requests were consuming significant database time

    Hi,
    DB : 10.2.0.4 RAC ASM
    OS : AIX 5.2 64-bit
We are facing serious performance issues and CPU idle time has dropped to 20%. Based on the AWR report, the top 5 events show that the problem is on the cluster side. I have placed the 1st node's AWR report here for your suggestions.
    WORKLOAD REPOSITORY report for
    DB Name DB Id Instance Inst Num Release RAC Host
    PROD 1251728398 PROD1 1 10.2.0.4.0 YES msprod1
    Snap Id Snap Time Sessions Curs/Sess
    Begin Snap: 26177 26-Jul-11 14:29:02 142 37.7
    End Snap: 26178 26-Jul-11 15:29:11 159 49.1
    Elapsed: 60.15 (mins)
    DB Time: 915.85 (mins)
    Cache Sizes
    ~~~~~~~~~~~ Begin End
    Buffer Cache: 23,504M 23,504M Std Block Size: 8K
    Shared Pool Size: 27,584M 27,584M Log Buffer: 14,248K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 28,126.82 2,675.18
    Logical reads: 526,807.26 50,105.44
    Block changes: 3,080.07 292.95
    Physical reads: 962.90 91.58
    Physical writes: 157.66 15.00
    User calls: 1,392.75 132.47
    Parses: 246.05 23.40
    Hard parses: 11.03 1.05
    Sorts: 42.07 4.00
    Logons: 0.68 0.07
    Executes: 930.74 88.52
    Transactions: 10.51
    % Blocks changed per Read: 0.58 Recursive Call %: 32.31
    Rollback per transaction %: 9.68 Rows per Sort: 4276.06
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.87 Redo NoWait %: 100.00
    Buffer Hit %: 99.84 In-memory Sort %: 99.99
    Library Hit %: 98.25 Soft Parse %: 95.52
    Execute to Parse %: 73.56 Latch Hit %: 99.51
    Parse CPU to Parse Elapsd %: 9.22 % Non-Parse CPU: 99.94
    Shared Pool Statistics Begin End
    Memory Usage %: 68.11 71.55
    % SQL with executions>1: 94.54 92.31
    % Memory for SQL w/exec>1: 98.79 98.74
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 18,798 34.2
    gc cr multi block request 46,184,663 18,075 0 32.9 Cluster
    gc buffer busy 2,468,308 6,897 3 12.6 Cluster
    gc current block 2-way 1,826,433 4,422 2 8.0 Cluster
    db file sequential read 142,632 366 3 0.7 User I/O
    RAC Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
    Begin End
    Number of Instances: 2 2
    Global Cache Load Profile
    ~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
    Global Cache blocks received: 14,112.50 1,342.26
    Global Cache blocks served: 619.72 58.94
    GCS/GES messages received: 2,099.38 199.68
    GCS/GES messages sent: 23,341.11 2,220.01
    DBWR Fusion writes: 3.43 0.33
    Estd Interconnect traffic (KB) 122,826.57
    Global Cache Efficiency Percentages (Target local+remote 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer access - local cache %: 97.16
    Buffer access - remote cache %: 2.68
    Buffer access - disk %: 0.16
    Global Cache and Enqueue Services - Workload Characteristics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg global enqueue get time (ms): 0.6
    Avg global cache cr block receive time (ms): 2.8
    Avg global cache current block receive time (ms): 3.0
    Avg global cache cr block build time (ms): 0.0
    Avg global cache cr block send time (ms): 0.0
    Global cache log flushes for cr blocks served %: 11.3
    Avg global cache cr block flush time (ms): 1.7
    Avg global cache current block pin time (ms): 0.0
    Avg global cache current block send time (ms): 0.0
    Global cache log flushes for current blocks served %: 0.0
    Avg global cache current block flush time (ms): 4.1
    Global Cache and Enqueue Services - Messaging Statistics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg message sent queue time (ms): 0.1
    Avg message sent queue time on ksxp (ms): 2.4
    Avg message received queue time (ms): 0.0
    Avg GCS message process time (ms): 0.0
    Avg GES message process time (ms): 0.0
    % of direct sent messages: 6.27
    % of indirect sent messages: 93.48
    % of flow controlled messages: 0.25
    Time Model Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
    -> Total time in database user-calls (DB Time): 54951s
    -> Statistics including the word "background" measure background process
    time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name Time (s) % of DB Time
    sql execute elapsed time 54,618.2 99.4
    DB CPU 18,798.1 34.2
    parse time elapsed 494.3 .9
    hard parse elapsed time 397.4 .7
    PL/SQL execution elapsed time 38.6 .1
    hard parse (sharing criteria) elapsed time 27.3 .0
    sequence load elapsed time 5.0 .0
    failed parse elapsed time 3.3 .0
    PL/SQL compilation elapsed time 2.1 .0
    inbound PL/SQL rpc elapsed time 1.2 .0
    repeated bind elapsed time 0.8 .0
    connection management call elapsed time 0.6 .0
    hard parse (bind mismatch) elapsed time 0.3 .0
    DB time 54,951.0 N/A
    background elapsed time 1,027.9 N/A
    background cpu time 518.1 N/A
    Wait Class DB/Inst: PROD/PROD1 Snaps: 26177-26178
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
    Avg
    %Time Total Wait wait Waits
    Wait Class Waits -outs Time (s) (ms) /txn
    Cluster 50,666,311 .0 30,236 1 1,335.4
    User I/O 419,542 .0 811 2 11.1
    Network 4,824,383 .0 242 0 127.2
    Other 797,753 88.5 208 0 21.0
    Concurrency 212,350 .1 121 1 5.6
    Commit 16,215 .0 53 3 0.4
    System I/O 60,831 .0 29 0 1.6
    Application 6,069 .0 6 1 0.2
    Configuration 763 97.0 0 0 0.0
The second node's top 5 events are as below:
    Top 5 Timed Events
              Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 25,959 42.2
    db file sequential read 2,288,168 5,587 2 9.1 User I/O
    gc current block 2-way 822,985 2,232 3 3.6 Cluster
    read by other session 345,338 1,166 3 1.9 User I/O
    gc cr multi block request 991,270 831 1 1.4 Cluster
Each node has 95 GB of RAM; the SGA is 51 GB and the PGA is 14 GB.
Any input from your side would be greatly appreciated.
    Thanks,
    Sunand

Hi Forstmann,
Thanks for your update.
I have also collected an ADDM report; an extract of the Node 1 report is below.
    FINDING 1: 40% impact (22193 seconds)
    Cluster multi-block requests were consuming significant database time.
    RECOMMENDATION 1: SQL Tuning, 6% benefit (3313 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "59qd3x0jg40h1". Look for an alternative plan that does not use
    object scans.
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Inter-instance messaging was consuming significant database
    time on this instance. (55% impact [30269 seconds])
    SYMPTOM: Wait class "Cluster" was consuming significant database
    time. (55% impact [30271 seconds])
    FINDING 3: 13% impact (7008 seconds)
    Read and write contention on database blocks was consuming significant
    database time.
    NO RECOMMENDATIONS AVAILABLE
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Inter-instance messaging was consuming significant database
    time on this instance. (55% impact [30269 seconds])
    SYMPTOM: Wait class "Cluster" was consuming significant database
    time. (55% impact [30271 seconds])
Any further help from your side, please?
    Thanks,
    Sunand
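
    For reference, the ADDM action above (running SQL Tuning Advisor on SQL_ID "59qd3x0jg40h1") can be scripted with DBMS_SQLTUNE; a minimal sketch, with the task name invented here for illustration:
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- create and run a tuning task for the SQL_ID named by ADDM
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id     => '59qd3x0jg40h1',
                  time_limit => 600,
                  task_name  => 'tune_59qd3x0jg40h1');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /
    -- display the advisor's findings and any recommended plan changes
    SET LONG 1000000 LINESIZE 200 PAGESIZE 0
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_59qd3x0jg40h1') FROM dual;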

  • Transaction execution time and block size

    Hi,
I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
2K Oracle block database had 3 datafiles, each 7GB.
4K Oracle block database had 2 datafiles, each 10GB.
8K Oracle block database had 1 datafile of 20GB.
The best transaction execution time was on the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction time.
I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and it was executed slowly (the number of executions was low compared to the 8K numbers).
Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning the DBA trade, I would like to ask: how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
THX to all.

It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
"My system has ONE hard drive."
A single drive does make it a little too easy for apparently random variation in performance to appear.
"Each time I started a new test on a different block size, I created a new database from scratch."
Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
"Each time the SGA and PGA parameters were identical."
Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
"2K Oracle block database had 3 datafiles, each 7GB. 4K Oracle block database had 2 datafiles, each 10GB. 8K Oracle block database had 1 datafile of 20GB."
If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
"The best transaction execution time was on the 8K block; the 2K Oracle block definitely had the worst transaction time."
We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
"I identified a SQL query that was creating hot segments on the E_TRANSACTION table, and it was executed slowly."
Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table - if not, then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
"Is it possible that multiple datafiles are the reason for these slow transaction times? How could I identify this multi-datafile problem by looking at the AWR statistics?"
On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
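
    The tracing step described above might look like this (a sketch only; the username, SID and serial# are placeholders you would read from v$session, and the trace file name depends on your instance):
    -- find the session to trace
    SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';
    -- enable extended SQL trace with wait events for that session
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
    -- ... run the slow transaction, then switch tracing off
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
    -- format the resulting trace file from user_dump_dest
    tkprof orcl_ora_12345.trc tkprof_out.txt sort=exeela waits=yes sys=no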

  • Specifying segments and block size manually

    Hi, just a quick question.
    Could anyone help me understand why someone might manually add segments to a tablespace (or is it a datafile they would be added to)? Does autoextend not take care of this?
    And secondly: why would you increase or decrease the block size of a segment? Is this because you may have small or large rows within a table and want a block size to accompany this?
    Any help would be appreciated.

    Hi,
    In Oracle, free space can be managed automatically or manually. You specify automatic segment space management when you create a locally managed tablespace.
    With automatic management, the in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment space management offers the following benefits:
    - Ease of use
    - Better space utilization, especially for objects with rows of highly varying size
    - Better run-time adjustment to variations in concurrent access
    - Better multi-instance behavior in terms of performance/space utilization
    For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, let you control the use of free space for inserts and updates to rows in all the data blocks of a particular segment. You specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify PCTFREE when creating or altering an index (which has its own index segment). The sketch below shows both variants.
    see this link
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :)
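
    A short sketch showing both variants side by side (tablespace and table names are invented for illustration):
    -- ASSM: in-segment free space tracked by bitmaps; PCTUSED is ignored
    CREATE TABLESPACE data_auto
      DATAFILE '/u01/oradata/ORCL/data_auto01.dbf' SIZE 100M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;
    -- manual: free lists, controlled per segment by PCTFREE/PCTUSED
    CREATE TABLESPACE data_manual
      DATAFILE '/u01/oradata/ORCL/data_manual01.dbf' SIZE 100M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT MANUAL;
    -- reserve 20% of each block for row growth; return a block to the
    -- free list once usage drops below 40%
    CREATE TABLE emp_history (
      emp_id    NUMBER,
      change_dt DATE
    ) PCTFREE 20 PCTUSED 40 TABLESPACE data_manual;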

  • [Question] gc cr multi block request when querying through a WAS

    OS version and word size: AIX 5.3, 64-bit
    DB version and word size: 10.2.0.3, 64-bit
    This is a RAC environment; node1 and node2 are configured identically.
    The instance we mainly use for business work is node1.
    We run a query that full-scans a partitioned table.
    When the query is issued from the WAS, the gc cr multi block request event dominates for a very long time.
    When we connect with PL/SQL and run the same query 2-tier, it completes immediately without that event.
    Queried 2-tier it takes 15 seconds;
    queried from the WAS it takes about 4-5 minutes.
    I cannot figure out why this happens; any hint that could serve as guidance would be appreciated.
    Thank you.

    "액셈이 만들어가는 오라클 백과사전"에서 내용이 잘나와있습니다.
    이 내용부터 확인하세요.
    http://wiki.ex-em.com/index.php/Gc_cr_multi_block_request

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    Getting the below error while creating the control file after a database restore. The permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    Checked the target system's init<SID>.ora and found the parameter db_block_size is 8192. Also checked the source system's init<SID>.ora and found db_block_size is also 8192.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPFILE corruption:
    Start the DB in NOMOUNT using the pfile (i.e. init<sid>.ora), run CREATE SPFILE FROM PFILE, and restart the instance in NOMOUNT state (see the sketch below).
    Then create the control file from the script.
    2. Check the ulimit of the target server; the filesize parameter for ulimit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
    Regards
    Kausik
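
    Step 1 in SQL*Plus might look like this (a sketch; the pfile path follows the $ORACLE_HOME/dbs layout shown above):
    SQL> STARTUP NOMOUNT PFILE='/oracle/SID/102_64/dbs/initSID.ora';
    SQL> CREATE SPFILE FROM PFILE='/oracle/SID/102_64/dbs/initSID.ora';
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP NOMOUNT
    SQL> @CONTROL.SQL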

  • Mac Pro RAID block size recommendations for working with audio in Logic Pro

    I have recently ordered a Mac Pro and plan to do a RAID configuration across 3 HDDs.
    The RAID type I am going to use is RAID 0 (striped).
    The computer is going to be used primarily for audio post-production, working with 20+ 24-bit audio files at any one time within a Logic project.
    I want to know the best block size to use when configuring the RAID.
    I understand that a larger block size is best for working with large files, but do I need that in my case, or will the default 32k block size be enough?
    Thanks in advance

    Use 64k. Things like databases like having 32k blocks because of all the small files. Audio files are pretty small, even at 24-bit/192kHz. Go to 128k if all you are doing is streaming and no samples. But 20+ 24-bit files is really not much anyway, considering most modern HDDs can stream 100MB/s off one spindle. You'll probably be fine regardless of the block size you choose, but most audio pros choose 64k.

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
    On the primary we have made the changes, i.e. we added new logfiles with a bigger size and 3 members. When trying to do the same on the standby we are getting this error.
    Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled the managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First: why are you dropping and recreating online redo log files on the standby? On a standby, only standby redo log files will be used; not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space:
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
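    Since the poster is on a standby, what they more likely need are standby redo log files; a hedged sketch against the diskgroup from their error message (thread, group number and size are illustrative - the size should match the online logs):
    sys@ORCL> alter database add standby logfile thread 1
      2     group 10 ('+DT_DG1') size 1024m;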
    Close the threads if answered, Keep the forum clean.

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a Data Warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), others state that it is crucial and should be the biggest possible (64KB). The other question is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up, and that's why the two should be split. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    "Unfortunately I have very little experience with that."
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    "The question is: what is the general advice regarding tablespaces and block size?"
    If you need to ask about block sizes, use the default (i.e. 8KB).
    "I did some research and it is hard to find a clear answer."
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    "What part of the data should be placed where? ... Is it a good idea to have 'old' (read-only) data partitions in separate tablespaces?"
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition data so that a large fraction of it can eventually be made read-only. Using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
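
    The aging pattern described above often ends with statements like these (a sketch; the object names are invented for illustration):
    -- pack last year's partition into its time-specific tablespace
    ALTER TABLE sales MOVE PARTITION p_2010_12 TABLESPACE ts_2010 COMPRESS;
    -- moving a partition invalidates its local index partitions
    ALTER TABLE sales MODIFY PARTITION p_2010_12 REBUILD UNUSABLE LOCAL INDEXES;
    -- once packed, the read-only tablespace need only be backed up once more
    ALTER TABLESPACE ts_2010 READ ONLY;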

  • Raid storage usage and block size

    We have two XServe RAID units (RAID 5) and we are adding a new 16-bay ACNC RAID with 16 1.5TB drives in RAID 6 + hot spare. I initialized the RAID 6 with a 128K block size. The total data moving from the older RAID volumes is around 5.7TB, but on the new RAID it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server, so most of the files are quite large, but there may be lots of small files as well.

    Hi
    RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, a RAID 1 mirror might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
    Purchase another two 1TB drives and you could consider RAID 10 - two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll also be using this workstation for the usual mundane matters such as email etc. Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32k.
    My 2p
    Tony

  • Can't change default block size in dbca

    10.1.0.3
    solaris
    I am using dbca to create a database. When I go to the sizing screen and try to change the default block size, the option is always greyed out at 8K.
    Does anyone know why? This happens even when I pick a data warehouse template.

    There is a reason Oracle uses 8K as the default database block size for their warehouse template. Changing the default block size to a larger size generally does not result in better performance when both databases are allocated the exact same SGA memory allocations.
    HTH -- Mark D Powell --

  • Change block size for several log-files simultaneously?

    Hi,
    I'm using SignalExpress to record and analyze data.
    Sometimes I want to analyze the recorded data both over a short period of time and over a longer one.
    (Imagine creating an average of every second first, and then an average of every 10 seconds.)
    Then I need to change all the log files, and also the specific parts of each log file. See the attachment.
    I sometimes have up to 1000 log files containing signals from 4 different modules; that makes 4000 adjustments to change from block size 10000 to block size 1000.
    Is there any way to adjust the block size of all the log files at once?
    Many thanks!
    Anders Hansson
    Engineer
    Attachments:
    NI.JPG (95 KB)

    Hi,
    Isn't anyone else interested in a solution for this operation?
    I reported this to the NI feedback service and they advised me to post the request here to get a quicker reply.
    So...
    Best regards
    Ingenjör Hansson

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos. I have two new Iomega 2TB drives, which look essentially identical to the drives I'm currently using as my primary backups in a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default Mac format: extended, journaled, unencrypted, not case-sensitive). And after a minute or three, I see:
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and the options settings are default.
    I see several series of posts with complaints about encrypting RAIDs and disk block sizes, but not about unencrypted errors. I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error: "POSIX reports: the operation couldn't be completed. Operation not permitted." I wasn't sure whether the 2TB RAID I already have was set up with the older or the newer computer - it was definitely before I put Lion on this one - so I tried this one and now have a different error.
    Any idea what the problem might be?

    Update: I spent some time on the phone with an Apple support RAID expert, and we couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or by a couple of other maneuvers that I've already forgotten. He noted that his own searches showed a lot of mentions of similar problems, but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives. Now I'm trying to decide whether it's worth throwing more good money after bad on a call with Iomega support, and waiting to see if the Iomega forum is at all helpful.

  • RAID block size for final cut pro x

    Just got one of the new late-2012 27" iMacs and a 6 TB LaCie Thunderbolt drive, so I can finally edit the video I took last spring. I'll be using Final Cut Pro X, doing a lot of multicam work with 4 or 5 views and a separate audio track. The LaCie came formatted as a mirrored RAID. I'm going to change that to RAID 0 (striped), but am wondering what block size to set. The default is 32k, but I have read that this ought to be increased to the max (256k) for video editing. I have also read it should NOT be increased. And the posts I have read have all been at least 3 years old. So let me ask you all: what block size would you recommend for my situation?
    Thanks in advance!

    Hi Eddie...
    This depends on what kind of source footage you are editing.
    For compressed video, and compressed or uncompressed audio: 128k.
    I have only had BAD results with 256k; 64k is also weird, whereas 32k is fine.
    All my RAIDs use 128k for audio/video editing.
    You can go further if you are editing image sequences, but according to my own findings (I have been dealing with RAID for years), 128k does the job best.
    Rule of thumb: the smaller the files you are putting on the RAID, the smaller the block size, and vice versa.
    I.e., you would cripple the RAID's performance if you stored a database on it with a 256k block size. For servers and OS use, 32k would be a good choice, perhaps even 16k if supported.

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 machine, which was running Oracle XE, crashed.
    I installed Oracle XE on Windows XP on another machine.
    I copied the D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database on WinXP using SQL*Plus I get the following message:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors:
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows NT-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
    So what can you do? You should try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards
