Performance of big tablespaces

Hi all.
Does anybody know which way is better?
I need to create a big table (about 400 GB), so I need a big tablespace. How should I tune this tablespace, and which block size is better? And might it be better to build the tablespace from a set of smaller files?
By the way, if I create the indexes in a different tablespace, is that really bad for performance?
Anton.
Edited by: user9050456 on Feb 19, 2010 1:20 AM
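Not from the original post, but as a rough illustration of the options Anton is asking about: a minimal sketch, assuming Oracle 10g or later and purely hypothetical file names and sizes, of a locally managed tablespace built from several moderately sized datafiles. The default database block size (commonly 8 KB) is usually a reasonable starting point unless testing shows otherwise.

CREATE TABLESPACE big_data
  DATAFILE '/u01/oradata/ORCL/big_data01.dbf' SIZE 30G,
           '/u02/oradata/ORCL/big_data02.dbf' SIZE 30G,
           '/u03/oradata/ORCL/big_data03.dbf' SIZE 30G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- grow it later by adding further files as the 400 GB table is loaded
ALTER TABLESPACE big_data
  ADD DATAFILE '/u04/oradata/ORCL/big_data04.dbf' SIZE 30G;

An alternative is a single bigfile tablespace; that option comes up in the bigfile discussions further down this page.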

Hi Billy,
Sorry, I was busy with some issues. You are right, putting them on different physical storage will definitely make I/O access faster. But I also feel it is better if you want to manage things efficiently and get good performance.
Below are the tablespace details of our stage database. We are not having any performance issues.
SQL> select tablespace_name||' '||file_name from dba_data_files;
TABLESPACE_NAME||''||FILE_NAME
DAQINDEX /export/u03/oradata/AUCS/daqindex_08.dbf
DAQDATA /export/u03/oradata/AUCS/daqdata_09.dbf
SYSAUX /export/u03/oradata/AUCS/sysaux_02.dbf
ONLNINDEX /export/u03/oradata/AUCS/onlnindex_01.dbf
ONLNDATA /export/u03/oradata/AUCS/onlndata_01.dbf
DAQINDEX /export/u03/oradata/AUCS/daqindex_07.dbf
SYSAUX /export/u03/oradata/AUCS/sysaux_01.dbf
BLKORDERINDEX /export/u03/oradata/AUCS/blkorderindex01.dbf
BLKORDERDATA /export/u03/oradata/AUCS/blkorderdata01.dbf
UNDOTBS6 /export/u03/oradata/AUCS/undotbs06.dbf
UNDOTBS5 /export/u03/oradata/AUCS/undotbs05.dbf
DSERVINDEX /export/u03/oradata/AUCS/dservindex_01.dbf
DSERVDATA /export/u03/oradata/AUCS/dservdata_01.dbf
TRANSPORTINDEX /export/u03/oradata/AUCS/transportindex.dbf
TRANSPORTDATA /export/u03/oradata/AUCS/transportdata.dbf
DAQINDEX /export/u03/oradata/AUCS/daqindex_06.dbf
DAQDATA /export/u03/oradata/AUCS/daqdata_08.dbf
RTBINDEX /export/u03/oradata/AUCS/rtbindex_01.dbf
RTBDATA /export/u03/oradata/AUCS/rtbdata_01.dbf
DAQINDEX /export/u03/oradata/AUCS/daqindex_05.dbf
DAQDATA /export/u03/oradata/AUCS/daqdata_07.dbf
DAQINDEX /export/u03/oradata/AUCS/daqindex_04.dbf
IMAGEDATA /export/u03/oradata/AUCS/imagedata_01.dbf
DAQDATA /export/u03/oradata/AUCS/daqdata_06.dbf
DAQDATA /export/u03/oradata/AUCS/daqdata_05.dbf
UNDOTBS4 /export/u03/oradata/AUCS/undotbs04.dbf
We are using DAQDATA for the tables and DAQINDEX for the indexes.
Thanks,
Rafi.
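For readers following along, a minimal sketch (hypothetical table, but using Rafi's tablespace names) of how a table segment ends up in DAQDATA while its index goes to DAQINDEX:

CREATE TABLE daq_orders (
  order_id    NUMBER,
  order_date  DATE,
  status      VARCHAR2(20)
) TABLESPACE daqdata;

CREATE INDEX daq_orders_ix
  ON daq_orders (order_id)
  TABLESPACE daqindex;

Whether this separation actually helps performance is debated in the messages below; it is mainly an administrative convenience.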

Similar Messages

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLITE support. I have a question about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into
    multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so a lookup (select query) on a small table or a big table won't make a difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Oracle XE 10.2.0.1.0 – Performance with BIG full-text indexes

    I would like to use Oracle XE 10.2.0.1.0 only for full-text searching of files residing outside the database on an FTP server.
    Recently I found out that the size of the files to be indexed is 5 GB.
    As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC even less).
    Let's say that the CONTEXT index size over these files will be 1.5-2 GB.
    The number of concurrent users will be 5 at most.
    Does anybody have any experience with Oracle XE performance with a CONTEXT index this BIG?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.
    Edited by: user10543032 on May 18, 2009 11:36 AM
    Edited by: user10543032 on May 18, 2009 12:10 PM
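    Not part of the original post, but a minimal sketch of how such a CONTEXT index might be declared, assuming the FTP documents are reachable from the database host and using hypothetical table and preference names:

    BEGIN
      ctx_ddl.create_preference('doc_store', 'FILE_DATASTORE');
      ctx_ddl.set_attribute('doc_store', 'PATH', '/mnt/ftp/docs');  -- hypothetical mount point
    END;
    /

    CREATE TABLE docs (
      id        NUMBER PRIMARY KEY,
      file_name VARCHAR2(400)
    );

    CREATE INDEX docs_ctx_ix ON docs (file_name)
      INDEXTYPE IS CTXSYS.CONTEXT
      PARAMETERS ('datastore doc_store filter ctxsys.auto_filter');

    -- example query against the index
    SELECT id FROM docs WHERE CONTAINS(file_name, 'oracle') > 0;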

    I have used 100% the same configuration as above, but now with Oracle Database 11g R1 11.1.0.7.0 – Production instead of Oracle 10g XE.
    The result is that AUTO_FILTER in Oracle 11g is able to parse Czech characters from the sample PDF file without any problems.
    I guess the problem with Oracle Text 10g R2 may be:
    1. in the embedded fonts, as mentioned in the [documentation | http://download-west.oracle.com/docs/cd/B12037_01/text.101/b10730/afilsupt.htm] (I tried to embed all fonts and the whole character set, but it did not help);
    2. in the character encoding of the text within the PDF documents.
    I would like to add that other third-party PDF-to-text converters have similar issues with Czech characters in PDF documents – after text extraction the Czech national characters were displayed incorrectly.
    If you have any other remarks, ideas or conclusions please reply :-)

  • Oracle 10g  – Performance with BIG CONTEXT indexes

    I would like to use Oracle XE 10.2.0.1.0 only for full-text searching of files residing outside the database on an FTP server.
    Recently I found out that the size of the files to be indexed is 5 GB.
    As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC even less).
    Let's say that the CONTEXT index size over these files will be 1.5-2 GB.
    The number of concurrent users will be 5 at most.
    I cannot easily test it myself yet.
    Does anybody have any experience with Oracle XE or another Oracle Database edition and a CONTEXT index this BIG?
    Will the Oracle XE hardware resource license limitations be sufficient to handle one CONTEXT index this big?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.

    That depends on at least three things:
    (1) what is the range of words that will appear in the document set (wide range of words = smaller result sets = better performance)
    (2) how precise are the users' queries likely to be (more precise = smaller result sets = better performance)
    (3) how many milliseconds are your users willing to wait for results
    So, unfortunately, you'll probably have to experiment a bit before you'll know...

  • Pathological ParallelGC performance w/ big long-lived object (512MB array)

    Hoping to improve performance, we recently added a bloom filter -- backed by a single long-lived 512MB array -- to our application. Unfortunately, it's killed performance -- because the app now spends ~16 of every ~19 seconds in garbage collection, from the moment the big array is allocated.
    My first theory was that the array was stuck in one of the young generations, never capable of being promoted, and thus being endlessly copied back and forth on every minor young collection. However, some tests indicate the big array winds up in "PS Old" right away... which would seem to be a safe, non-costly place for it to grow old. So I'm perplexed by the GC performance hit.
    Here's the tail of a log from a long-running process -- with UseParallelGC on a dual-opteron machine running 32bit OS/VM -- showing the problem:
    % tail gc.log
    697410.794: [GC [PSYoungGen: 192290K->2372K(195328K)] 1719973K->1535565K(1833728K), 16.4679630 secs]
    697432.415: [GC [PSYoungGen: 188356K->1894K(194752K)] 1721549K->1536592K(1833152K), 16.4797510 secs]
    697451.419: [GC [PSYoungGen: 188262K->4723K(195200K)] 1722960K->1540085K(1833600K), 16.4797410 secs]
    697470.817: [GC [PSYoungGen: 191091K->1825K(195520K)] 1726453K->1541275K(1833920K), 16.4763350 secs]
    697490.087: [GC [PSYoungGen: 189025K->8570K(195776K)] 1728475K->1550136K(1834176K), 16.4764320 secs]
    697509.644: [GC [PSYoungGen: 195770K->5651K(192576K)] 1737336K->1555061K(1830976K), 16.4785310 secs]
    697530.749: [GC [PSYoungGen: 189203K->1971K(194176K)] 1738613K->1556430K(1832576K), 16.4642690 secs]
    697551.998: [GC [PSYoungGen: 185523K->1716K(193536K)] 1739982K->1556999K(1831936K), 16.4680660 secs]
    697572.424: [GC [PSYoungGen: 185524K->4196K(193984K)] 1740807K->1560197K(1832384K), 16.4727490 secs]
    I get similar results from the moment of launch on another machine, and 'jmap -heap' (which isn't working on the long-lived process) indicates the 512MB object is in 'PS Old' right away (this is from a quick launch of a similar app):
    jdk1.5.0_04-32bit/bin/jmap -heap 10586Attaching to process ID 10586, please wait...
    Debugger attached successfully.
    Server compiler detected.
    JVM version is 1.5.0_04-b05
    using thread-local object allocation.
    Parallel GC with 2 thread(s)
    Heap Configuration:
    MinHeapFreeRatio = 40
    MaxHeapFreeRatio = 70
    MaxHeapSize = 1887436800 (1800.0MB)
    NewSize = 655360 (0.625MB)
    MaxNewSize = 4294901760 (4095.9375MB)
    OldSize = 1441792 (1.375MB)
    NewRatio = 8
    SurvivorRatio = 8
    PermSize = 16777216 (16.0MB)
    MaxPermSize = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
    capacity = 157286400 (150.0MB)
    used = 157286400 (150.0MB)
    free = 0 (0.0MB)
    100.0% used
    From Space:
    capacity = 26214400 (25.0MB)
    used = 26209080 (24.99492645263672MB)
    free = 5320 (0.00507354736328125MB)
    99.97970581054688% used
    To Space:
    capacity = 26214400 (25.0MB)
    used = 1556480 (1.484375MB)
    free = 24657920 (23.515625MB)
    5.9375% used
    PS Old Generation
    capacity = 1677721600 (1600.0MB)
    used = 583893848 (556.8445663452148MB)
    free = 1093827752 (1043.1554336547852MB)
    34.80278539657593% used
    PS Perm Generation
    capacity = 16777216 (16.0MB)
    used = 10513680 (10.026626586914062MB)
    free = 6263536 (5.9733734130859375MB)
    62.66641616821289% used
    The 'PS Old' generation also looks way oversized here -- 1.6G out of 1.8G! -- and the young/tenured starved, although no non-default constraints have been set on generation sizes, and we had hoped the ballyhooed 'ergonomics' would've adjusted generation sizes sensibly over time.
    '-XX:+UseSerialGC' doesn't have the problem, and 'jmap -heap' suggests the big array is in the tenured generation there.
    Any ideas why UseParallelGC is behaving pathologically here? Is this, as I suspect, a bug? Any suggestions for getting it to work better through VM options? (Cap the perm size?)
    Any way to kick a ParallelGC VM while it's running to resize its generations more sensibly?
    I may also tweak the bloom filter to use a number of smaller -- and thus more GC-relocatable -- arrays... but I'd expect that to have a slight performance hit from the extra level of indirection/indexing, and it seems I shouldn't have to do this if the VM lets me allocate a giant object in the first place.
    Thanks for any tips/insights.
    - Gordon @ IA

    Yes, in my app, the large array is updated constantly.
    However, in the test case below, I'm getting similar behavior without any accesses to the big array at all.
    (I'll file this test case via the bug-reporting interface as well.)
    Minimal test case which seems to prompt the same behavior:
    /**
     * Demonstrate problematic ParallelGC behavior with a "big" (512MB)
     * object. (bug-id#6298694)
     *
     * @author gojomo@archive.org
     */
    public class BigLumpGCBug {
        int[] bigBitfield;

        public static void main(String[] args) {
            (new BigLumpGCBug()).instanceMain(args);
        }

        private void instanceMain(String[] args) {
            bigBitfield = new int[Integer.MAX_VALUE >>> 4]; // 512MB worth of ints
            while (true) {
                byte[] filler = new byte[1024 * 1024]; // 1MB of short-lived garbage per iteration
            }
        }
    }
    Run with java-options "-Xmx700m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps", the GC log is reasonable:
    0.000: [GC 0.001: [DefNew: 173K->63K(576K), 0.0036490 secs]0.005: [Tenured: 39K->103K(1408K), 0.0287510 secs] 173K->103K(1984K), 0.0331310 secs]
    2.532: [GC 2.532: [DefNew: 0K->0K(576K), 0.0041910 secs]2.536: [Tenured: 524391K->524391K(525700K), 0.0333090 secs] 524391K->524391K(526276K), 0.0401890 secs]
    5.684: [GC 5.684: [DefNew: 43890K->0K(49600K), 0.0041230 secs] 568281K->524391K(711296K), 0.0042690 secs]
    5.822: [GC 5.822: [DefNew: 43458K->0K(49600K), 0.0036770 secs] 567849K->524391K(711296K), 0.0038330 secs]
    5.956: [GC 5.957: [DefNew: 43304K->0K(49600K), 0.0039410 secs] 567695K->524391K(711296K), 0.0137480 secs]
    6.112: [GC 6.113: [DefNew: 43202K->0K(49600K), 0.0034930 secs] 567594K->524391K(711296K), 0.0041640 secs]
    Run with the ParallelGC, "-Xmx700m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseParallelGC", the long GCs dominate immediately:
    0.000: [GC [PSYoungGen: 2272K->120K(3584K)] 526560K->524408K(529344K), 60.8538370 secs]
    60.854: [Full GC [PSYoungGen: 120K->0K(3584K)] [PSOldGen: 524288K->524389K(656960K)] 524408K->524389K(660544K) [PSPermGen: 1388K->1388K(8192K)], 0.0279560 secs]
    60.891: [GC [PSYoungGen: 2081K->0K(6656K)] 526470K->524389K(663616K), 57.3028060 secs]
    118.215: [GC [PSYoungGen: 5163K->0K(6656K)] 529553K->524389K(663616K), 59.5562960 secs]
    177.787: [GC
    Thanks,
    - Gordon @ IA

  • Performance for multiple tablespaces

    Hi,
    I have 8 Oracle instances, each with a single user.
    My customer asked me to create a single instance with 8 tablespaces and 8 different users.
    The number of these instances could also increase further.
    I am inclined to avoid creating all these tablespaces in a single instance, because I think it could lead to a reduction in performance.
    In your opinion, what problems could arise after doing an operation of this kind?
    What are the advantages and disadvantages of putting multiple schemas in one database instead of in separate databases? (SGA, I/O, memory...)
    Thanks
    Raf

    Look at it this way: with your available memory split between 8 instances, each of these SGAs is of no use to the other 7 instances when that user is idle. If you used all the available memory for one instance, then all the memory would be available when required.
    Provided you create the tablespaces for the single instance the way the individual instances are laid out, I/O will not change (an I/O wait is an I/O wait no matter who generates it). In fact, you will have more storage space available, as you will have only one each of the SYSTEM, rollback and temporary tablespaces instead of 8.
    I hope this helps.
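    As a rough sketch of the consolidation being discussed (hypothetical names, not from the original posts): each application gets its own schema and default tablespace inside the single instance, all sharing one SGA and one set of SYSTEM, undo and temporary tablespaces.

    CREATE TABLESPACE app1_data
      DATAFILE '/u01/oradata/ORCL/app1_data01.dbf' SIZE 10G;

    CREATE USER app1 IDENTIFIED BY "change_me"
      DEFAULT TABLESPACE app1_data
      TEMPORARY TABLESPACE temp
      QUOTA UNLIMITED ON app1_data;

    GRANT CREATE SESSION, CREATE TABLE TO app1;

    -- repeat for app2 ... app8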

  • Photoshop CC slow in performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my OS broke down some weeks ago (RAM failure), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6 resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. The file size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you may expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks (gradient maps, selective color or levels) on and off takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same with panning around in the picture, zooming in and out or moving layers.
    It's nearly impossible to work on these files in time.
    I've never seen this on CS6.
    Now I wonder if there's something wrong with PS or the OS. But: I've never worked with files this big before.
    In March I worked on some 5 GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    I7 3930k (3,8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all systemupdates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of RAM is allocated to PS, the cache level is set to 5 or 6, history states (protocol objects) are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • Performance of Big Report Programs

    Hi SAP Experts,
    I want to know whether we can put a trace on report programs that take almost 10 hours to execute, and how we can put a trace on background jobs.
    Thanks and Regards,
    Harsh Goel

    Hi Harsh,
    The problem with the report will mostly be due to the SELECT queries.
    So do one thing: execute your program in debug mode, keep a breakpoint before the SELECT query, activate the trace in transaction ST05, and after executing the SELECT query deactivate the trace.
    Now look in the trace display at the time the query takes.
    You can follow the same method for each query, check which query takes a long time, and rectify that.
    To reduce ABAP/4 program runtime, always follow the ABAP/4 program optimization techniques given below:
    1. Avoid 'SELECT *'. Instead, use SELECT with field names, i.e. SELECT f1 f2 f3 and so on.
    2. Use the table's (primary) keys as far as possible in the WHERE clause of the SELECT statements. Otherwise, check for secondary indexes.
    3. Avoid nested SELECTs or nested loops.
    4. Use binary search wherever possible.
    5. Avoid the use of joins in the SELECT queries.
    6. You can evaluate the performance using the GET RUN TIME command for small pieces of a program. Try using this statement around SELECT queries to find out how much time your SELECT query takes to execute.
    Example
    DATA: gv_runtime1       TYPE i,
          gv_runtime2       TYPE i,
          gv_final          TYPE i.
    GET RUN TIME FIELD gv_runtime1.
    SELECT ...
    GET RUN TIME FIELD gv_runtime2.
    gv_final = gv_runtime2 - gv_runtime1.
    WRITE: 'Execution time=', gv_final.
    If you still have any doubts please let me know.
    Regards,
    Shobana.K
    Edited by: Shobana k on Sep 18, 2008 8:27 AM

  • Database Performance (Tablespaces and Datafiles)

    Hi guys!
    What's best for database performance: a tablespace with multiple datafiles distributed across different filesystems, or a tablespace with multiple datafiles in only one filesystem?
    Thanks,
    Augusto

    It depends on the contents of the tablespaces, tablespace-level LOGGING/NOLOGGING, the environment (OLTP or OLAP), the LUN presentation to the server (with or without RAID), and the SAN reads and writes per second.
    In general, a tablespace with multiple datafiles distributed across different filesystems/LUNs is the common practice for databases other than dev/system test.
    Moreover, using ASM is better than standard filesystems.
    Regards,
    Kamalesh
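    A minimal sketch of the two layouts mentioned above (hypothetical paths and names): datafiles spread manually over filesystems, versus letting an ASM disk group handle the striping.

    -- datafiles spread across mount points by hand
    CREATE TABLESPACE sales_data
      DATAFILE '/u01/oradata/ORCL/sales01.dbf' SIZE 10G,
               '/u02/oradata/ORCL/sales02.dbf' SIZE 10G,
               '/u03/oradata/ORCL/sales03.dbf' SIZE 10G;

    -- with ASM the disk group stripes the data for you
    CREATE TABLESPACE sales_data_asm
      DATAFILE '+DATA' SIZE 30G;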

  • Big Content Access Performance Challenge

    Partitioning solution: GPT
    + Windows 8.1 is capable of processing 64-bit GPT partitions - even those created by the parted program under a Linux OS
    + big disks available with full capacity
    Formatting solution (for the time being):
    1) Windows 8.1: NTFS
       + also works under Linux with ntfs-3g installed
       - missing ext4 features such as journaling
    2) Linux: EXT4 with mkfs.ext4
       + ext4 features such as journaling
    Challenge: big data performance is zero with Windows 8.1 Enterprise Evaluation at the moment
    Problem 1: big partitions (>2TB): not available!
    Problem 2: big files (> 2TB): no access !
    Problem 3: big hard disks ( > 2TB): no access!
    Notes:
    a) a 64-bit Linux OS was able to create, access and operate with OK performance on big (>2TB) content/data/files on >4TB disks
    b) the "** not accessible **" message should be reconsidered, because the content IS accessible once 64-bit software is installed
    c) the "** the volume does not contain a recognized file system **" message should rather be something like
               "sorry, the system cannot recognize the file system, please install/get ... solution"
    d) a 32-bit system should NOT offer format as an option for 64-bit big data!
    e) NTFS format should not be offered for big (>2TB) data by 32-bit software!
    f) it's hard for the user to know whether problems are due to 32-bit programs when the 32-bit programs themselves don't
       recognize the fact that they are processing big data - this challenge is the same for other 32/64-bit hybrid OS systems
    Question: what is the add-on software and where to download to get access to EXT4 disks used by 64-bit OS ?

    Thanks for your advice!
    More detailed info:
    - file size example: 3.3 TB <=> is 64-bit
    - partition size example: 4 TB <=> is 64-bit
    - hard disk example: 4 TB HD used for enterprise big data apps <=> is 64-bit
    - partitioned by 64-bit parted software in enterprise Linux
    - formatted by 64-bit mkfs.ext4 in enterprise Linux
      where
      - ext4, the fourth extended filesystem, is a journaling file system for Linux
      - it can support volumes with sizes up to 1 exbibyte (EiB) and files with sizes up to 16 tebibytes (TiB)
    Goal: to explore Windows 8.1 Enterprise Evaluation performance with big data, using big data content created by another 64-bit Linux system
    Questions:
    1) do I need to install something to make Windows 8.1 Enterprise Evaluation handle big data properly?
    2) what software does Microsoft recommend for ext4?
    3) will the final Windows 8.1 Enterprise release embed ext4?
    4) which tools and apps does Microsoft recommend
       a) to read ext4-format big data and write it to a different hard disk created by Win 8.1 with GPT?
          - the goal is to use 64-bit Linux big data in Windows 8.1 Enterprise Evaluation
       b) to read from a hard disk created by Win 8.1 with GPT and write to an ext4-format disk?
          - the goal is to use Windows 8.1 Enterprise Evaluation big data in 64-bit Linux
    5) why did Microsoft diskpart give NTFS as the only formatting option for big volumes:
    > diskpart <-- in command mode
    DISKPART > LIST VOLUME
    SELECT VOLUME 17 <-- one of volumes
    FILESYSTEMS --> Current File System
    Type: RAW <-- in Linux system this is EXT4
    File Systems supported for formatting: NTFS <--- *** only NTFS where max file size < 2TB***

  • Performance of coalesce if tablespace is not fragmented

    Hi,
    It is recommended that the DBA performs an ALTER TABLESPACE ...
    COALESCE periodically to improve space availability in the
    database. If the tablespace is not fragmented much ,will the
    statement terminate 'significantly' faster than when the
    tablespace is highly fragmented?
    Thanks.
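    A small sketch (hypothetical tablespace name) of how one might look at the free-extent picture before deciding whether the coalesce is worth running; note this mainly matters for dictionary-managed tablespaces, since locally managed tablespaces do not need coalescing.

    -- many small adjacent free extents suggest coalescing may help
    SELECT tablespace_name,
           COUNT(*)    AS free_extents,
           MAX(blocks) AS largest_free_extent_blocks
      FROM dba_free_space
     WHERE tablespace_name = 'USERS'
     GROUP BY tablespace_name;

    ALTER TABLESPACE users COALESCE;

    If there are only a few adjacent free extents to merge, the statement has little work to do and should complete quickly.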

    There is NO performance gain from partitioning the drive into multiple partitions. That is true for both Mac OS X and Windows.
    Partitioning to separate your files, the same way you would use a multiple-drawer filing cabinet or multiple filing cabinets, is one thing, but it will not increase the performance of the drive in any way.
    It may actually decrease the overall performance of the system once one of those partitions starts to fill up with data.
    So it is not necessary and should only be done IF you know what you are doing for your particular needs.
    Personally I have 3 partitions on my MBP's internal disk. One for Lion, one for Mountain Lion, and the third is for downloads and my personal files. I never cared for both the Mac and Windows convention of placing everything in the USER folder on the system drive.
    Great on a multi-user computer, but I am the only one that uses MY computers.

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a Data Warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size?
    I did some research and it is hard to find a clear answer. There are resources advising that block size is not important and can be left small (8 KB); others state that it is crucial and should be the biggest possible (64 KB).
    The other thing is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to a decrease in performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently.
    How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    "We are preparing to implement a Data Warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that."
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    "The question is: what is the general advice regarding tablespaces and block size?"
    If you need to ask about block sizes, use the default (i.e. 8KB).
    "I did some research and it is hard to find a clear answer."
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet - look for material that is datestamped with recent dates (last couple of years), or references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant; give greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    "The other thing is what part of the data should be placed where ... Is it a good idea to have 'old' (read-only) data partitions in separate tablespaces?"
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance claim was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only: using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary - e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
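    A minimal sketch (hypothetical names) of the time-based pattern described above: range partitions placed in period tablespaces, with closed periods made read-only.

    CREATE TABLE fact_sales (
      sale_date DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_2010_01 VALUES LESS THAN (DATE '2010-02-01') TABLESPACE ts_2010_01,
      PARTITION p_2010_02 VALUES LESS THAN (DATE '2010-03-01') TABLESPACE ts_2010_02,
      PARTITION p_current VALUES LESS THAN (MAXVALUE)          TABLESPACE ts_work
    );

    -- once a period is closed, it no longer needs repeated backups
    ALTER TABLESPACE ts_2010_01 READ ONLY;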

  • One single tablespace for the entire db of 3.3T

    One of the databases I am supporting has a capacity of about 3.3 terabytes, and the application is using only 1 huge tablespace with one big file.
    The system is Linux 4, 32-bit.
    The Oracle version is 10.2.0.4.
    Is there a practical size limit for a tablespace when you consider insert/delete/query performance?
    Thanks,
    Chau

    It really depends on how your storage has been set up. Besides the lack of parallel backup ability that another user pointed out, there shouldn't be any other major performance impact solely because of a big tablespace. A bigfile tablespace alone doesn't cause performance problems; it's only a problem if your setup is wrong, for example placing this file on a system that doesn't support striping.
    Since you have a 3.3 TB tablespace with one single datafile, that means you must have a bigfile tablespace, which supports only one datafile.
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    That is, in the event of a media crash, the damage might affect only one or two small files in a traditional setup, but in your case you would need to restore the whole big file.
    Some more information about big file tablespace here,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01102
    and here
    Considerations with Bigfile Tablespaces
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/physical.htm#sthref489
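    For completeness, a small sketch of how to confirm that the tablespace really is a bigfile tablespace and how large its single datafile is (the tablespace name below is a placeholder):

    SELECT tablespace_name, bigfile
      FROM dba_tablespaces;

    SELECT tablespace_name, file_name,
           ROUND(bytes / 1024 / 1024 / 1024) AS size_gb
      FROM dba_data_files
     WHERE tablespace_name = 'BIG_TS';   -- placeholder name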

  • Tablespaces : 2 Questions

    hello,
    I'm a computer science student from Hamburg.
    We have a course on database design with a focus on the Oracle DBMS.
    In the past I worked for 2 years a bit with Oracle 8i and 9i databases as an administrator, therefore I have a little knowledge.
    Our professor told us:
    1.
    Oracle recommends that a tablespace shouldn't be greater than 10 GB. I asked him if he meant that the tablespace files shouldn't be that large; he told me that he means the actual tablespace, not the tablespace files.
    2.
    My professor also told us that we can create a single table over more than one tablespace. I never heard about that option. I asked him and he told me he has never done it, but it should really be possible, maybe a CREATE TABLE option.
    On both points I had never heard about it, and in my search on the internet I didn't find any relevant information.
    Maybe I will get some answers in this forum :o)
    thanks
    Andreas

    Hi,
    Oracle 10g now supports bigfile tablespaces, which can contain a single very big datafile. See the following abstract from the Oracle documentation:
    Bigfile Tablespaces
    A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. The benefits of bigfile tablespaces are the following:
    A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle Database is limited (usually to 64K files). Therefore, bigfile tablespaces can significantly enhance the storage capacity of an Oracle Database.
    Bigfile tablespaces can reduce the number of datafiles needed for a database. An additional benefit is that the DB_FILES initialization parameter and MAXDATAFILES parameter of the CREATE DATABASE and CREATE CONTROLFILE statements can be adjusted to reduce the amount of SGA space required for datafile information and the size of the control file.
    Bigfile tablespaces simplify database management by providing datafile transparency. SQL syntax for the ALTER TABLESPACE statement lets you perform operations on tablespaces, rather than the underlying individual datafiles.
    Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment-space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace can be bigfile tablespaces even if their segments are manually managed.
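    A minimal sketch (hypothetical names and sizes) of the datafile transparency mentioned above: with a bigfile tablespace you create one large file and can later resize it at the tablespace level.

    CREATE BIGFILE TABLESPACE big_ts
      DATAFILE '/u01/oradata/ORCL/big_ts01.dbf' SIZE 50G
      AUTOEXTEND ON NEXT 10G MAXSIZE 2T;

    -- resize the tablespace without naming the underlying datafile
    ALTER TABLESPACE big_ts RESIZE 100G;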

  • Full 64-bit Aperture 3 Performance Maxed

    All Macs running Snow Leopard that are Core2Duo or better are capable of full 64-bit, but from reading the threads I would say almost nobody is doing this.
    Currently, of all Macs on the market now, only the X-Serve boots into full 64 bit mode by default.
    I guess that this is because Apple assumes a server is typically professionally managed, meaning the operator would have the technical savvy to verify that all required software is 64 bit clean....ALSO servers do not tend to have any excess software on them that is not needed for them to function.
    Stark contrast to the overstuffed user machines cluttered with useless old files, PLUS there is still a lot of current, valid user software that is not 64-bit capable yet, like PS CS4. Knowing this, Apple does not want a lot of complaints from average, non IT professional users about current non-64-bit incompatibilities, and that is understandable. This makes Apple's default boot mode for Snow Leopard on user machines 32-bit mode. Currently Apple makes users DELIBERATELY select full 64-bit mode at boot... and in doing so assumes that these hopefully savvy users know the possible consequences.
    Now, when running in the default 32-bit mode, 64-bit apps will operate in either 64-bit or, if so selected in their Info box, 32-bit to maximize compatibility. While running a 64-bit app increases its performance, it delivers only a portion of the performance that running a fully 64-bit machine will.
    If your system has a lot of extensions or old drivers or other associated crapola <(technical term) left over from years of upgrades, you might just want to pass on this suggestion right now...HOWEVER if your system is decently clean, a fresh install of the OS and Aperture 3, you will most likely be fine.
    Aperture 3 running on a full 64-bit machine is a delightful improvement, and in my limited experience to date SMOKIN' fast.
    Now if you are on still on Leopard, congrats, you kept better performance than the early SL adopter folks on Aperture 2...BUT Aperture 3 NEEDS SL to make it sing, so if you are going to move to Aperture 3, UPGRADE to SL 10.6.2 NOW.
    Assuming we are all on SL 10.6.2 now let's talk full 64 bit, and getting all the performance your hardware can deliver.
    OK then...On this clean system, restart and hold both the 6 and 4 keys down during boot. You can then verify the full 64 bit mode by looking in the System Profiler. Select About this Mac... More Info... then click on the Software title header in the left column and in the second to last line you should see:
    " 64-bit Kernel and Extensions: Yes"
    WELCOME to your full 64 bit machine.
    After checking you are in full 64 bit, launch Aperture 3, and Activity Monitor...to monitor performance.
    If all goes well, Aperture should now be SMOKIN fast. I can hold the arrow key down in full screen and D3X .NEFs render almost immediately, smaller files a blink. 6GB on my MBP shows no pageouts running just Aperture 3. An 8 core Mac Pro with lots of RAM will be MUCH faster due to full 64 bit AND the parallel core thread processing.
    Imports with backups to a secondary disk (YAAY!!!) are so fast I cannot believe it, I think it is faster than Photo Mechanic, which is my gold standard for import/ingest speed.
    Now the two finger 6 and 4 key reboot method is only temporary, the next reboot it will revert to 32 bit mode, which is handy at this point in time if you have run into crashing or other problems.
    If you find you have a clean system you can make it boot into 64 bit all the time, but that is part of a larger performance discussion...just try this and see if you are doing better in terms of performance.
    Remember, most plug-ins, etc. are not going to be 64 bit yet...in fact Aperture even displays this in the File:Export... menu where it says (32 bit) next to the names of plug-ins. OBVIOUSLY (I hope) it probably would not be such a great idea to try these in this full 64-bit mode (ya think?). Just enjoy the stuff that does =).
    So...chances are VERY good you probably cannot do all your work in this mode just yet...BUT...if you are limiting tasks for the moment to just Aperture and the OS, like library conversion, or learning the new features, you probably will blow through this much faster than in 32 bit.
    ALSO, while I am talking Aperture performance...
    Aperture also needs really fast storage to see max performance with big libraries (500GB+), and I mean an eSATA host with a striped array. Firewire 800 and lesser technologies are 3-4 times slower on average. Sadly, Only the 17" MBP (and all previous size version MBPs) and the Mac Pro can run an eSATA host. To me this is the single biggest drawback to the iMac, and seems REALLY silly now that you can buy an iMac with an i7 processor with a great graphics card...but only the internal single drive is eSATA, and you are stuck with FW800 for storage.
    Anyway, I am running a Sonnet Tempo Pro Express 34 card in my MBP connected to Sonnet Fusion D500P array with 10TB of disk space, formatted using SoftRAID. My dedicated space for Aperture benchmarks on this setup at about 130MB/sec on average of all tests. Two crucial facts here is that the Sonnet drivers for the card (v2.2.1) are full 64-bit as well as the fact that a 64 bit version of the SoftRAID driver is included with Snow Leopard for users in 64 bit mode, which will allow you to keep existing volumes and access data from the 64 bit version of Snow Leopard.
    Anyway, fellow Aperture 3 adopters I encourage you to give it a shot...and I hope this results in some smiling faces...
    Sincerely,
    K.J. Doyle
    PS No flames please... of course since this is all new ground YMMV... proceed at your own risk, there is a reason Apple is not making this easy right now...'nuff said, I hope.

    Kevin -- is it your understanding that Aperture can and will default to 64 bit performance without need to reboot 6-4?
    Hi Miguel,
    He is correct, Aperture will be running in 64 bit, but the rest of the machine will not, and Aperture is dependent on many other subsystems, storage being #1 given the huge size of libraries. As I said above, I am pretty sure Apple would not recommend you run full 64 because of the other incompatibilities that exist on the machine, it would cause too many complaints from those who do not understand the need for 64 bit clean operation, and that all apps are not there yet.
    HOWEVER, there is a lot more to this than just Aperture's operation, as it is an app that uses many resources.
    There is a very significant difference in operating my eSATA array with 64 bit drivers and a full 64 bit machine, and therefore Aperture runs much faster than it would using the 32 bit drivers for the storage.
    There are other issues as well with greatly improved OS memory operations and other technical issues that impact this as well. The more RAM you have the bigger the improvement. This impacts parallel processing of threads, and again gives more time to Aperture in the bargain.
    Sincerely,
    K.J. Doyle
