Partitioning a large table

I'm in need of some expert advice on how best to partition the following table in Oracle 9i. It's expected to hold billions of records, receiving roughly 10,000,000 new records every hour.
I've tried a composite range-hash partitioning scheme, but it does not appear to be improving the speed of INSERTs. I've created tablespaces named "part0" through "part9", which all happen to be on the same disk. I realize that optimally these would be on separate disks; however, I expected to see a larger improvement over the same table without partitions, which isn't happening. Here is its current definition:
CREATE TABLE sp_device_usage
(id NUMBER(15) NOT NULL,
device_id NUMBER(9) NOT NULL,
datastream_id NUMBER(9) NOT NULL,
timestamp NUMBER(10) NOT NULL,
usage_counter NUMBER(15) NOT NULL,
CONSTRAINT sp_device_usage_pk PRIMARY KEY (id) USING INDEX TABLESPACE cni_index,
CONSTRAINT sp_device_usage_fk1 FOREIGN KEY (device_id) REFERENCES encore_device,
CONSTRAINT sp_device_usage_fk2 FOREIGN KEY (datastream_id) REFERENCES sp_datastream)
-- Range partitions on timestamp, each hash-subpartitioned 10 ways by device_id
PARTITION BY RANGE (timestamp)
SUBPARTITION BY HASH (device_id) SUBPARTITIONS 10 (
PARTITION sp_device_usage_ptn1 VALUES LESS THAN (100000) TABLESPACE part0,
PARTITION sp_device_usage_ptn2 VALUES LESS THAN (200000) TABLESPACE part1,
PARTITION sp_device_usage_ptn3 VALUES LESS THAN (300000) TABLESPACE part2,
PARTITION sp_device_usage_ptn4 VALUES LESS THAN (400000) TABLESPACE part3,
PARTITION sp_device_usage_ptn5 VALUES LESS THAN (500000) TABLESPACE part4,
PARTITION sp_device_usage_ptn6 VALUES LESS THAN (600000) TABLESPACE part5,
PARTITION sp_device_usage_ptn7 VALUES LESS THAN (700000) TABLESPACE part6,
PARTITION sp_device_usage_ptn8 VALUES LESS THAN (800000) TABLESPACE part7,
PARTITION sp_device_usage_ptn9 VALUES LESS THAN (900000) TABLESPACE part8,
PARTITION sp_device_usage_ptn10 VALUES LESS THAN (MAXVALUE) TABLESPACE part9);

CREATE INDEX sp_device_usage_idx1 ON sp_device_usage (device_id) TABLESPACE cni_index;

CREATE INDEX sp_device_usage_idx2 ON sp_device_usage (datastream_id) TABLESPACE cni_index;

CREATE SEQUENCE sp_device_usage_seq START WITH 1000 INCREMENT BY 1 MINVALUE 1 NOCACHE NOCYCLE NOORDER;

INSERT operations into a partitioned table won't necessarily be any faster than INSERT operations into a non-partitioned table. For partitioned tables, Oracle has to determine which partition to insert each record into, which is an extra step in the partitioned universe. Since you will, for the most part, be inserting into the same partition, you're basically doing the normal table insert with additional partition-key checking overhead. Partitioning is designed to speed the retrieval of data and, to a lesser extent, update operations, as well as to improve manageability.
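For illustration, here is a minimal sketch of where the partitioning above does pay off (the predicate values are made up). A range filter on the partition key lets Oracle prune the plan down to a single partition, which EXPLAIN PLAN reports as a PARTITION RANGE SINGLE step:

-- Only sp_device_usage_ptn1 (timestamp < 100000) is scanned;
-- the other nine partitions are pruned away.
SELECT device_id, SUM(usage_counter)
FROM sp_device_usage
WHERE timestamp BETWEEN 50000 AND 99999
GROUP BY device_id;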
Justin Cave <[email protected]>
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Partitioning - query on large table v. query accessing several partitions

    Hi,
    We are using partitioning on a large fact table; in deciding on a partitioning strategy, we are looking for advice on queries that have to access several partitions versus queries against one large table.
    What is quicker: a query which accesses a large table, or a query which accesses several partitions to return the same results?
    We need to partition due to size/admin etc., but want to make sure queries which need to access more than one partition are not significantly slower than ones which access a large table by comparison.
    Ones which access just one partition are fine, but some queries have to access several partitions.
    Many Thanks

    Here are your choices stated another way. Is it better to:
    1. Get one week's data by reading one month's data and throwing away 75% of it (assumes partitioning by month)?
    2. Get one week's data by reading three weeks of it and throwing away parts of two weeks (assumes partitioning by week)?
    3. Get one week's data by reading seven daily partitions and not having to throw away any of it (assumes daily partitioning)?
    I have partitioned as frequently as every 5-15 minutes (banking and telecom) and have yet to find a situation where partitions larger than the minimum date range of the majority of queries make sense.
    Anyone can insert data into a table; an extra millisecond per insert is generally irrelevant. What you want to do is optimize reading the data, where that extra millisecond per row, over millions of rows, adds up to measurable time.
    But this is Oracle, so the best answer to your questions is to recommend you not take anyone's advice on this, but rather run some tests with real data, in real-world volumes, with real-world DML and queries.
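    To make option 3 concrete, here is a minimal sketch (the fact table, its columns, and the dates are all hypothetical) of daily range partitioning, where a one-week query reads exactly seven partitions and discards nothing:

        -- Hypothetical daily-partitioned fact table (only two days shown).
        CREATE TABLE sales_fact (
          sale_date DATE NOT NULL,
          amount    NUMBER(12,2)
        )
        PARTITION BY RANGE (sale_date) (
          PARTITION p20070101 VALUES LESS THAN (TO_DATE('2007-01-02','YYYY-MM-DD')),
          PARTITION p20070102 VALUES LESS THAN (TO_DATE('2007-01-03','YYYY-MM-DD')),
          -- ... one partition per day ...
          PARTITION pmax VALUES LESS THAN (MAXVALUE)
        );

        -- A one-week predicate prunes the scan to exactly seven daily partitions.
        SELECT SUM(amount)
        FROM sales_fact
        WHERE sale_date >= TO_DATE('2007-01-01','YYYY-MM-DD')
          AND sale_date <  TO_DATE('2007-01-08','YYYY-MM-DD');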

  • Large table partition

    Hi,
    I have a large table with multiple columns, one of which is a date column, and I would like to partition based on that date column.
    Method 1: assign a new tablespace to each partition. What is the advantage of this?
    Method 2: why not have one big tablespace with multiple datafiles assigned to it?
    Which one is faster for querying results?
    Thanks.

    user633980 wrote:
    I have Oracle's ASM running on 3 15K disks, but querying this big table is taking a while to return results. I thought that range partitioning may help.
    > What other things would you suggest which will decrease the query time?
    There are 2 basic things one can do at DBA/developer level to improve I/O:
    - do less I/O (obviously)
    - do "faster" I/O
    Less I/O is where indexing and partitioning and so on come to the fore. Partitioning can turn a large invoice table containing a year's worth of invoices into 365 partitions (which is like having a bunch of "mini tables", each with its own local indexes). This can make a significant reduction in I/O when wanting to process specific invoices. Partitioning also makes data management easier (enabling one to treat a partition as a physical entity in terms of data exchange, indexing, export/import, etc.).
    How well partitioning will work in your case depends on how well the partitioning is designed (range/list/hash), on which columns, the indexing approach used, how the queries are structured to enable partition pruning (only hitting specific partitions instead of all the data in the partitioned table) and so on.
    Doing I/O faster - well, there's not much you can do to make the actual I/O itself faster. That requires relooking at/redesigning/reconfiguring the storage layer, and this is typically a sysadmin function and not a developer one. What you can do from a developer viewpoint, however, is make I/O faster using parallel processing.
    For example, if a query needs to read a million rows, that can be made faster by using 10 parallel processes, each processing 100,000 rows; see the sketch below. This however requires correctly using Oracle's PQ (Parallel Query) feature and utilising the available I/O capacity correctly. For example, PQ is of little use if the I/O pipes are already being hit at close to full capacity.
    The basic point about tablespaces is that they have no impact on any of this - nor should they, as a tablespace is a logical data management layer.
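    To illustrate the parallel-processing point, a minimal sketch (the invoice table and the degree of 10 are hypothetical; this only helps if spare I/O capacity exists):

        -- Ask Oracle to full-scan the table with 10 parallel query slaves.
        SELECT /*+ FULL(i) PARALLEL(i, 10) */ COUNT(*)
        FROM invoices i
        WHERE invoice_date >= TO_DATE('2009-01-01','YYYY-MM-DD');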

  • Limit on the number of partitions a table can have

    Hi,
    It's been quite some time that I've been thinking of a simple and optimal solution to a performance issue in my application. Benefiting from partitioning the table seems the simplest approach: it involves very minimal changes to my application code and, in addition, keeps the overall application logic 100% intact. However, at times, the number of partitions required can grow to as large as 500 thousand. I know of another application implementing as many as 50 thousand partitions for one of its tables.
    I'm wondering if Oracle recommends or imposes any limit on the number of partitions a table can have. How does that limit, if it exists, vary if each of my table partitions holds only a small amount of data; say, not more than 2 thousand records, each of 1 KB size? What are the important considerations that one needs to keep in mind while creating a huge number of partitions for a table?
    Any inputs on this would be a great help.
    Thanks,
    Aniruddh
    ps: Consider Oracle releases 10g onwards.

    Aniruddh,
    >
    What are the important considerations that one needs to keep in mind while creating the huge number of partitions for a table?
    >
    I doubt you are using partitioning for the right reasons. Please explain how your data is structured and why you need to create so many partitions.
    When creating partitions, you need to consider:
    a) why you need to partition;
    b) how your data is structured;
    c) how your data is indexed;
    d) what kind of queries you will be firing at this table to get the data. Will they use the partition key?
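    For reference, and not from this thread: Oracle's documented logical limit is 1024K - 1 (1,048,575) partitions per table in 10g and later, so 500 thousand is legal but well into the range where data dictionary overhead matters. A quick way to count a table's existing partitions (table name hypothetical):

        SELECT COUNT(*) AS partition_count
        FROM user_tab_partitions
        WHERE table_name = 'MY_BIG_TABLE';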
    Thanks,
    Rajesh.
    Please mark this/any other answer as helpful or answered if it is so. If not, provide additional details/feedback.
    Always try to provide create table and insert statements to help the forum members help you better.

  • Using dbms_metadata to get the definition of a large table

    Hi,
    I have a very large table definition, with partitions and sub-partitions, and I want to create that table in some other DB.
    I am trying the spooled query below, but it's not retrieving the complete definition of the table. Is there any other setting I should follow?
    spool t.sql
    set pagesize 0
    set long 2000000
    select dbms_metadata.get_ddl
    I am using oracle 10g r2
    Thanks,
    Pankaj

    Thank you all for your replies. Here are my observations after the suggested settings:
    With setting
    ==================
    set longchunksize 2000000
    set long 2000000
    set lines 2000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    10819 23419 21635001
    =================
    With setting
    set long 100000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    3452 7326 279269
    Result => None worked for me :(
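    For what it's worth, here is a commonly suggested combination, as a sketch (the table name is a placeholder; the SQLTERMINATOR transform simply appends semicolons so the spooled DDL is runnable):

        SET long 2000000
        SET longchunksize 2000000
        SET pagesize 0
        SET linesize 32767
        SET trimspool ON
        EXEC DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'SQLTERMINATOR', TRUE);
        SELECT DBMS_METADATA.get_ddl('TABLE', 'MY_BIG_TABLE', USER) FROM dual;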

  • Sql server partition parent table and reference not partition child table

     
    Hi,
    I have two tables in SQL Server 2008 R2: a Parent and a Child table.
    The Parent has a datetime column and is partitioned monthly; the Child table just references the Parent using a foreign key relation.
    Is there any problem with a non-partitioned child table referring to a partitioned parent table?
    Thanks,
    Areef

    The tables will need to be offline for the operation. "Offline" here means that you wrap the entire operation in a transaction. Ideally, this transaction would:
    1) Drop the foreign key.
    2) Use ALTER TABLE SWITCH to drop the old data.
    3) Use ALTER PARTITION FUNCTION to drop the old empty partition.
    4) Use ALTER PARTITION FUNCTION to add a new empty partition.
    5) Reapply the foreign keys WITH CHECK.
    All but the last are metadata-only operations (provided that you do them right). To perform the last operation, SQL Server must scan the child table and verify that all keys are present in the parent table. This can take some time for larger tables. A sketch of these steps follows below.
    During the transaction, SQL Server holds Sch-M locks on the tables, which means they are entirely inaccessible, even for queries running with NOLOCK.
    You can avoid the scan by applying the foreign key constraint WITH NOCHECK, but this can have an impact on query plans, as SQL Server will not consider the constraint trusted.
    An alternative which should not be entirely dismissed is to use partitioned views instead. With partitioned views, the foreign keys are not an issue, because each partition is a pair of tables with its own local foreign key.
    As for the second question: it appears to be completely pointless to partition the parent but not the child table. Or does the child table only have rows for a smaller set of the rows in the parent?
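    Here is a minimal T-SQL sketch of those five steps (all object, constraint, and boundary names are hypothetical; it assumes an empty staging table with the same structure as the parent, and a NEXT USED filegroup already designated on the partition scheme):

        BEGIN TRANSACTION;

        -- 1) Drop the foreign key.
        ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

        -- 2) Switch the oldest partition out to a staging table (metadata-only).
        ALTER TABLE dbo.Parent SWITCH PARTITION 1 TO dbo.Parent_Staging;

        -- 3) Remove the now-empty boundary; 4) add a new one for the next month.
        ALTER PARTITION FUNCTION pf_ParentMonthly() MERGE RANGE ('20080101');
        ALTER PARTITION FUNCTION pf_ParentMonthly() SPLIT RANGE ('20090201');

        -- 5) Reapply the foreign key; WITH CHECK forces the scan of dbo.Child.
        ALTER TABLE dbo.Child WITH CHECK
          ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentID)
          REFERENCES dbo.Parent (ParentID);

        COMMIT TRANSACTION;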
    Erland Sommarskog, SQL Server MVP, [email protected]

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's subject (say, a userinfo table with address_street, address_zip, and address_province columns; we can move those into a new table named Address and add a foreign key in userinfo referencing Address). In that situation, splitting the large table into smaller, individual tables means queries that access only a fraction of the data can run faster, because there is less data to scan. The other situation is when the records group easily, for example by a year column storing the product release date; then we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
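    As a minimal sketch of the userinfo/Address split described above (column types are assumptions):

        -- Move the address columns into their own table...
        CREATE TABLE Address (
          address_id       INT IDENTITY(1,1) PRIMARY KEY,
          address_street   NVARCHAR(100),
          address_zip      NVARCHAR(10),
          address_province NVARCHAR(50)
        );

        -- ...and link userinfo back to it with a foreign key.
        ALTER TABLE userinfo ADD address_id INT
          CONSTRAINT FK_userinfo_Address REFERENCES Address (address_id);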
    Allen Li
    TechNet Community Support

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now what we are observing is that the inserts into this table
    - Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceid (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand it that oracle must inspect more branches of the index for uniqueness, and more different physical blocks will be used to store the new index data. There are about 2 million unique sourceId across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB extra RAM per quarter to 6 months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct-path insert /*+ append */ suggested by one of the contributors to this thread. A direct-path load will not reuse any of the free space left by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
    Your delete logic (based on arrival time) will smash your indexes, as you are always deleting from the left-hand side of the index; it means you will have what we call a right-hand index - in other words, the scattering of the index keys per leaf block is almost certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:

        ALTER INDEX indexname COALESCE;

    This coalesce should be investigated as something to do on a regular basis (maybe after each 80% delete). You seem to have several sourceids for one timestamp. If so, you should think about compressing this index:

        CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;

    or

        ALTER INDEX indexname REBUILD COMPRESS;

    You will do it only once. Your index will have a smaller size and may be more efficient than it currently is. The index compression will add extra CPU work during an insert, but it might help improve the overall insert process.
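    A small sketch of how the damage could be measured before and after the COALESCE (index name hypothetical; note that VALIDATE STRUCTURE locks the underlying table while it runs):

        ANALYZE INDEX source_ts_uk VALIDATE STRUCTURE;

        -- INDEX_STATS holds one row describing the index just validated.
        SELECT lf_rows, del_lf_rows, pct_used FROM index_stats;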
    Best Regards
    Mohamed Houri

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • Partitioning Fact Table

    Hi
    In our data warehouse we have a fact table which has grown quite large, around 17 million records. We have been pondering options to partition this table.
    Unfortunately this fact table does not have any date columns; all columns are surrogate keys of dimensions, even for the time dimension. My idea is to partition by range, manually specifying the surrogate key ranges of the time dimension. Is that the way one should proceed?
    The other option is to add a new column to the fact table, populate it with the corresponding "ddmmyyyy" in number format, and use that one to partition the table. If I go with this approach and my reports/queries use the dimension key to join between the fact and dimension, will Oracle still identify which partition to look in? Or MUST I use the partitioned column in the queries for partition pruning to be effective?
    Any thoughts would be useful.
    Regards
    Mahesh

    > if in my reports / queries I use the dimension key to join between the Fact and Dimension, will oracle still identify which partition to look for?
    No, Oracle will not use partition pruning in this case.
    > Or do I MUST use the partitioned column in the queries for partition pruning to be effective.
    Yes, you need to use the partitioned column for partition pruning to be effective; see the sketch below.
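    A minimal sketch of the second option (table, columns, and bounds are hypothetical; note that a YYYYMMDD number orders chronologically, which range partitioning needs, unlike DDMMYYYY):

        CREATE TABLE fact_sales (
          date_key NUMBER(8) NOT NULL,   -- date as YYYYMMDD
          cust_key NUMBER    NOT NULL,
          amount   NUMBER(12,2)
        )
        PARTITION BY RANGE (date_key) (
          PARTITION p200901 VALUES LESS THAN (20090201),
          PARTITION p200902 VALUES LESS THAN (20090301),
          PARTITION pmax    VALUES LESS THAN (MAXVALUE)
        );

        -- The filter is on the partition key itself, so only p200901 is scanned.
        SELECT SUM(amount)
        FROM fact_sales
        WHERE date_key BETWEEN 20090101 AND 20090131;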
    Cheers
    Nawneet

  • Should I partition a table or not???

    I have a table with well over 8 million records, growing by around 15,000 records daily. It is queried through a form (Forms 6i); no updates are done through the form, just SELECTs. The indexes are set up for the two types of queries mostly performed: by a date criterion and by an id criterion. Is indexing good enough? I notice that it is getting harder to get results from date-specific queries. What are the best approaches to ensure the table is optimized for queries? I am not used to working with such a large table. Does partitioning help indexes work more efficiently? Should I run statistics on the whole table? I am not comfortable running statistics on the table when I am not around to monitor the performance of the server. The database runs 24 hours, 7 days a week; there is no real "downtime" to run stats.
    Please advise, I would like to find a best practice approach.

    (small correction in my earlier posting)
    > mostly performed- by a date criteria and an id criteria
    Partitioning does NOT always improve performance. It can degrade performance if you have not partitioned based on the queries that hit the table.
    Be aware of local indexes. You might degrade performance because you may need to probe multiple index partitions.
    Do you query dates and ids by range search or exact search?
    How much data (roughly 10%, 20%, 1%, etc.) do you fetch through the query?
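    On the statistics question, a hedged sketch (not from the reply above; the table name is a placeholder) of a sampled, cascading gather that keeps the load modest on a 24x7 system:

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => USER,
            tabname          => 'BIG_TABLE',
            estimate_percent => 5,       -- sample rather than full-scan
            cascade          => TRUE);   -- gather the index stats too
        END;
        /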

  • Performance of large tables with ADT columns

    Hello,
    We are planning to build a large table (1 billion+ rows) with one of the columns being an Advanced Data Type (ADT) column. The ADT column will be based on a TYPE that will have approximately 250 attributes.
    We are using Oracle 10g R2
    Can you please tell me the following:
    1. How will Oracle store the data in the ADT column?
    2. Will the entire ADT record fit in one block?
    3. Is it still possible to partition a table on an attribute that is part of the ADT?
    4. How will the performance be affected if Oracle does a full table scan of such a table?
    5. How much space will Oracle take, if any, for storing a NULL in an ADT?
    I think we can create indexes on the attribute of the ADT column. Please let me know if this is not true.
    Thanks for your help.

    I agree with D.Morgan that an object type with 250 attributes is doubtful.
    I don't like object tables (tables with "row objects") either.
    But your table is a relational table with an object column ("column object").
    C.J.Date in An introduction to Database Systems (2004, page 885) says:
    "... object/relational systems ... are, or should be, basically just relational systems
    that support the relational domain concept (i.e., types) properly - in other words, true relational systems,
    meaning in particular systems that allow users to define their own types."
    > 1. How will Oracle store the data in the ADT column? ...
    For some answers see:
    “OR(DBMS) or R(DBMS), That is the Question”
    http://www.quest-pipelines.com/pipelines/plsql/tips.htm#OCTOBER
    and (of course):
    "Oracle® Database Application Developer's Guide - Object-Relational Features" 10g Release 2 (10.2)
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14260/adobjadv.htm#i1006903
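    A minimal sketch of a "column object" and an index on one of its attributes (the type and names are hypothetical, and far smaller than 250 attributes):

        CREATE TYPE device_info_t AS OBJECT (
          vendor VARCHAR2(30),
          model  VARCHAR2(30)
        );
        /
        CREATE TABLE devices (
          id   NUMBER PRIMARY KEY,
          info device_info_t
        );

        -- An index on a leaf-level attribute of the column object:
        CREATE INDEX devices_vendor_idx ON devices d (d.info.vendor);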
    Regards,
    Zlatko

  • Gather table stats taking longer for Large tables

    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably be slightly longer (it runs a SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
    Does table size actually matter for stats collection?

    Max wrote:
    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably be slightly longer (it runs a SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
    09:40:05 SQL> desc user_tables
    Name                            Null?    Type
    TABLE_NAME                       NOT NULL VARCHAR2(30)
    TABLESPACE_NAME                        VARCHAR2(30)
    CLUSTER_NAME                             VARCHAR2(30)
    IOT_NAME                             VARCHAR2(30)
    STATUS                              VARCHAR2(8)
    PCT_FREE                             NUMBER
    PCT_USED                             NUMBER
    INI_TRANS                             NUMBER
    MAX_TRANS                             NUMBER
    INITIAL_EXTENT                         NUMBER
    NEXT_EXTENT                             NUMBER
    MIN_EXTENTS                             NUMBER
    MAX_EXTENTS                             NUMBER
    PCT_INCREASE                             NUMBER
    FREELISTS                             NUMBER
    FREELIST_GROUPS                        NUMBER
    LOGGING                             VARCHAR2(3)
    BACKED_UP                             VARCHAR2(1)
    NUM_ROWS                             NUMBER
    BLOCKS                              NUMBER
    EMPTY_BLOCKS                             NUMBER
    AVG_SPACE                             NUMBER
    CHAIN_CNT                             NUMBER
    AVG_ROW_LEN                             NUMBER
    AVG_SPACE_FREELIST_BLOCKS                   NUMBER
    NUM_FREELIST_BLOCKS                        NUMBER
    DEGREE                              VARCHAR2(10)
    INSTANCES                             VARCHAR2(10)
    CACHE                                  VARCHAR2(5)
    TABLE_LOCK                             VARCHAR2(8)
    SAMPLE_SIZE                             NUMBER
    LAST_ANALYZED                             DATE
    PARTITIONED                             VARCHAR2(3)
    IOT_TYPE                             VARCHAR2(12)
    TEMPORARY                             VARCHAR2(1)
    SECONDARY                             VARCHAR2(1)
    NESTED                              VARCHAR2(3)
    BUFFER_POOL                             VARCHAR2(7)
    FLASH_CACHE                             VARCHAR2(7)
    CELL_FLASH_CACHE                        VARCHAR2(7)
    ROW_MOVEMENT                             VARCHAR2(8)
    GLOBAL_STATS                             VARCHAR2(3)
    USER_STATS                             VARCHAR2(3)
    DURATION                             VARCHAR2(15)
    SKIP_CORRUPT                             VARCHAR2(8)
    MONITORING                             VARCHAR2(3)
    CLUSTER_OWNER                             VARCHAR2(30)
    DEPENDENCIES                             VARCHAR2(8)
    COMPRESSION                             VARCHAR2(8)
    COMPRESS_FOR                             VARCHAR2(12)
    DROPPED                             VARCHAR2(3)
    READ_ONLY                             VARCHAR2(3)
    SEGMENT_CREATED                        VARCHAR2(3)
    RESULT_CACHE                             VARCHAR2(7)
    09:40:10 SQL>
    > Does table size actually matter for stats collection?
    yes
    Handle:     Max
    Status Level:     Newbie
    Registered:     Nov 10, 2008
    Total Posts:     155
    Total Questions:     80 (49 unresolved)
    why so many unanswered questions?

  • Two billion record limit for non-partitioned HANA tables?

    Is there a two billion record limit for non-partitioned HANA tables? I've seen discussion on SCN, but can't find any official SAP documentation.

    Hi John,
    Yes, there is a limit for non-partitioned tables in HANA. The first page of the document "SAP HANA Database – Partitioning and Distribution of Large Tables" says:
    "A non-partitioned table cannot store more than 2 billion rows. By using partitioning, this limit may be overcome by distributing the rows to several partitions. Please note, each partition must not contain more than 2 billion rows."
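    As a minimal HANA SQL sketch (hypothetical table), hash partitioning spreads the rows so that the 2-billion-row limit applies per partition rather than to the table as a whole:

        CREATE COLUMN TABLE big_facts (
          id     BIGINT,
          amount DECIMAL(15,2)
        )
        PARTITION BY HASH (id) PARTITIONS 8;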
    Cheers,
    Sarhan.

  • Partitioning the tables

    I have a table that logs an activity; it has an activity date-time column.
    It's a very large table, which I haven't partitioned because I haven't got the archiving requirements.
    My question is: should I blindly create a monthly partition based on the date key, or should I wait for the archiving requirements?
    What are the criteria for partitioning the table?
    Is it the amount of data, or the archiving strategy? Which of these two should I take as the deciding factor to create partitions?
    regards
    raj

    My problem is that I have two tables, A and B.
    A has a column called activity date time, and B is the child table of A.
    I did not create partitions on either table because I did not know the online data requirement.
    Now, for a code move, people are complaining that I did not create monthly partitions before.
    My question is: shouldn't we wait for the online data requirement to come in and then think about partitioning the table?
    Or, rather than waiting, should we blindly create the monthly partitions on activity date time?
    regards
    raj
