Question related to table partitioning

Dear Friends,
I have two tables:
Table 1.
SQL> desc ProcessedData_Details
Name Null? Type
VERSIONNO NOT NULL FLOAT(126)
VARCHAR_VALUE VARCHAR2(255)
NUMERIC_VALUE FLOAT(126)
DATE_VALUE DATE
BOOLEAN_VALUE VARCHAR2(10)
REPORTINGDATE_ID NOT NULL NUMBER
SYSTEMDEAL_ID NUMBER
OUTPUT_TEMPLATE_ID NUMBER
SECTION_ID NUMBER
DATAPOINT_ID NUMBER
PROCESSDATA_ID NOT NULL NUMBER
ROW_ID NUMBER
USER_ID NUMBER
ROLE_ID NUMBER
IS_STORE NOT NULL NUMBER
Table 2.
SQL> desc Reporting_Date
Name Null? Type
REPORTINGDATE_ID NOT NULL NUMBER(10)
REPORTINGDATE DATE
CURRENTPERIOD NUMBER(38)
Both tables are joined using 'REPORTINGDATE_ID'. Now I have to partition Table 1 based on the column 'REPORTINGDATE' of Table 2. How can this be done?
Edited by: Santosh Kumar on Dec 12, 2008 2:35 PM
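One possibility worth considering (a sketch only, not a tested answer: it assumes Oracle 11g or later and takes the column definitions from the DESC output above) is reference partitioning: range-partition Reporting_Date on REPORTINGDATE and let ProcessedData_Details inherit those partitions through its foreign key on REPORTINGDATE_ID. On earlier releases, the usual alternative is to copy REPORTINGDATE into Table 1 and range-partition on that copy.
-- Parent table, partitioned on REPORTINGDATE
CREATE TABLE Reporting_Date
( REPORTINGDATE_ID NUMBER(10) NOT NULL PRIMARY KEY,
  REPORTINGDATE    DATE,
  CURRENTPERIOD    NUMBER(38)
)
PARTITION BY RANGE (REPORTINGDATE)
( PARTITION rd_2008 VALUES LESS THAN (DATE '2009-01-01'),
  PARTITION rd_2009 VALUES LESS THAN (DATE '2010-01-01')
);
-- Child table, inheriting the parent's partitioning through the FK (reference partitioning)
CREATE TABLE ProcessedData_Details
( VERSIONNO        FLOAT(126) NOT NULL,
  REPORTINGDATE_ID NUMBER     NOT NULL,
  PROCESSDATA_ID   NUMBER     NOT NULL,
  IS_STORE         NUMBER     NOT NULL,
  -- ... remaining nullable columns as in the DESC output ...
  CONSTRAINT pdd_rd_fk FOREIGN KEY (REPORTINGDATE_ID)
    REFERENCES Reporting_Date (REPORTINGDATE_ID)
)
PARTITION BY REFERENCE (pdd_rd_fk);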

Updates to the status column are frequent, given the rate of data inflow.
The status column has different values such as new, active, inactive, etc.
The entity in question, a workflow record, moves through different states.
The table could be partitioned on the status field.
When a workflow moves to a new state, a new row is inserted and the old row is updated with a different status, hence row movement across partitions.
For example:
workflow_id=100 (internal id)
workflow_parent=WF_XXXX01
status=new
The next time there is a status change for workflow WF_XXXX01:
workflow_id=101
workflow_parent=WF_XXXX01
status=active
etc
If the PK is to be enforced by a local index, then it would have to be workflow_id + status.
Possible query patterns:
select * from workflow where workflow_id=id_val and status='new'
select * from workflow where <some_other_criteria>
select * from workflow where status='new'
select * from workflow where status='active'
The only advantage I see here is keeping separate partitions for each status to group records, e.g. instead of keeping a separate entity for the inactive records.
If the predicate includes the status field, only the corresponding partition needs to be touched; with a local index there can be performance benefits.
But a global index should be fine if there is a limited number of partitions, the queries retrieve only a small number of rows, and response time is critical.
Perhaps after some time, as part of purging, the inactive partitions can be dropped; if there are only local indexes, then there is no partition maintenance overhead.
But considering the overall overhead, in terms of row contention and related issues,
would you suggest partitioning for an OLTP system?
thanks,
charles
Edited by: user570138 on 2-nov-2010 8:10
Edited by: user570138 on 2-nov-2010 8:18
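If the list-partitioning-on-status idea were tried, the table would need ENABLE ROW MOVEMENT so the update from 'new' to 'active' can relocate the row, and the PK could be enforced with a LOCAL index because status is part of it. A minimal sketch (column names are guesses from the example above; whether this pays off for an OLTP workload is exactly the open question):
create table workflow
( workflow_id     NUMBER        not null,
  workflow_parent VARCHAR2(30)  not null,
  status          VARCHAR2(10)  not null
)
partition by list (status)
( partition p_new      values ('new'),
  partition p_active   values ('active'),
  partition p_inactive values ('inactive')
)
enable row movement;
-- a LOCAL index is possible because the partition key (status) is part of the key
alter table workflow
  add constraint workflow_pk primary key (workflow_id, status) using index local;
-- a status change moves the row into another partition
update workflow set status = 'active' where workflow_id = 100;
-- purging: drop the whole partition of inactive rows (local indexes need no maintenance)
alter table workflow drop partition p_inactive;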

Similar Messages

  • A question regarding database table partitioning and table indexes in 10g

    We are considering partitioning a large table in our 10g database in order to improve response time. I believe I understand the various partitioning options, but am wondering about the indexes built over the table. When the table is partitioned, will the indexes also be partitioned "automatically", or do I need to partition the indexes myself?
    Thank you in advance to any and all who respond to this question.

    Hello,
    When you build your partitioned table you just need to create the indexes as LOCAL and they will be partitioned automatically; see the following example:
    CREATE TABLE YY_EVENT
    ( PART_KEY       DATE                           NOT NULL,
      SUBPART_VALUE  NUMBER                             NULL,
      EVENT_NAME     VARCHAR2(30 BYTE)                  NULL,
      EVENT_VALUE    NUMBER                             NULL
    )
    TABLESPACE TEST_DATA
    PARTITION BY RANGE (PART_KEY)
    ( PARTITION Y_EVENT_200901 VALUES LESS THAN (TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TEST_DATA,
      PARTITION Y_EVENT_200902 VALUES LESS THAN (TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TEST_DATA
    );
    -- The LOCAL keyword creates the partitioned index automatically
    CREATE INDEX MY_IDX ON YY_EVENT (EVENT_NAME)
      TABLESPACE TEST_DATA
      LOGGING
      LOCAL;
    Regards
    Edited by: OrionNet on Feb 25, 2009 12:05 PM
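    To confirm that the index came out LOCAL and inherited the table's partitions, the standard data dictionary views can be queried (a small sketch using the object names from the example above):
    -- locality and partitioning type of the index
    SELECT index_name, partitioning_type, locality
      FROM user_part_indexes
     WHERE table_name = 'YY_EVENT';
    -- the index partitions that were created automatically
    SELECT index_name, partition_name, status
      FROM user_ind_partitions
     WHERE index_name = 'MY_IDX';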

  • Question related to table or view privileges?

    Hi,
    I have written a stored procedure under my schema; it queries data from a production table or view. When I run this stored procedure (SP) it gives me an error saying "table or view does not exist".
    But when I try to query the prod table or view outside the SP (i.e. select * from XYZ, the prod table), it runs fine and returns the data.
    My initial thought was that I didn't have the privilege to access the table, but since I can query it outside the SP, my ID does have permission on the table. So why am I getting this error? Any ideas, please?
    Your help is greatly appreciated! Thank you so much!

    Using the default "definer rights" model, the owner of the stored SQL (view, procedure, trigger, method, etc) must have the required privileges granted directly to them and not obtain them via a role.
    The reason is that a role can be disabled in one session while enabled in a different session (for the same user). The database only relies on privileges that are consistent across all sessions. Privileges granted directly WILL be consistent across all sessions.
    Using the "invoker rights" model (AUTHID CURRENT_USER), this direct grant restriction does not apply.
    So, either declare AUTHID CURRENT_USER in your stored SQL header, or grant the privs directly to the owner of the SQL.
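    A minimal sketch of the two options (the names prod.xyz, my_schema and my_proc are placeholders, not from the thread):
    -- Option 1: grant the privilege directly to the procedure owner (not via a role)
    GRANT SELECT ON prod.xyz TO my_schema;
    -- Option 2: compile the procedure with invoker rights
    CREATE OR REPLACE PROCEDURE my_proc
      AUTHID CURRENT_USER
    AS
    BEGIN
      -- runs with the privileges (including roles) of the caller
      FOR r IN (SELECT * FROM prod.xyz) LOOP
        NULL;
      END LOOP;
    END my_proc;
    /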

  • Questions related to table "cache"?

    We plan to "cache" some tables in the SGA or DB_KEEP_CACHE_SIZE. Some questions need clarification:
    1. What is the difference between:
    alter table user.table_name cache;
    alter TABLE USER.TABLE_NAME storage (buffer_pool keep);
    2. If I use either way to cache a table, how can I check that the table is already "cached"?
    Thanks.

    ef8454 wrote:
    We plan to "cache" some tables in the SGA or DB_KEEP_CACHE_SIZE. Some questions need clarification:
    1. What is the difference between:
    alter table user.table_name cache;
    alter TABLE USER.TABLE_NAME storage (buffer_pool keep);
    2. If I use either way to cache a table, how can I check that the table is already "cached"?
    Thanks.
    In Oracle, starting with (I believe) Release 8, the data buffer cache is divided into three pools:
    1) Default pool -- the normal data pool.
    2) Keep pool -- a pool for tables/objects you want to remain in memory for a longer time.
    3) Recycle pool -- a pool for tables/objects that should stay only as long as they are needed, i.e. for a very short duration.
    So when you use the second command with the storage option, you are telling Oracle to keep that data in the Keep pool.
    Now, in all three pools data ages out based on an algorithm known as LRU (least recently used), i.e. data that is accessed is kept near the front of the queue.
    When you specify CACHE with your first statement, you are telling Oracle to keep the blocks at the front of the queue even when they are read via a full table scan. This can apply in any buffer pool (Default/Keep/Recycle).
    Regards
    Anurag
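    To check whether a table has been flagged in either way, the CACHE and BUFFER_POOL columns of USER_TABLES can be queried (a small sketch; 'MY_TABLE' is a placeholder):
    SELECT table_name, cache, buffer_pool
      FROM user_tables
     WHERE table_name = 'MY_TABLE';
    -- CACHE = 'Y'          -> ALTER TABLE ... CACHE was used
    -- BUFFER_POOL = 'KEEP' -> STORAGE (BUFFER_POOL KEEP) was used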

  • Where to find links related to table partition?

    Please,
    I randomly came across a very nice article about partitioning tables, by Arup or someone else, through a link; unfortunately I didn't save it to my favorites.
    Could someone give me links to some good table partitioning articles?
    I'm involved in a project that needs to partition a table, and the article I read last time was very interesting; reading it again would definitely refresh my mind.
    Thank you very much for your help and cooperation

    Searching for a topic on the internet is obvious for any IT person; I have already done some searching without finding the article I read last time.
    I'm sure most of you have already been involved in a partitioning scheme, which is why asking in the forum is an asset.
    Thanks again

  • Question related to tables

    Hi friends,
    Is it possible to find out the changes made to a screen, a table, or a report-type object?
    If it is possible, please tell me how,
    and how can those changes be retrieved?
    Thanks

    Hi,
    Is it a standard table, standard screen, standard tcode?
    Check the tables CDHDR and CDPOS, which capture the changes.
    Search SDN for more on CDHDR and CDPOS.
    E.g. change the values on the screen or table now, then display the tables CDHDR and CDPOS in SE16 for today's date;
    the values will be captured and you can fetch them.
    regards,
    Nazeer

  • Question on creating a table from another table and copying the partition structure

    Dear All,
    I want to know whether there is any way to create a table using another table and have the partitions of the new table be exactly like those of the source table.
    Like
    CREATE TABLE TEST AS SELECT * FROM TEMP;
    The table TEMP is having range and hash partitions.
    Is there any way, when we use the above command, to have the partitioning of TEMP copied to the new table TEST?
    Appreciate your suggestions on this one.
    Thanks,
    Madhu K.

    This may answer your question...
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:595568200346856483
    Ravi Kumar
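    For reference, CREATE TABLE ... AS SELECT does not copy the partitioning scheme. One common workaround (a sketch, not taken from the linked thread) is to extract the source table's DDL with DBMS_METADATA, edit the table name, run it, and then load the data:
    -- 1. extract the full DDL of TEMP, including its range/hash partition clauses
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEMP') FROM dual;
    -- 2. in the generated script, replace the table name TEMP with TEST and run it,
    --    which creates an empty table with identical partitioning
    -- 3. load the data
    INSERT /*+ APPEND */ INTO test SELECT * FROM temp;
    COMMIT;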

  • Index Vs table partition

    I have a table that grows by 1 million rows per month, and the growth may increase in the future. I currently have an index on a column that is frequently used in the WHERE clause. There is another column that contains the month, so it may be possible to create 12 partitions on it. I want to know which approach is suitable: is there any connection between indexes and table partitioning?
    Message was edited by:
    user459835

    I think the question is more about what type of queries are answered by this table.
    Is it that most of the time the results returned span several months?
    Is there any relation between the column you use in the WHERE clause and the data belonging to a particular month (or a range thereof)?
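    To illustrate the connection being hinted at (a sketch with made-up names, not the poster's actual schema): if the table is range-partitioned by month, the index on the frequently filtered column can be created LOCAL, so a query that also constrains the month only has to touch one partition of both the table and the index:
    CREATE TABLE facts
    ( fact_id    NUMBER,
      month_date DATE,        -- the column holding the month
      cust_id    NUMBER       -- the column used in the WHERE clause
    )
    PARTITION BY RANGE (month_date)
    ( PARTITION p_2007_01 VALUES LESS THAN (DATE '2007-02-01'),
      PARTITION p_2007_02 VALUES LESS THAN (DATE '2007-03-01'),
      PARTITION p_rest    VALUES LESS THAN (MAXVALUE)
    );
    -- one index partition per table partition
    CREATE INDEX facts_cust_idx ON facts (cust_id) LOCAL;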

  • Long running table partitioning job

    Dear HANA gurus,
    I've just finished a table partitioning job for CDPOS (change document item), with 4 partitions by hash on 3 columns.
    The total data volume is around 340GB and the table size was 32GB.
    (The migration job was done without disabling change documents, so I am currently deleting data from the table with RSCDOK99.)
    Before partitioning, the size of the table was around 32GB.
    After partitioning, the size has changed to 25GB.
    It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
    (It is the QA DB, so there were few complaints.)
    I suspect I will not be able to do this in the production DB.
    Does anyone have any idea for accelerating this task? (This is the fastest DBMS, HANA!)
    Or do you have any plan for online table partitioning functionality? (To the HANA development team)
    Any comments would be appreciated.
    Cheers,
    - Jason

    Jason,
    looks like we're cross talking here...
    What was your rationale to partition the table in the first place?
           => To reduce the deletion time on CDPOS. (As I mentioned, the data to be deleted is almost 10% of the whole data volume, so I wanted to save deletion time via some benefit of table partitioning, such as partition pruning.)
    Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
    As deletion of data depends heavily on locating the records to be deleted, creating an index would probably have been the better choice.
    Thinking about it... you want to get rid of 10% of your data and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - equally holding their 25% share of the 10% records to be deleted.
    The deletion then should run along these 4 sets of 25% of data.
    It's surely me, but where is the speedup potential here?
    How many unloads happened during the re-partitioning?
           => The table was fully loaded into memory before I partitioned it (from HANA Studio).
    I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
    How do the now longer running SQL statements look like?
           => As I mentioned, select/delete times increased almost twofold.
    That's not what I asked.
    Post the SQL statement text that was taking longer.
    What are the three columns you picked for partitioning?
           => mandant, objectclas, tabname (QA has 2 clients and each of them has nearly the same number of rows in the table)
    Why those? Because these are the primary key?
    I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
    In that case the partition pruning cannot work and all partitions have to be searched.
    How did you come up with 4 partitions? Why not 13, 72 or 213?
           => I thought each partition's size would be 8GB (32GB/4) if they were divided equally (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
    Alright, so basically that was arbitrary.
    Regarding the last comment of your reply: most people would partition their existing large tables to get some benefit from partitioning (just like me). I think your comment applies to newly inserted data.
    Well, not sure what "most people" would do.
    HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
    - Lars
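    To illustrate the pruning point (a sketch with assumed HANA column-store syntax; CDPOS_PART is a made-up name): with HASH partitioning over (MANDT, OBJECTCLAS, TABNAME), pruning only works when the predicate supplies values for all three columns:
    -- 4 hash partitions over the three chosen columns
    CREATE COLUMN TABLE CDPOS_PART
    ( MANDT      NVARCHAR(3),
      OBJECTCLAS NVARCHAR(15),
      TABNAME    NVARCHAR(30),
      VALUE_NEW  NVARCHAR(254)
    )
    PARTITION BY HASH (MANDT, OBJECTCLAS, TABNAME) PARTITIONS 4;
    -- prunable: all three partitioning columns are constrained
    DELETE FROM CDPOS_PART
     WHERE MANDT = '100' AND OBJECTCLAS = 'BELEG' AND TABNAME = 'VBAP';
    -- not prunable: OBJECTCLAS is missing, so all 4 partitions must be searched
    DELETE FROM CDPOS_PART
     WHERE MANDT = '100' AND TABNAME = 'VBAP';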

  • Table Partition on daily basis in oracle 10g

    I want to create partitions based on SYSDATE on a daily basis.
    There will be 8 partitions. Every day data will be loaded into this table, and every day the data that is 8 days old will be truncated.
    CREATE TABLE CUST_WALLET_BALANCE_7DAYS
    ( ID  VARCHAR2(250),
       A_DATE  VARCHAR2(11),
       LAST_PROCESS_DATE DATE,
      DD_OF_PROCESS_DATE  NUMBER(2),
      CONSTRAINT CUST_WALLET_BALANCE_7DAYS_PK PRIMARY KEY (ID,A_DATE))
      PARTITION BY RANGE (DD_OF_PROCESS_DATE)
      ( PARTITION DAY1 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE),'DD'))),
        PARTITION DAY2 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-1),'DD'))),
        PARTITION DAY3 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-2),'DD'))),
        PARTITION DAY4 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-3),'DD'))),
        PARTITION DAY5 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-4),'DD'))),
        PARTITION DAY6 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-5),'DD'))),
        PARTITION DAY7 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-6),'DD'))),
        PARTITION DAY8 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-7),'DD'))));
    This does not work (partition bounds must be constant literals, so SYSDATE cannot be used there), so please suggest a better solution.

    Original thread here: Table Partition on daily basis in oracle 10g
    Please do not start duplicate questions for the same topic.
    Locking this thread
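    For reference, one 10g-compatible pattern (a sketch, not from the original thread; names are illustrative) is to list-partition on a fixed day-slot column and reuse the slots by truncating the oldest partition each day, instead of putting SYSDATE into the partition bounds:
    CREATE TABLE cust_wallet_balance_7days
    ( id                VARCHAR2(250),
      a_date            VARCHAR2(11),
      last_process_date DATE,
      day_slot          NUMBER(1),   -- populate as MOD(TO_NUMBER(TO_CHAR(last_process_date,'J')), 8)
      CONSTRAINT cust_wallet_balance_7days_pk PRIMARY KEY (id, a_date)
    )
    PARTITION BY LIST (day_slot)
    ( PARTITION slot0 VALUES (0),
      PARTITION slot1 VALUES (1),
      PARTITION slot2 VALUES (2),
      PARTITION slot3 VALUES (3),
      PARTITION slot4 VALUES (4),
      PARTITION slot5 VALUES (5),
      PARTITION slot6 VALUES (6),
      PARTITION slot7 VALUES (7)
    );
    -- daily purge: empty the slot that is about to be reused
    -- (today's slot = MOD(TO_NUMBER(TO_CHAR(SYSDATE,'J')), 8); slot0 shown as an example)
    ALTER TABLE cust_wallet_balance_7days TRUNCATE PARTITION slot0 UPDATE GLOBAL INDEXES;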

  • Table Partitioning in Oracle 9i

    Hi all,
    I have a question on partitioning in Oracle 9i.
    I have a parent table with primary key A1 and attribute A2. A2 is not a primary key, but I would like to partition the table based on this attribute. I have a child table with attribute B1 being a foreign key to A1.
    I wish to perform data purging on the parent and child tables. I'll purge the parent table based on A2, but for the child table it will be inefficient if I delete all records in the child table where parent.A1 = child.B1. Should I add a new attribute A2 to the child table and partition the child table based on that attribute, or is there a better way to do it?
    Thanks in advance for all replies.
    Cheers,
    Bernard

    Bernard
    Right 100K in the parent...but how many in the child ?
    I guess it comes back to what I said earlier...you can either take the hit on the cascaded delete to get out the records on the child table or you can denormalise the column down onto the child table in order to partition by it.
    I'm building a Data Warehouse currently and we're using the denormalise approach on a couple of tables in order to allow them to be equipartitioned and enable easier partition management and DML operations as you've indicated....but our tables have 100's of millions of rows in them so we really need to do that for manageability.
    100K records in the parent - provided the ratio to the child is not such that on average each deleted parent has 100's of children - is probably not too onerous, especially for a monthly batch process. The question there would be how much time you have to do this at the end of the month. I'd suggest you set up a quick test and benchmark it with, say, 10K records as a representative sample (you can do all 100K if you have the time/space), then assess that load/time against your month-end window... if it's reasonably quick then there is no need to compromise your design.
    You should also consider whether the 100K is going to remain consistent over time or whether it is going to grow rapidly, in which case that would sway you towards adding the denormalisation-for-partitioning approach at the outset.
    HTH
    Jeff
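    A minimal sketch of the denormalisation approach described above (generic names; 9i syntax, so no reference partitioning): copy A2 onto the child and range-partition both tables on it, so the purge becomes a partition-level operation on the child:
    CREATE TABLE parent_t
    ( a1 NUMBER PRIMARY KEY,
      a2 DATE   NOT NULL
    )
    PARTITION BY RANGE (a2)
    ( PARTITION p_2003q1 VALUES LESS THAN (TO_DATE('01-04-2003','DD-MM-YYYY')),
      PARTITION p_2003q2 VALUES LESS THAN (TO_DATE('01-07-2003','DD-MM-YYYY'))
    );
    CREATE TABLE child_t
    ( b0 NUMBER PRIMARY KEY,
      b1 NUMBER NOT NULL REFERENCES parent_t (a1),
      a2 DATE   NOT NULL      -- denormalised copy of parent_t.a2
    )
    PARTITION BY RANGE (a2)
    ( PARTITION p_2003q1 VALUES LESS THAN (TO_DATE('01-04-2003','DD-MM-YYYY')),
      PARTITION p_2003q2 VALUES LESS THAN (TO_DATE('01-07-2003','DD-MM-YYYY'))
    );
    -- purge one period: drop the child partition, then remove the matching parents
    -- (dropping the parent partition directly would require handling the FK first)
    ALTER TABLE child_t DROP PARTITION p_2003q1;
    DELETE FROM parent_t WHERE a2 < TO_DATE('01-04-2003','DD-MM-YYYY');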

  • Introduction of Oracle Table Partitions into PeopleSoft HRMS environment

    I would like to pose a general question and see if anyone has found any published advice / suggestions from PeopleSoft or Oracle on this. I believe that Oracle table partitioning isn't supported through the PeopleTools Application Designer functionality, most likely for platform independence. However, we are thinking about implementing table partitioning for performance and for the ability to refresh test instances with subsets of data instead of the entire database.
    I know that this would be a substantial effort, but I was wondering if anyone had any documentation on this type of implementation. I've read some articles from David Kurtz on the subject, and it sounds like these were all custom jobs for each individual client. I was looking for something more generic on this practice from PeopleSoft or Oracle...
    Regards,
    Jay

    Thanks for the article Nicolas. I will add that to my collection, good reference piece.
    I think you grasped the gist of the query: I know that putting partitioning into a PeopleSoft application is going to be highly specific to the client and the application you are running. What I was looking for was something like a baseline guide for implementing partitioning in a PeopleSoft application as a whole.
    In other words, something like a notification that the Application Designer panels would be affected since they don't have the ability to manage partitions; therefore, any changes to tables that use partitioning would need to be maintained at the database level and could no longer use the DDL generated from PeopleTools Application Designer. Other considerations would be, for example, a list of tables that would be candidates for partitioning based on the application (in my case HRMS), and maybe suggestions on which column should be used for partitioning, etc. All of this is touched on in the article you identified about putting partitioning in at the database level for a generic application.
    Thanks for your help, it is much appreciated...
    Jay

  • Suggestions for table partition for existing tables.

    I have a table as below. It contains a huge amount of data and has many child tables. I am planning to use 'Reference Partitioning' for it.
    create table PROMOTION_DTL
    ( PROMO_ID              NUMBER(10) not null,
      EVENT                 VARCHAR2(6),
      PROMO_START_DATE      TIMESTAMP(6),
      PROMO_END_DATE        TIMESTAMP(6),
      PROMO_COST_START_DATE TIMESTAMP(6),
      EVENT_CUT_OFF_DATE    TIMESTAMP(6),
      REMARKS               VARCHAR2(75),
      CREATE_BY             VARCHAR2(50),
      CREATE_DATE           TIMESTAMP(6),
      UPDATE_BY             VARCHAR2(50),
      UPDATE_DATE           TIMESTAMP(6)
    );
    alter table PROMOTION_DTL
      add constraint PROMOTION_DTL_PK primary key (PROMO_ID);
    alter table PROMOTION_DTL
      add constraint PROMO_EVENT_FK foreign key (EVENT)
      references SP_PROMO_EVENT_MST (EVENT);
    -- Create/Recreate indexes
    create index PROMOTION_IDX1 on PROMOTION_DTL (PROMO_ID, EVENT);
    create unique index PROMOTION_PK on PROMOTION_DTL (PROMO_ID);
    -- Grant/Revoke object privileges
    grant select, insert, update, delete on PROMOTION_DTL to SCHEMA_RW_ROLE;
    I would like to partition this table. Most of the queries contain the following conditions:
    promo_end_date >= SYSDATE
    and
    (event = :input_event OR
     (:input_Start_Date <= promo_end_date
      AND promo_start_date <= :input_End_Date))
    At any time a promotion can be closed by updating the PROMO_END_DATE.
    Interval partitioning on PROMO_END_DATE is not possible as PROMO_END_DATE is a nullable and updatable field.
    I am new to table partitioning.
    Any suggestions are welcome...

    DO NOT POST THE SAME QUESTION IN MULTIPLE FORUMS PLEASE!
    Suggestions for table partition of existing tables

  • Question related to combining rows...

    Hi,
    I have a question related to combining rows...
    From our typical tables.... Dept and Emp.
    I want a result set like....
    Dept# | Employees
    10 | <Emp1>, <Emp5>, <Emp6>
    20 | <Emp7>, <Emp2>, <Emp8>, <Emp9>
    30 | <Emp10>, <Emp11>
    40 | <Emp12>
    Please give me the query...
    Thanks
    Abdul.

    How about a solution that looks like this?
    create or replace
    function fnc_concat_data(p_query VARCHAR2, p_id NUMBER) RETURN VARCHAR2
    as
      type res_tab is table of varchar2(20);
      result_tab res_tab;
      v_retval   varchar2(256);
    begin
      -- run the passed-in query for the given id and collect the values
      execute immediate p_query || p_id bulk collect into result_tab;
      -- build a comma-separated string from the collected values
      for i in 1..result_tab.count loop
        v_retval := v_retval || ',' || result_tab(i);
      end loop;
      v_retval := substr(v_retval, 2);  -- drop the leading comma
      return (v_retval);
    exception
      when others then
        return ('Error');
    end fnc_concat_data;
    /
    sql> select deptno, fnc_concat_data('select ename from emp where deptno=', deptno) employees from emp group by deptno;
    deptno employees
    30     ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES
    20     SMITH,JONES,SCOTT,ADAMS,FORD
    10     CLARK,KING,MILLER
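    On 11gR2 and later the same result can be produced without a helper function, using the built-in LISTAGG aggregate (a sketch against the standard SCOTT demo tables):
    select deptno,
           listagg(ename, ',') within group (order by ename) as employees
      from emp
     group by deptno;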

  • Foreign keys at the table partition level

    Anyone know how to create and / or disable a foreign key at the table partition level? I am using Oracle 11.1.0.7.0. Any help is greatly appreciated.

    Hmmm. I was under the impression that Oracle usually ignores indices on columns with mostly unique and semi-unique values and prefers to do full-table scans instead on the (questionable) theory that it takes almost as much time to find one or more semi-unique entries in an index with a billion unique values as it does to just scan through three billion fields. Though I tend to classify that design choice in the same category as Microsoft's design decision to start swapping ram out to virtual memory on a PC with a gig of ram and 400 megs of unused physical ram on the twisted theory that it's better to make the user wait while it needlessly thrashes the swapfile NOW than to risk being unable to do it later (apparently, a decision that has its roots in the 4-meg win3.1 era and somehow survived all the way to XP).
