Vertical table partition?

Hi,
Our database contains XML documents in 11 languages, one table per language. In addition, there is meta information about each document that is the same for every language, so this meta information has been placed in a separate table.
Now to the problem: common database queries search in the documents (via interMedia) as well as in the meta data, which means there is a join between the table containing the documents (columns: key, documenttext) and the table containing the meta information (columns: key and 18 others). Unfortunately, after locating documents via the interMedia index, Oracle reads the corresponding data blocks just to extract the key. Because the documents are up to 4K each, Oracle reads a lot of information it does not need. Is it possible to have a vertical partition of the documents table, so that only the key information is read efficiently instead of the whole table rows?
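For illustration, this is the kind of split I mean - the key readable without touching the 4K document text (table names here are just a sketch):
-- current layout: documents(key, documenttext), one wide row per document
-- desired vertical split into two tables joined on key:
CREATE TABLE documents_key (
  key NUMBER PRIMARY KEY
);
CREATE TABLE documents_text (
  key          NUMBER PRIMARY KEY REFERENCES documents_key (key),
  documenttext VARCHAR2(4000)   -- the XML document itself
);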
Thank you for any hint,
Markus

Thanks for your comment, and sorry for the delay; my Internet connection was down yesterday. As you said, we should throw away the existing table structure, and I agree with you, but there is a limitation right now: our application only understands the current structure, and any change to the database would force changes to the execution engine. Our application provides an interface for ad hoc reporting. We would certainly use dimensional modeling, but unfortunately our CTO believes the existing structure is better than a dimensional model: it gives the flexibility to load any kind of data (retail, click stream, etc.) and provide reporting on it. The architecture is bad, but it is how we provide analytics to customers.
To cut a long story short, they have assigned me the task of improving its performance. I have already improved performance using various Oracle features such as table partitioning, subpartitioning, indexing, and more, but we want more. I have observed that during the data loading process, the UPDATE step takes much of the time; if I could do this some other way, it would reduce the overall loading time. We extract files from the client's server, perform transformations according to the business rules, and write the results to a flat file; finally, our application uses SQL*Loader to load that into a flat table and then splits it into different tables. Some of our existing clients have 250 columns, some 350 plus.
I think I have explained a lot, but if you have more questions I will be ready to answer them. Please suggest, from your experience, how I could improve the performance of the existing system.
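For what it's worth, one direction I am considering is replacing the post-load UPDATE step with a single set-based MERGE from the SQL*Loader staging table; a minimal sketch (staging_flat, target_table, and the column names are hypothetical):
MERGE INTO target_table t
USING staging_flat s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1, t.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);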

Similar Messages

  • Getting error while importing a table partition

    Hi,
I am trying to import a table partition from OEM and encountered the following error:
    Job IMPORT000042 has been reopened at Friday, 13 June, 2008 14:44
    Restarting "SYSMAN"."IMPORT000042":
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "SCOTT"."CONTAINER":"PARTITION_5" failed to load/unload and is being skipped due to error:
    ORA-06502: PL/SQL: numeric or value error
    LPX-00210: expected '<' instead of 'n'
    Job "SYSMAN"."IMPORT000042" completed with 1 error(s) at 14:44
    Job state: COMPLETED
    Thanks

    What's the source and target database Oracle version?
    What's the character set of both databases?

  • Index Vs table partition

I have a table that grows by 1 million rows per month, and the rate may increase in the future. I currently have an index on the column that is frequently used in the WHERE clause. There is another column that contains the month, so it would be possible to make 12 partitions on it. I want to know which is suitable. Is there any connection between indexes and table partitions?

I think the question is more about what types of queries this table answers.
Do the results usually span several months?
Is there any relation between the column you use in the WHERE clause and the data belonging to a particular month (or a range thereof)?
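The two are not mutually exclusive, by the way. A minimal sketch of monthly range partitions combined with a LOCAL index on the frequently filtered column (table and column names are illustrative):
CREATE TABLE sales_history
( sale_date  DATE,
  account_id NUMBER,
  amount     NUMBER )
PARTITION BY RANGE (sale_date)
( PARTITION p2008_01 VALUES LESS THAN (TO_DATE('01-FEB-2008','DD-MON-YYYY')),
  PARTITION p2008_02 VALUES LESS THAN (TO_DATE('01-MAR-2008','DD-MON-YYYY')) );
-- a LOCAL index is equipartitioned with the table, so the index and the
-- partitioning scheme cooperate rather than compete
CREATE INDEX sales_hist_acct_ix ON sales_history (account_id) LOCAL;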

  • Report Custom Vertical Table Question

    Hello Guys,
I am trying to display some data in a custom vertical table (for a report). The data that I have looks like this:
Customer number | Name          | Product  | Age
123             | John Customer | Checking | 50
456             | Jane Customer | Savings  | 40
When I display it using a vertical table (or attribute-value pairs) the data looks like:
Customer number: 123
Name: John Customer
Product: Checking
Age: 50
Customer number: 456
Name: Jane Customer
Product: Savings
Age: 40
But I want to display the data like this:
Customer number: 123 456
Name: John Customer Jane Customer
Product: Checking Savings
Age: 50 40
The option here is to change the data to dynamic columns rather than dynamic rows. I tried manipulating the templates but could not get it to work. Any other suggestions?
    Thanks!

    Hi Badri,
    OK - this is what I did (there's no documentation as such, so I'll just give you the step-by-step guide here!):
    1 - In your application, go to Shared Components then Templates
    2 - Click the Create button
    3 - Click the "Report" option
    4 - Select "From Scratch" and click Next
    5 - Enter a name for the new template (for example, "Vertical Report"), leave the Theme as your current theme, set Template Class to "Custom 1", tick the "Named Column (row template)" option and click Create
    This creates a new blank report template - scroll down the list of templates to this new one and click on the name to edit it.
    In there, you need to enter the following:
    Row Template 1 setting:
<td>
<table cellpadding="0" border="0" cellspacing="0" summary="" class="t18Standard" style="border-collapse:collapse">
<tr><td class="t18Data">#1#</td></tr>
<tr><td class="t18Data">#2#</td></tr>
<tr><td class="t18Data">#3#</td></tr>
<tr><td class="t18Data">#4#</td></tr>
</table>
</td>
NOTE: In my example report, I show four columns (EMPNO, ENAME, SAL, COMM) - these are referred to as #1#, #2#, #3# and #4# above (#1# means column number 1, #2# is column 2 and so on - but use the column numbers, not the names, in your template). If you have a different number of columns, add or remove lines of: <tr><td class="t18Data">#nn#</td></tr> (replacing nn with the column number).
    Leave all other settings in the Row Templates section blank
    Before Rows setting:
<table cellpadding="0" border="0" cellspacing="0" summary="" style="border-collapse:collapse;">
<tr>
<td>
<table class="t18Standard" cellpadding="0" border="0" cellspacing="0" summary="" style="border-collapse:collapse;">
<tr><th class="t18ReportHeader">EMPNO</th></tr>
<tr><th class="t18ReportHeader">ENAME</th></tr>
<tr><th class="t18ReportHeader">SAL</th></tr>
<tr><th class="t18ReportHeader">COMM</th></tr>
</table>
</td>
NOTE: You will need one TR tag for each of the column headings - I have four here (EMPNO, ENAME, SAL and COMM) - add or remove lines of <tr><th class="t18ReportHeader">COLUMNNAME</th></tr>
    After Rows setting:
</tr>
</table>
<table>
<tr>
<td>
#PAGINATION#
</td>
</tr>
</table>
Leave all settings in the Pagination section blank.
    NOTE: Any "class" name used above assumes that you are using Theme 18 (t18Standard is the standard report style class, for example) - you will have to update these to match your own theme
    Click Apply Changes to save this
    Now go back to your page and click on the "Report" link for your report region - this takes you to Report Attributes. Scroll down to the "Report Template" setting and change this to your new template. Click Apply Changes to save that.
    When the page is rendered, a new table will be created for the report, the headings will be created within another table in the first TD on that table. All data lines are created as separate tables within new TD tags - this makes the output go across the page instead of down.
    Andy

  • How can I display a ViewObject in 'vertical' table? (11g)

    In our application we have a need to display data that comes from a ViewObject in a 'vertical' table, rather than the standard horizontal layout the <af:table> displays.
    For example, suppose the ViewObject has 3 attributes -- Attr1, Attr2 and Attr3.
    Rather than displaying bound viewObject data in columns that run horizontally such as this:
    (Row1)      Attr1 | Attr2 | Attr3
    (Row2)      Attr1 | Attr2 | Attr3
    (Row3)      Attr1 | Attr2 | Attr3
...we want to display:
    (Row1)
    Attr1
    Attr2
    Attr3
    (Row2)
    Attr1
    Attr2
    Attr3
    (Row3)
    Attr1
    Attr2
    Attr3
    ...Is it possible to achieve this using the af:table tag and monkeying with the css styles or is there another tag suited for this such as the forEach tag?
    Does anyone have experience with this? Thanks in advance

Well, you can use just one <af:column> and put all the attributes under it... you can even try using a panelGroupLayout with layout="vertical":
    <af:table value="#{bindings.Something.collectionModel}"   var="row">
       <af:column>
          <af:panelGroupLayout layout="vertical">
             <af:activeOutputText value="#{row.bindings.column1.inputValue}" id="aot1"/>
             <af:activeOutputText value="#{row.bindings.column2.inputValue}" id="aot2"/>
             <af:activeOutputText value="#{row.bindings.column3.inputValue}" id="aot3"/>
          </af:panelGroupLayout>
       </af:column>
</af:table>
That should definitely work.
    Julian

  • Long running table partitioning job

Dear HANA gurus,
I've just finished a table partitioning job for CDPOS (change document items): 4 partitions by hash on 3 columns.
The total data volume is around 340GB, and the table size was 32GB!
(The migration job was done without disabling change documents, so we are currently deleting data from the table with RSCDOK99.)
Before partitioning, the data volume of the table was around 32GB; after partitioning, the size changed to 25GB.
It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
(It is the QA DB, so there were fewer complaints.)
I suspect I cannot afford to do this in the production DB.
Does anyone have any idea for accelerating this task? (This is the fastest DBMS, HANA!)
Or is there any plan for online table partitioning functionality? (A question for the HANA development team.)
Any comments would be appreciated.
Cheers,
- Jason

    Jason,
it looks like we're talking past each other here...
    What was your rationale to partition the table in the first place?
       => To reduce the deletion time on CDPOS. (As I mentioned, it is almost 10% of the whole data volume, so I wanted to cut the deletion time via the benefits of table partitioning, such as partition pruning.)
    Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
As deletion of data is heavily dependent on locating the records to be deleted, creating an index would probably have been the better choice.
Thinking about it... you want to get rid of 10% of your data, and in order to speed the overall process up, you decide to move 100% of the data into four sets of 25% each - each equally holding its 25% share of the 10% of records to be deleted.
The deletion then has to run along these four sets of 25% of the data.
    It's surely me, but where is the speedup potential here?
    How many unloads happened during the re-partitioning?
       => The table was fully loaded into memory before I partitioned it (from HANA Studio).
    I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
    How do the now longer running SQL statements look like?
       => As I mentioned, selecting/deleting now takes almost twice as long.
    That's not what I asked.
    Post the SQL statement text that was taking longer.
    What are the three columns you picked for partitioning?
       => MANDANT, OBJECTCLAS, TABNAME (QA has 2 clients, and each of them has nearly the same number of rows in the table).
    Why those? Because these are the primary key?
    I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
    In that case the partition pruning cannot work and all partitions have to be searched.
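Roughly, the repartitioning under discussion corresponds to this statement (a sketch in HANA SQL; pruning can only kick in when a query filters on all three hash columns):
ALTER TABLE CDPOS PARTITION BY HASH (MANDANT, OBJECTCLAS, TABNAME) PARTITIONS 4;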
    How did you come up with 4 partitions? Why not 13, 72 or 213?
       => I thought each partition's size would be 8GB (32GB/4) if they were divided equally (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
    Alright, so basically that was arbitrary.
Regarding the last comment of your reply: most people partition their existing large tables to get the benefits of partitioning (just like me). I think your comment applies to newly inserted data.
    Well, not sure what "most people" would do.
HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
    - Lars

  • Automatic table partitioning in Oracle 11g

    Hi All,
I need to implement automatic table partitioning in Oracle 11g, but the partitioning interval should be daily (one partition per day).
    I was able to perform this for Monthly and Yearly but not on daily basis.
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
Table created.
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
Table created.
But if I use DD or DAY instead of YEAR or MONTH, it fails. Please suggest how to perform this on a daily basis.
    SQL>
      1  create table part
      2  (a date)PARTITION BY RANGE (a)
  3  INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
      4  (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
      5* )
    SQL> /
    INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
    ERROR at line 3:
    ORA-14752: Interval expression is not a constant of the correct type
    SQL> create table part
    (a date)PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
    );  2    3    4    5
    INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    ERROR at line 3:
ORA-14752: Interval expression is not a constant of the correct type
Please suggest how to resolve this ORA-14752 error when using DAY, DD, or HH24.
    -Yasser

Yes, use different partitions for different months:
    interval (numtoyminterval(1,'MONTH'))
    store in (TS1,TS2,TS3)
    This code will store data in partitions in tablespaces TS1, TS2, and TS3 in a round robin manner.
For day-wise partitions, yes, you can use:
    INTERVAL (NUMTODSINTERVAL(1,'day')) or
    INTERVAL (NUMTODSINTERVAL(2,'day')) or
    INTERVAL (NUMTODSINTERVAL(3,'day')) or
    INTERVAL (NUMTODSINTERVAL(4,'day')) or
    INTERVAL (NUMTODSINTERVAL(5,'day')) or
    INTERVAL (NUMTODSINTERVAL(n,'day'))
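Put into a complete statement for a daily interval (NUMTODSINTERVAL accepts DAY; NUMTOYMINTERVAL only accepts YEAR and MONTH, which is why DAY/DD raised ORA-14752):
CREATE TABLE part
( a DATE )
PARTITION BY RANGE (a)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
( PARTITION p1 VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);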

  • What is the best way to dynamically create table partition by year and month based on a date column?

    Hi,
I have a huge table and it will keep growing. I have a date column in this table and thought of partitioning the table by year and month. Can anyone suggest a good approach so that partitions are created automatically for new data as well as for the existing data? That is, partitions should be created automatically/dynamically, along with the filegroups and partition files.
    Thanks in advance!
    Palash 

    Also this one
    http://weblogs.sqlteam.com/dang/archive/2008/08/30/Sliding-Window-Table-Partitioning.aspx
Best Regards, Uri Dimant (SQL Server MVP)
http://sqlblog.com/blogs/uri_dimant/
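For reference, the core T-SQL building blocks look like this (monthly boundaries shown; names are illustrative, and the window is extended with SPLIT RANGE as each new month approaches, typically from a scheduled job):
CREATE PARTITION FUNCTION pf_monthly (date)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');
CREATE PARTITION SCHEME ps_monthly
AS PARTITION pf_monthly ALL TO ([PRIMARY]);
-- extend the window as time advances:
ALTER PARTITION SCHEME ps_monthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_monthly() SPLIT RANGE ('2014-04-01');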

  • Table partitioning (intervel partitioning) on existing tables in oracle 11g

Hi, I'm new to table partitioning. I'm using 11g. I have a table of size 32 GB (which has 22 million records) and I want to apply interval partitioning to it. I created an empty partitioned table with the same columns as the source table, and took a dump of the source table to import into the new partitioned table. Can you please suggest how to import the table dump into the new table? Also, is there a better way to do the same?

    Hi,
    imp user/password file=exp.dmp ignore=y
The ignore=y causes the import to skip table creation and continue loading all rows.
Alternatively, you can insert data into the partitioned table with a subquery from the non-partitioned table, such as:
insert into partitioned_table
    select * from original_table;
    Hope it helps,
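For a 22-million-row copy, a direct-path, parallel variant of that insert may be worth testing (a sketch; verify that parallel DML is appropriate on your system first):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO partitioned_table t
SELECT * FROM original_table;
COMMIT;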

  • Table Partition on daily basis in oracle 10g

I want to create partitions based on SYSDATE on a daily basis.
There will be 8 partitions. Every day, data will be loaded into this table, and every day the 8-day-old data will be truncated.
CREATE TABLE CUST_WALLET_BALANCE_7DAYS
( ID  VARCHAR2(250),
   A_DATE  VARCHAR2(11),
   LAST_PROCESS_DATE DATE,
  DD_OF_PROCESS_DATE  NUMBER(2),
  CONSTRAINT CUST_WALLET_BALANCE_7DAYS_PK PRIMARY KEY (ID,A_DATE))
  PARTITION BY RANGE (DD_OF_PROCESS_DATE)
  ( PARTITION DAY1 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE),'DD'))),
    PARTITION DAY2 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-1),'DD'))),
    PARTITION DAY3 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-2),'DD'))),
    PARTITION DAY4 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-3),'DD'))),
    PARTITION DAY5 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-4),'DD'))),
    PARTITION DAY6 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-5),'DD'))),
    PARTITION DAY7 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-6),'DD'))),
    PARTITION DAY8 VALUES LESS THAN (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE-7),'DD')))
  );
This won't work, so please suggest a better solution.
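Partition bound expressions must be literal constants, which is why the SYSDATE-based bounds above are rejected. A minimal sketch of one workable pattern - a rotating set of fixed day buckets, with the bucket value computed at load time (the MOD-8 bucket column is an assumption, not part of the original design):
CREATE TABLE CUST_WALLET_BALANCE_7DAYS
( ID                 VARCHAR2(250),
  A_DATE             VARCHAR2(11),
  LAST_PROCESS_DATE  DATE,
  DAY_BUCKET         NUMBER(1),  -- e.g. MOD(TO_NUMBER(TO_CHAR(LAST_PROCESS_DATE,'J')), 8)
  CONSTRAINT CUST_WALLET_BALANCE_7DAYS_PK PRIMARY KEY (ID, A_DATE))
PARTITION BY LIST (DAY_BUCKET)
( PARTITION DAY0 VALUES (0),
  PARTITION DAY1 VALUES (1),
  PARTITION DAY2 VALUES (2),
  PARTITION DAY3 VALUES (3),
  PARTITION DAY4 VALUES (4),
  PARTITION DAY5 VALUES (5),
  PARTITION DAY6 VALUES (6),
  PARTITION DAY7 VALUES (7)
);
-- each day, empty the oldest bucket before reloading it:
-- ALTER TABLE CUST_WALLET_BALANCE_7DAYS TRUNCATE PARTITION DAY0 UPDATE GLOBAL INDEXES;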

    Original thread here: Table Partition on daily basis in oracle 10g
    Please do not start duplicate questions for the same topic.
    Locking this thread

  • Table Partitioning in Oracle 9i

    Hi all,
    I have a question on partitioning in Oracle 9i.
I have a parent table with primary key A1 and attribute A2. A2 is not a primary key, but I would like to partition the table on this attribute. I have a child table with attribute B1 being a foreign key to A1.
I wish to perform data purging on the parent and child tables. I'll purge the parent table based on A2, but for the child table it would be inefficient to delete all records where parent.A1 = child.B1. Should I add a new attribute A2 to the child table and partition the child table on it, or is there a better way to do it?
    Thanks in advance for all replies.
    Cheers,
    Bernard

    Bernard
    Right 100K in the parent...but how many in the child ?
I guess it comes back to what I said earlier... you can either take the hit on the cascaded delete to get the records out of the child table, or you can denormalise the column down onto the child table in order to partition by it.
I'm building a data warehouse currently and we're using the denormalise approach on a couple of tables in order to allow them to be equipartitioned and enable easier partition management and DML operations, as you've indicated... but our tables have hundreds of millions of rows in them, so we really need to do that for manageability.
100K records in the parent - provided the ratio to the child is not such that on average each deleted parent has hundreds of children - is probably not too onerous, especially for a monthly batch process. The question there would be: how much time do you have to do this at the end of the month? I'd suggest you set up a quick test and benchmark it with, say, 10K records as a representative sample (you can do all 100K if you have time/space), then assess that load/time against your month-end window... if it's reasonably quick, there's no need to compromise your design.
You should also consider whether the 100K will remain consistent over time or whether it will grow rapidly, in which case that would sway you towards adding the denormalisation-for-partitioning approach at the outset.
    HTH
    Jeff
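For concreteness, a sketch of the denormalisation approach described above (A2 copied down to the child so both tables can be range-partitioned on it and purged by dropping partitions; names and boundaries are illustrative, and the foreign key is elided):
CREATE TABLE parent_t
( a1 NUMBER PRIMARY KEY,
  a2 DATE )
PARTITION BY RANGE (a2)
( PARTITION p2003q1 VALUES LESS THAN (TO_DATE('01-APR-2003','DD-MON-YYYY')),
  PARTITION p2003q2 VALUES LESS THAN (TO_DATE('01-JUL-2003','DD-MON-YYYY')) );
CREATE TABLE child_t
( b1 NUMBER,   -- references parent_t(a1); constraint elided for brevity
  a2 DATE )    -- denormalised copy of parent.a2
PARTITION BY RANGE (a2)
( PARTITION p2003q1 VALUES LESS THAN (TO_DATE('01-APR-2003','DD-MON-YYYY')),
  PARTITION p2003q2 VALUES LESS THAN (TO_DATE('01-JUL-2003','DD-MON-YYYY')) );
-- the purge becomes two partition drops instead of a cascaded delete:
-- ALTER TABLE child_t  DROP PARTITION p2003q1;
-- ALTER TABLE parent_t DROP PARTITION p2003q1;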

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across 4 file groups; an empty partition, file group, and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE RIGHT FOR VALUES
(0,
15,
30,
45,
60);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
($(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
([Partition_PK] ASC,
[GUID_PK] ASC)
) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
VALUES
(DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
([Partition_PK] ASC,
[GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script, then run the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data as being in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when splitting ranges.
RANGE LEFT would have retained the data in File Group 3, where it already resided; no INDEX REBUILD would be necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE LEFT FOR VALUES
(-1,
14,
29,
44,
59);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
($(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
([Partition_PK] ASC,
[GUID_PK] ASC)
) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
VALUES
(DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
([Partition_PK] ASC,
[GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; when partitions are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help investigating this.
    NOTE 10/03/2014 - The solution
The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
-- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT sliding windows using multiple file groups, and apologize :-)

    Hi Paul Brewer,
Thanks for your post, and glad to hear that the issue is resolved. It was kind of you to post a reply to share your solution; that way, other community members can benefit from it.
Regards,
Sofiya Li
TechNet Community Support

  • Reg. table partition

    Hi
Could anyone explain table partitioning -
what it is and when it can be used?
    regards
    Sridhar
    [email protected]

    Hi,
Partitioning means you can split up the whole dataset of an InfoCube into smaller, physically independent, and redundancy-free units. Because of this, query performance improves for reporting, and deleting data from the InfoCube is faster as well.
We can partition the ODS/Cube based on 0CALMONTH/0FISCPER.
Double-click on your ODS/Cube.
Extras > Partitioning.
This is called physical partitioning; we normally do this partitioning before loading data into the InfoProvider.
You can even do it after the load, but then you have to move the data.
There is another kind of partitioning as well: logical partitioning.
Example: if we have to maintain 3 years of data,
we keep the 2005 data in one cube, the 2006 data in another, and the 2007 data in a third.
Different cubes for different years... and we group these cubes under a MultiProvider.
    Check the following link regarding partitioning:
    http://help.sap.com/saphelp_nw04/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
    Partitioning

  • Update in table partition by hash

I have a table partitioned by hash. I want to make some updates (not on the partition key column, and the partition key column is not used in the WHERE clause either).
    Do I need to specify partition name in the query?
    For a concrete example:
    CREATE TABLE invoices
    (invoice_id NUMBER NOT NULL,
    customer_id NUMBER NOT NULL,
    invoice_date DATE NOT NULL,
    comments VARCHAR2(500))
    PARTITION BY HASH (customer_id)
PARTITIONS 10;
The primary key is (customer_id, invoice_date).
I need to update invoice_id based on the rowid, like: update invoices set invoice_id = MEMO_ID_1SQ.nextval where rowid = <variable>. I will be running this query from multiple processes, and I want to make sure they hit different partitions so that they actually run in parallel.
    Thanks in advance,
    Radu

    Radu,
    You can use parallel hint on your update statement.
update /*+ PARALLEL(invoices, 4) */ invoices set invoice_id = MEMO_ID_1SQ.nextval where rowid = <variable>;
See the following execution plans, with and without the parallel option:
    SQL> update interfacerecords set status=1 where interfaceid=5;
    200000 rows updated.
    Elapsed: 00:00:03.75
    Execution Plan
    Plan hash value: 3154550297
    | Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |                  |   200K|  3125K|   128   (3)| 00:00:02 |
    |   1 |  UPDATE            | INTERFACERECORDS |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL| INTERFACERECORDS |   200K|  3125K|   128   (3)| 00:00:02 |
    Predicate Information (identified by operation id):
       2 - filter("INTERFACEID"=5)
    Statistics
            262  recursive calls
         458353  db block gets
           1539  consistent gets
            378  physical reads
       91750476  redo size
            655  bytes sent via SQL*Net to client
            585  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
         200000  rows processed
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.00
    SQL> set autotrace traceonly;
    SQL> set timi on;
    SQL> set lines 400;
    SQL> update /*+ PARALLEL(interfacerecords, 4) */ interfacerecords set status=5 where status=1;
    200000 rows updated.
    Elapsed: 00:00:02.48
    Execution Plan
    Plan hash value: 2940696107
    | Id  | Operation             | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | UPDATE STATEMENT      |                  |     1 |    13 |    35   (0)| 00:00:01 |        |   |       |
    |   1 |  UPDATE               | INTERFACERECORDS |       |       |            |          |        |   |       |
    |   2 |   PX COORDINATOR      |                  |       |       |            |          |        |   |       |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000         |     1 |    13 |    35   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |                  |     1 |    13 |    35   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |*  5 |      TABLE ACCESS FULL| INTERFACERECORDS |     1 |    13 |    35   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    Predicate Information (identified by operation id):
       5 - filter("STATUS"=1)
    Statistics
            309  recursive calls
         214609  db block gets
          30287  consistent gets
              0  physical reads
       63356488  redo size
            654  bytes sent via SQL*Net to client
            617  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
        200000  rows processed
Regards
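As an aside, if the goal is for each process to stay inside its own hash partition, partition-extended table names can scope the DML. A sketch (the partition name is a placeholder - hash partition names are system-generated, so look them up in USER_TAB_PARTITIONS; the IS NULL filter is an assumption about how unset invoice_ids look):
UPDATE invoices PARTITION (sys_p1)
SET    invoice_id = MEMO_ID_1SQ.nextval
WHERE  invoice_id IS NULL;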

  • Introduction of Oracle Table Partitions into PeopleSoft HRMS environment

I would like to pose a general question and see if anyone has found any published advice/suggestions from PeopleSoft or Oracle on this. I believe Oracle table partitioning isn't supported through the PeopleTools Application Designer functionality, most likely for platform independence. However, we are thinking about implementing table partitioning for performance and for the ability to refresh test instances with subsets of data instead of the entire database.
    I know that this would be a substantial effort, but was wondering if anyone had any documentation on this type of implemention. I've read some articles from David Kurtz on the subject, and it sounds like these were all custom jobs for each individual client. Was looking for something more generic on this practice from PeopleSoft or Oracle...
    Regards,
    Jay

    Thanks for the article Nicolas. I will add that to my collection, good reference piece.
I think you grasped the gist of the query, which was: I know that putting partitioning into a PeopleSoft application is going to be highly specific to the client and the application you are running. What I was looking for was something like a baseline guide for implementing partitioning in a PeopleSoft application as a whole.
In other words, something like notification that the Application Designer panels would be affected, since they don't have the ability to manage partitions. Therefore, any changes to tables that utilize partitioning would need to be maintained at the database level and could no longer utilize the DDL generated from PeopleTools Application Designer. Another consideration would be a list of tables that are candidates for partitioning based on the application (in my case HRMS), and maybe suggestions on which column should be used for partitioning, etc. All of these are touched on in the article you identified about putting partitioning in at the database level for a generic application.
    Thanks for your help, it is much appreciated...
    Jay
