Should I partition a table or not???

I have a table with well over 8 million records, growing by around 15,000 records daily. It is queried through a form (Forms 6i); no updates are done through the form, just SELECTs. The indexes are set up for the two types of queries mostly performed: by a date criterion and by an id criterion. Is indexing good enough? I notice that it is getting harder to get results from date-specific queries. What is the best approach to ensure the table is optimized for queries? I am not used to working with such a large table. Does partitioning help indexes work more efficiently? Should I run statistics on the whole table? I am not comfortable gathering statistics on the table when I am not around to monitor the performance of the server. The database runs 24 hours a day, 7 days a week; there is no real "downtime" to run stats.
Please advise, I would like to find a best practice approach.

Re: Should I partition a table or not???
Posted: Jun 19, 2007 11:59 AM in response to: user542952
(small correction in my earlier posting)
> mostly performed- by a date criteria and an id criteria
Partitioning does NOT always improve performance. It can degrade performance if you have NOT partitioned based on the queries that hit the table.
Be aware of local indexes: you might degrade performance because you may need to probe multiple index partitions.
Do you query dates and ids by range search or by exact search?
How much data (roughly 10%, 20%, 1%, etc.) do you fetch through the query?
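To illustrate the point about query shape (all object names below are invented for the sketch, not taken from the original post): a date-range layout prunes partitions for date queries, while a LOCAL index on the id column would be probed once per partition; a GLOBAL index avoids that.
CREATE TABLE activity_log (
  rec_id      NUMBER        NOT NULL,
  activity_dt DATE          NOT NULL,
  payload     VARCHAR2(200)
)
PARTITION BY RANGE (activity_dt) (
  PARTITION p_2007_05 VALUES LESS THAN (TO_DATE('01-06-2007','DD-MM-YYYY')),
  PARTITION p_2007_06 VALUES LESS THAN (TO_DATE('01-07-2007','DD-MM-YYYY')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- LOCAL index: date queries touch only the pruned partitions' index segments.
CREATE INDEX activity_log_dt_lx ON activity_log (activity_dt) LOCAL;

-- GLOBAL (non-partitioned) index for the id lookups, so an id query does
-- not have to probe one index partition per table partition.
CREATE INDEX activity_log_id_gx ON activity_log (rec_id);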

Similar Messages

  • Partitioning Fact Tables -- experiences, notes, documentation

    I have gone through section 3.8 of the OBIA Installation and Configuration Guide -- "Partitioning Guidelines for Large Fact Tables".
    Frankly, I find that documentation inadequate and using a poor example.
    I am looking at partitioning W_GL_BALANCE_F. In this table, BALANCE_DT_WID seems to be a partitioning key. With 24 months of data and only month-end balances, I have only 24 distinct keys. Therefore, this would be a LIST partitioning key.
    I can and have rebuilt the table as a partitioned table. And am proceeding with the DAC changes as per the documentation. However, I am looking for real world implementations, documentations, notes, experiences.
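    A minimal sketch of what such a LIST-partitioned build might look like (the column list is abbreviated and the month-end WID values are invented for illustration):
    CREATE TABLE w_gl_balance_f_part (
      balance_dt_wid   NUMBER NOT NULL,
      company_org_wid  NUMBER,
      ledger_wid       NUMBER,
      balance_amt      NUMBER
    )
    PARTITION BY LIST (balance_dt_wid) (
      PARTITION p_201201  VALUES (20120131),
      PARTITION p_201202  VALUES (20120229),
      PARTITION p_default VALUES (DEFAULT)  -- catch-all for unexpected keys
    );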
    Hemant K Chitale

    Thanks.
    Information like BUs, Companies, Ledgers etc. from the source Financials systems are dimensions when extracted. So they go into W_INT_ORG_D and W_LEDGER_D (for example), and the ROW_WIDs generated for ORG_NAME and LEDGER_NAME are the join keys to the fact table (W_GL_BALANCE_F). So these might be partition keys, but we'd have to identify the generated values (ROW_WIDs becoming COMPANY_ORG_WID and LEDGER_WID) before defining the partition keys. Can that be done only after the data is loaded?
    How did you partition by BU ?
    Hemant K Chitale

  • Sql server partition parent table and reference not partition child table

     
    Hi,
    I have two tables in SQL Server 2008 R2, a Parent and a Child table.
    The Parent table has a datetime column and is partitioned monthly; the Child table just refers to the Parent table through a foreign-key relation.
    Is there any problem with a non-partitioned child table referring to a partitioned parent table?
    Thanks,
    Areef

    The tables will need to be offline for the operation. "Offline" here means that you wrap the entire operation in a transaction. Ideally, this transaction would:
    1) Drop the foreign key.
    2) Use ALTER TABLE SWITCH to drop the old data.
    3) Use ALTER PARTITION FUNCTION to drop the old empty partition.
    4) Use ALTER PARTITION FUNCTION to add a new empty partition.
    5) Reapply the foreign keys WITH CHECK.
    All but the last operation are metadata-only operations (provided that you do them right). To perform the last one, SQL Server must scan the child table and verify that all keys are present in the parent table. This can take some time for larger tables.
    During the transaction, SQL Server holds Sch-M locks on the tables, which means that they are entirely inaccessible, even for queries running with NOLOCK.
    You can avoid this scan by applying the foreign-key constraint WITH NOCHECK, but this can have an impact on query plans, as SQL Server will not consider the constraint trusted.
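    A minimal T-SQL sketch of that transaction (table, constraint, and partition-function names are hypothetical, and the boundary dates are invented; the referenced column must of course carry a unique index for the foreign key to be recreated):
    BEGIN TRANSACTION;
    -- 1) Drop the foreign key so the switch is allowed.
    ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;
    -- 2) Switch the oldest partition out to a staging table (metadata-only).
    ALTER TABLE dbo.Parent SWITCH PARTITION 1 TO dbo.Parent_old;
    -- 3) Drop the now-empty boundary.
    ALTER PARTITION FUNCTION pf_monthly() MERGE RANGE ('2008-01-01');
    -- 4) Add a new empty partition for the next month.
    ALTER PARTITION SCHEME ps_monthly NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pf_monthly() SPLIT RANGE ('2008-12-01');
    -- 5) Reapply the foreign key; WITH CHECK forces the validating scan.
    ALTER TABLE dbo.Child WITH CHECK
      ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId)
      REFERENCES dbo.Parent (ParentId);
    COMMIT;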
    An alternative which should not be entirely dismissed is to use partitioned views instead. With partitioned views, the foreign keys are not an issue, because each partition is a pair of tables with its own local foreign key.
    As for the second question: it appears to be completely pointless to partition the parent, but not the child table. Or does the child table only have rows for a smaller set of the rows in the parent?
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Should we have to mention tablespace while we partition a table???

    Do we have to specify our new tablespace when we partition a table?

    You can create a table with multiple partitions and put all of those partitions into one tablespace.
    Take a look at this example:
    http://www.oracle-dba-online.com/sql/oracle_table_partition.htm
    And from Documentation:
    Although you are not required to keep each table or index partition (or subpartition) in a separate tablespace, it is to your advantage to do so. Storing partitions in separate tablespaces enables you to:
    Reduce the possibility of data corruption in multiple partitions
    Back up and recover each partition independently
    Control the mapping of partitions to disk drives (important for balancing I/O load)
    Improve manageability, availability, and performance
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm#sthref2606
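    For example, a minimal sketch of the one-tablespace-per-partition layout the documentation recommends (all names invented for illustration):
    CREATE TABLE orders_part (
      order_id NUMBER,
      order_dt DATE
    )
    PARTITION BY RANGE (order_dt) (
      PARTITION p_2007_q1 VALUES LESS THAN (TO_DATE('01-04-2007','DD-MM-YYYY'))
        TABLESPACE ts_2007_q1,
      PARTITION p_2007_q2 VALUES LESS THAN (TO_DATE('01-07-2007','DD-MM-YYYY'))
        TABLESPACE ts_2007_q2
    );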
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com

  • All rows in table do not qualify for specified partition

    SQL> Alter Table ABC
    2 Exchange Partition P1 With Table XYZ;
    Table altered.
    SQL> Alter Table ABC
    2 Exchange Partition P2 With Table XYZ;
    Exchange Partition P2 With Table XYZ
    ERROR at line 2:
    ORA-14099: all rows in table do not qualify for specified partition
    The exchange partition works correctly the first time. However, when we try to exchange the 2nd partition, it gives the error.
    How do I solve this error?
    How do I find the rows which do not qualify for the specified partition? Is there a query to find out the same?
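    One way to find them (the partition-key column and bounds below are hypothetical; the real bounds are in USER_TAB_PARTITIONS.HIGH_VALUE) is to select the rows in the staging table that fall outside the target partition's range:
    -- Suppose P2 covers posting_date >= 01-02-2013 and < 01-03-2013;
    -- any row outside that range is what ORA-14099 is complaining about.
    SELECT *
      FROM xyz
     WHERE posting_date <  DATE '2013-02-01'
        OR posting_date >= DATE '2013-03-01';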

    stephen.b.fernandes wrote:
    > Is there another way?
    First of all, exchange is a physical operation. It is not possible to append exchanged data. So the solution would be to create the archive table as partitioned and use a non-partitioned intermediate table for the exchange:
    SQL> create table FLX_TIME1
      2  (
      3  ACCOUNT_CODE VARCHAR2(50) not null,
      4  POSTING_DATE DATE not null
      5  ) partition by range(POSTING_DATE) INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
      6  ( partition day0 values less than (TO_DATE('01-12-2012', 'DD-MM-YYYY') ) )
      7  /
    Table created.
    SQL> create index FLX_TIME1_N1 on FLX_TIME1 (POSTING_DATE)
      2  /
    Index created.
    SQL> create table FLX_TIME1_ARCHIVE
      2  (
      3  ACCOUNT_CODE VARCHAR2(50) not null,
      4  POSTING_DATE DATE not null
      5  ) partition by range(POSTING_DATE) INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
      6  ( partition day0 values less than (TO_DATE('01-12-2012', 'DD-MM-YYYY') ) )
      7  /
    Table created.
    SQL> create table FLX_TIME2
      2  (
      3  ACCOUNT_CODE VARCHAR2(50) not null,
      4  POSTING_DATE DATE not null
      5  )
      6  /
    Table created.
    SQL> Declare
      2  days Number;
      3  Begin
      4  FOR days IN 1..50
      5  Loop
      6  insert into FLX_TIME1 values (days,sysdate+days);
      7  End Loop;
      8  commit;
      9  END;
    10  /
    PL/SQL procedure successfully completed.
    SQL> set linesize 132
    SQL> select partition_name,high_value from user_tab_partitions where table_name='FLX_TIME1';
    PARTITION_NAME                 HIGH_VALUE
    DAY0                           TO_DATE(' 2012-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
    SYS_P119                       TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
    SYS_P120                       TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
    Now we need to exchange partition SYS_P119 to FLX_TIME2 and then exchange FLX_TIME2 into FLX_TIME1_ARCHIVE:
    To exchange it with FLX_TIME2:
    SQL> truncate table FLX_TIME2;
    Table truncated.
    SQL> alter table FLX_TIME1 exchange partition SYS_P119 with table FLX_TIME2;
    Table altered.
    To exchange FLX_TIME2 with FLX_TIME1_ARCHIVE we need to create the corresponding partition in FLX_TIME1_ARCHIVE. To do that we use the LOCK TABLE ... PARTITION FOR syntax, supplying a date value of HIGH_VALUE - 1 (the partitioning column is less than HIGH_VALUE, so we subtract 1), and then use the ALTER TABLE ... EXCHANGE PARTITION FOR syntax:
    SQL> lock table FLX_TIME1_ARCHIVE
      2    partition for(TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') - 1)
      3    in share mode;
    Table(s) Locked.
    SQL> alter table FLX_TIME1_ARCHIVE exchange partition
      2    for(TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') - 1)
      3    with table FLX_TIME2;
    Table altered.
    The same way we exchange partition SYS_P120:
    SQL> truncate table FLX_TIME2;
    Table truncated.
    SQL> alter table FLX_TIME1 exchange partition SYS_P120 with table FLX_TIME2;
    Table altered.
    SQL> lock table FLX_TIME1_ARCHIVE
      2    partition for(TO_DATE(' 2013-01-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') - 1)
      3    in share mode;
    Table(s) Locked.
    SQL> alter table FLX_TIME1_ARCHIVE exchange partition
      2    for(TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') - 1)
      3    with table FLX_TIME2;
    Table altered.
    Now:
    SQL> select  count(*)
      2    from  FLX_TIME1 partition(day0)
      3  /
      COUNT(*)
             8
    SQL> select  count(*)
      2    from  FLX_TIME1 partition(sys_p119)
      3  /
      COUNT(*)
             0
    SQL> select  count(*)
      2    from  FLX_TIME1 partition(sys_p120)
      3  /
      COUNT(*)
             0
    SQL> select partition_name from user_tab_partitions where table_name='FLX_TIME1_ARCHIVE';
    PARTITION_NAME
    DAY0
    SYS_P121
    SYS_P122
    SQL> select  count(*)
      2    from  FLX_TIME1_ARCHIVE partition(day0)
      3  /
      COUNT(*)
             0
    SQL> select  count(*)
      2    from  FLX_TIME1_ARCHIVE partition(sys_p121)
      3  /
      COUNT(*)
            31
    SQL> select  count(*)
      2    from  FLX_TIME1_ARCHIVE partition(sys_p122)
      3  /
      COUNT(*)
            11
    SY.

  • BLOB column in own tablespace, in partition, in table, tablespace to be moved

    Hi All,
    First off I am using Oracle Database 11.2.0.2 on AIX 5.3.
    We have a table that is partitioned monthly.
    In this table there is a partition (LOWER); this lower partition is 1.5 TB in size due to a BLOB column called ATTACHMENT.
    The rest of the table is not that big, about 30 GB; it's the BLOB column that is using up all the space.
    The lower partition is in its own default tablespace (DefaultTablespace); the BLOB column in the lower partition is also in its own tablespace (TABLESPACE_LOB) - 1.5 TB.
    I've been asked to free up some space by moving TABLESPACE_LOB (from the lower partition) to an archive database, confirming the data is there, and then removing the lower partition from production.
    I don't have enough free space (or time) to do an expdp; I don't think it's doable with so much data.
    CREATE TABLE tablename (
      xx           VARCHAR2(14 BYTE),
      xx           NUMBER(8),
      xx           NUMBER,
      ATTACHMENT   BLOB,
      xx           DATE,
      xx           VARCHAR2(100 BYTE),
      xx           INTEGER
    )
    LOB (ATTACHMENT) STORE AS (
      TABLESPACE  DefaultTablespace
      ENABLE      STORAGE IN ROW
      NOCOMPRESS
    )
    TABLESPACE DefaultTablespace
    RESULT_CACHE (MODE DEFAULT)
    PARTITION BY RANGE (xx) (
      PARTITION LOWER VALUES LESS THAN ('xx')
        LOGGING
        COMPRESS BASIC
        TABLESPACE DefaultTablespace
        LOB (ATTACHMENT) STORE AS (
          TABLESPACE  TABLESPACE_LOB
          ENABLE      STORAGE IN ROW
    ...>>
    My idea was to take a Data Pump export of the table excluding the column ATTACHMENT, using external tables.
    Then to create the table on the archive database with the column ATTACHMENT.
    Import the data only; from what I understand, if you use a dump file that has too many columns Oracle will handle it. I'm hoping it will work the other way round as well.
    Then, on production, make TABLESPACE_LOB read-only and move it to the new file system.
    This is a bit more complicated than a normal tablespace move due to how the table is split up.
    Any advice would be very much appreciated.
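    For the export step described above, a rough sketch of the external-table unload (the directory object and the non-BLOB column names are hypothetical; only the real table's non-LOB columns would go in the SELECT list):
    -- Unload everything except the ATTACHMENT BLOB through an ORACLE_DATAPUMP
    -- external table; DUMP_DIR is a pre-created directory object.
    CREATE TABLE attach_meta_unload
      ORGANIZATION EXTERNAL (
        TYPE ORACLE_DATAPUMP
        DEFAULT DIRECTORY dump_dir
        LOCATION ('attach_meta.dmp')
      )
    AS
    SELECT doc_ref, doc_num, doc_seq, created_dt  -- hypothetical non-BLOB columns
      FROM tablename PARTITION (lower);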

    JohnWatson wrote:
    If disc space is the problem, would a network mode export/import work for you? I have never tried it with that much data, but theoretically it should work. You could do just a few G at a time.
    I see what you are saying: if we use a network link then no redo would be generated on the export, but it would be for the import, right? But like you said, we could do 100 GB per day for the next ten days and that would be very doable, I think; it would just take a long time. On the archive database we back up archivelogs every morning, so anything generated by the import would be backed up to tape the following morning.
    mtefft wrote:
              Does it contain only that partition? Or are there other partitions in there as well? If there are other partitions, what % of the space is used by the partition you are trying to move?
    Yep, tablespace_lob only contains the LOWER partition, no other partitions.  Just the LOWER partition is taking up 1.5TB.

  • Range Partitioning a table. Max value to be defined

    Hi,
    I am using a range partitioned table, range partitioned on date, and have defined max value as 6 months after the Creation Date.
    I have a proc which creates the partitions I want in advance by splitting up the max partition.
    - Now what do I do when max partition is reached after 6 months?
    - If I define max partition one year or two year after the current date instead of the currently defined 6 months after creation date. What are the negatives attached with it?
    I can't use Interval Partition and have to use Range only.
    Kindly suggest.
    Thanks..

    >
    I am using a range partitioned table, range partitioned on date, and have defined max value as 6 months after the Creation Date.
    I have a proc which creates the partitions I want in advance by splitting up the max partition.
    - Now what do I do when max partition is reached after 6 months?
    - If I define max partition one year or two year after the current date instead of the currently defined 6 months after creation date. What are the negatives attached with it?
    >
    Any data with a partition key that does NOT match any partition will cause your INSERT query to fail.
    Any partition that has no data to match it will simply remain empty.
    A common partitioning scheme is to define one partition for all old data, one partition with a high max value and then split the max value partition to get the partitions you want in the middle.
    Let's say you want monthly partitions but don't have that much data from before the current year, 2012.
    1. Create one partition for dates < 1/1/2012
    2. One partition each for the 12 months of 2012
    3. One max value partition to be 1/1/4000
    You would just split the max value partition to create each month of 2013. The split could be done ahead of time or a month at a time as you choose.
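    For instance (table, partition, and boundary names here are made up for illustration), each monthly split is just:
    ALTER TABLE sales_data
      SPLIT PARTITION p_max AT (DATE '2013-02-01')
      INTO (PARTITION p_2013_01, PARTITION p_max);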
    The only negative is that any data inserted by mistake that has a super-high date will go into the max value partition. But that is going to happen anyway. If you accidentally enter a date of 3/23/3882 it won't be rejected.
    But it is easy to query periodically to see if you have any 'bad' data like that. And the alternative is that an INSERT would fail because of the one bad record and all of your good data would be rejected anyway so it's not really much of a negative.
    Remember: for best manageability and performance, each partition should have its own tablespace, and the indexes should all be local if possible.

  • Experiences of Partitioning FACT tables

    Running BPC 7.0 SP3 for MS
    We have two very large FACT tables (195 million records and 105 million records) and these are currently growing at a rate of 2-5 million records per month; we are running an incremental optimize twice per day.
    It has been suggested that we consider partitioning the tables to improve performance, but I have not been able to find any users/customers with any experience of doing this.
    Specifically:
    1. Does it improve performance?
    2. What additional complexity does it add to regular maintenance?
    3. Have there been any problems encountered implementing partitioned tables?
    4. It would seem that partitioning based on time would make sense - historic data in one partition, current data in another. HOWEVER, many of our reports pull current year and prior year, so will this cause a reporting issue or degrade report performance?

    I don't know if this is still an issue for you. You ask about FACT table partitioning specifically, but you need to be aware that it is possible to partition either the FACT tables or the fact table partition of the cube, or both. We have used (further) partitioning of the fact table partition in the cube with success, and it sounds as if this is what you are really asking about.
    The impacts are on:
    1. Processing time. A full optimize without Compress only processes the partitions that have changed, thereby reducing the run time where there is a lot of unchanged data. You mention that you run incremental updates twice daily; this is currently reprocessing the whole database. I would have expected the lite optimize to be more effective, supported by an overnight full optimize if you have an overnight window. You can also run the lite optimize more frequently.
    2. Query time. The filters defined in the partitions provide a more efficient path to data in the reporting processes than the defaults, which have the potential to scan large parts of the database.
    Partitioning is not a panacea. You need to be specific about the areas of performance problem that you have and choose the performance improvement strategy to address these. Looking at the indexing of the database is also an area where you can improve performance significantly.
    If you partition the cube, it is transparent to the usage of the application, from both the user and admin perspective. The greatest complexity comes in the definition of the partitions in the first place, but this is a normal DBA function. The trick is to ensure that the filter statements do not overlap, otherwise you might get a value duplicated in two partitions, and to define a catch-all partition to include anything not covered by the specific partitions. You should expect to revisit the partitioning from time to time. It is quite straightforward to repartition; you are not doing anything to the underlying data in the FACT tables.
    Time is a common dimension to partition on, and you may partition at different levels of granularity for different periods, e.g. current year by quarter or month, prior and future years by year. This reflects where the most frequent updates will be. It is also possible to define partitions based on combinations of dimensions; we use category and time, so that current-year actuals have the most granular partitions and all historic years' budgets go into a single partition.

  • Partitioning the tables

    I have a table that logs an activity.
    It has an activity date-time column.
    It's a very large table which I haven't partitioned, because I haven't got the archiving requirements.
    My question is: should I blindly create a monthly partition based on the date key, or should I wait for the archiving requirements?
    What is the criterion to partition the table?
    Is it the amount of data, or the archiving strategy? Which of these two should I take as the deciding factor to create partitions?
    regards
    raj

    My problem is that I have two tables, A and B.
    A has a column called activity date time, and B is the child table of A.
    I did not create partitions on either table because I did not know the online data requirement.
    Now, for a code move, people are complaining that I did not create monthly partitions before.
    My question is: shouldn't we wait for the online data requirement to come and then think of partitioning the table?
    Or, rather than waiting, should we blindly create the monthly partitions on activity date time?
    regards
    raj

  • Partition a table

    Hi,
    I would like to know what the benefits of partitioning a table are, because I followed the steps at:
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#i1006754
    (example 1), but I cannot demonstrate any real benefit.
    For example:
    I have a table AVAN with some indexes; after that I made AVAN partitioned with the same indexes. How can I demonstrate that a query executed on the partitioned table performs better?
    If there is a way to show it, of course.
    thanks
    Francesco

    Hi, I've run EXPLAIN PLAN on a partitioned table and on a non-partitioned table (both tables contain the same data).
    The partitioned table only has one extra LOCAL index that the non-partitioned table does not have.
    For the partitioned table:
    SQL> EXPLAIN PLAN SET STATEMENT_ID='PART'
      2  FOR
      3  select * from EVENTI_PM e where e.key_id_evento='50000034' and dt_inserimento = (select min(dt_inserimento) from eventi_pm e2
    where e.key_id_evento=e2.key_id_evento);
    Explained.
    See the plan:
    SQL> SELECT
      2  CARDINALITY,
      3  BYTES,
      4  COST,
      5  POSITION,
      6  PARTITION_START,
      7  PARTITION_STOP,
      8  PARTITION_ID,
      9  lpad(' ',level-1)||OPERATION||' '||OPTIONS||' '||object_name as Plan
    10    FROM PLAN_TABLE a
    11  CONNECT BY prior id = parent_id
    12          AND prior statement_id = statement_id
    13    START WITH id = 0
    14          AND statement_id = 'PART'
    15    ORDER BY id;
    CARDINALITY BYTES COST POSITION PARTITION_START PARTITION_STOP  PARTITION_ID PLAN
              1   144    5        5                                              SELECT STATEMENT
              1   144    5        1                                               NESTED LOOPS
              1    26    3        1                                                VIEW  VW_SQ_1
              1    17    3        1                                                 HASH GROUP BY
              2    34    3        1                                                  FIRST ROW
              2    34    3        1                                                   INDEX RANGE SCAN (MIN/MAX) EVENTO_PK
              1   118    2        2 KEY             KEY                        6   PARTITION HASH ITERATOR
              1   118    2        1 KEY             KEY                        6    TABLE ACCESS BY LOCAL INDEX ROWID EVENTI_P
                                                                                 M
              1          1        1 KEY             KEY                        6     INDEX UNIQUE SCAN INDICE_LOC
    9 rows selected.
    The cost here is 5.
    The same thing for the non-partitioned table:
    SQL> EXPLAIN PLAN SET STATEMENT_ID='NON_PART'
      2  FOR
      3  select * from EVENTI_PM2 e where e.key_id_evento='50000034' and dt_inserimento = (select min(dt_inserimento) from eventi_pm e
    2 where e.key_id_evento=e2.key_id_evento);
    Explained.
    See the plan:
    SQL> SELECT
      2  CARDINALITY,
      3  BYTES,
      4  COST,
      5  POSITION,
      6  PARTITION_START,
      7  PARTITION_STOP,
      8  PARTITION_ID,
      9  lpad(' ',level-1)||OPERATION||' '||OPTIONS||' '||object_name as Plan
    10    FROM PLAN_TABLE a
    11  CONNECT BY prior id = parent_id
    12          AND prior statement_id = statement_id
    13    START WITH id = 0
    14          AND statement_id = 'NON_PART'
    15    ORDER BY id;
    CARDINALITY BYTES COST POSITION PARTITION_START PARTITION_STOP  PARTITION_ID PLAN
              1   144    5        5                                              SELECT STATEMENT
              1   144    5        1                                               NESTED LOOPS
              1    26    3        1                                                VIEW  VW_SQ_1
              1    17    3        1                                                 HASH GROUP BY
              2    34    3        1                                                  FIRST ROW
              2    34    3        1                                                   INDEX RANGE SCAN (MIN/MAX) EVENTO_PK
              1   118    2        2                                                TABLE ACCESS BY INDEX ROWID EVENTI_PM2
              1          1        1                                                 INDEX RANGE SCAN EVENTO_PK2
    8 rows selected.
    But the cost is the same. Why, even though only one partition was used? How can I demonstrate to myself that the query on the partitioned table is better?
    I'm going crazy; it seems partitioning a table is not useful at all. Please help me!

  • Partitioning in table.

    Hello,
    In my company we have an ERP system, in which we have a Finance module. One of its tables keeps the record of the accounting year (e.g. 08-09, 09-10 or 10-11). Now my senior has given me the task of partitioning that table by accounting year. Should I use range partitioning? Give me some idea of what will be the best thing to use here; if you need any more details, you can ask me. Thanks!
    Oracle 10g R2
    Windows server 2005

    The first question that you should ask is why you want to use partitioning.
    What are your goals?
    They can be (from http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/schemaob.htm#CFAGCECI):
    >
    Increased availability
    The unavailability of a partition does not entail the unavailability of the object. The query optimizer automatically removes unreferenced partitions from the query plan so queries are not affected when the partitions are unavailable.
    Easier administration of schema objects
    A partitioned object has pieces that can be managed either collectively or individually. DDL statements can manipulate partitions rather than entire tables or indexes. Thus, you can break up resource-intensive tasks such as rebuilding an index or table. For example, you can move one table partition at a time. If a problem occurs, then only the partition move must be redone, not the table move. Also, dropping a partition avoids executing numerous DELETE statements.
    Reduced contention for shared resources in OLTP systems
    In some OLTP systems, partitions can decrease contention for a shared resource. For example, DML is distributed over many segments rather than one segment.
    Enhanced query performance in data warehouses
    In a data warehouse, partitioning can speed processing of ad hoc queries. For example, a sales table containing a million rows can be partitioned by quarter.
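    Since the accounting years in the original question are a small set of discrete labels, a minimal sketch (table and column names invented for illustration) would be LIST partitioning on the accounting-year column:
    CREATE TABLE fin_ledger (
      acct_year VARCHAR2(5) NOT NULL,
      acct_amt  NUMBER
    )
    PARTITION BY LIST (acct_year) (
      PARTITION p_0809 VALUES ('08-09'),
      PARTITION p_0910 VALUES ('09-10'),
      PARTITION p_1011 VALUES ('10-11')
    );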

  • Partitioning a table with brspace

    Hello,
    we are on SAP 46C and Oracle 10.2.0.2.
    We have a table that is over 200 GB in size; we would like to go ahead with partitioning it and archiving it in many parts.
    My question:
    in the past we used a pure Oracle method with exchange partition. Now we would like to use brspace, but I cannot find the command line to use to partition a table.
    Can you please help me with this issue?
    Thank you

    Orkun Gedik wrote:
    > As far as I know that it is not allowed partitioning on brspace. Instead you can check the note 1333328 - Partitioning Engine for Oracle
    Well, it is in fact allowed, although it takes a small ride through some notes.
    You should know what you are doing, but:
    - Note 104047 (point 60) allows partitioning (with minor restrictions)
    - Note 722188 describes partitioning in general and especially the syntax required in the SQL statements.
    - Note 646681 describes (point 5) how to stop brspace after creating the ddl.sql file, to have the possibility to adjust it as needed.
    I applied hash partitioning on a table once using this method (editing ddl.sql) with an online reorg using brspace, and that was way back in version 9.
    So I think it is still valid, but you should verify the procedure on a QA system first and maybe do a four-eyes check on ddl.sql!
    Volker

  • Partitioned Incremental Table - no stats gathered on new partitions

    Dear Gurus
    Hoping that someone can point me in the right direction to trouble-shoot. Version Enterprise 11.1.0.7 AIX.
    Range partitioned table with hash sub-partitions.
    Automatic stats gather is on.
    dba_tables shows global stats YES analyzed 06/09/2011 (when first analyzed on migration of data) and dba_tab_partitions shows most partitions analyzed at that date and most others up until 10/10/2011 - done by the automatically by the weekend stats_gather scheduled job.
    46 new partitions have been added in the last few months, but no stats have been gathered on them in dba_tab_partitions, and dba_tables last_analyzed says 06/09/2011 - the date it was first analyzed by manually gathering stats rather than using the auto stats gatherer.
    Checked dbms_stats.get_prefs set to incremental and all the default values recommended by Oracle are set including publish = TRUE.
    dba_tab_partitions has no values in num_rows, last_analyzed etc.
    dba_tab_modifications has no values next to the new partitions but shows inserts as being 8 million approx per partition - no deletes or updates.
    dba_tab_statistics has no values next to the new partitions. All other partitions are marked as NO in the stale column.
    Checked the dbms_stats job history - it showed that the stats gathering stopped when the automatically allowed maintenance window closed.
    Looked at Grid Control - the stats gather for the table started at 6am Saturday morning and closed at 2am Monday morning.
    Checked the recommended window - it stopped analyzing that table at exactly 2am, having tried to analyze it since Saturday morning at 6am.
    Had expected that as the table was in incremental mode - it wouldn't have timed out and the new partitions would have been analyzed within the window.
    The job_queue_processes on the database = 1.
    Increased the job_queue_processes on the database = 2.
    Had been told that the original stats had taken 3 days in total to gather, so via Grid Control (10.2.0.4) I scheduled a dbms_scheduler job to gather stats on that table over a bank holiday weekend - but asked management to start it 24 hours earlier to allow extra time.
    The Oracle defaults were accepted (as recommended in various seminars and white papers) - except CASCADE: although I wanted the indexes to be analyzed, I decided that was icing on the cake I couldn't afford.
    Went to work - 24 hours later - checked dba_scheduler_jobs: the job was running. Checked the stats in dba_tab_statistics and the table stats; nothing had changed. I had expected to see partition stats appear for those not gathered first - but a quick check of Grid showed it was doing a SELECT via full table scan - and still on the first datafile!! Some have suggested to watch out for the DELETE taking a long time - but I only saw evidence of the SELECT - so I ran an AWR report - and sure enough, a full table scan on the whole table. Although the weekend gather stats job was also in operation, it wasn't doing my table - but it was definitely running against others.
    So I checked the last_analyzed on other tables - one of them is a partitioned table - and they were getting up-to-date stats. But the tables and partitions are ridiculously small in comparison to the table I was focussed on.
    Next day I came in checked the dba_scheduler_job log and my job had completed within 24 hours and completed successfully.
    Horrors of horrors - none of the stats had changed one bit in any view I looked at.
    I got my excel spreadsheet out - and worked out whether because there was less than 10% changed - and I'd accepted the defaults - that was why there was nothing in the dba_tables to reflect it had last been analyzed when I asked it to.
    My stats roughly worked out showed that they were around the 20% mark - so the gather_table stats should have picked that up and gathered stats for the new partitions? There was nothing in evidence on any views at all.
    I scheduled the job via GRID 10.2.04 for an Oracle database using incremental stats introduced in 11.1.0.7 - is there a problem at that level?
    There are bugs, I understand, with incremental stats gathering on partitioned tables in 11.1.0.7 which are resolved in 11.2.0 - however, we've only applied CPUs up to April of last year - it's possible that as we are so behind we've missed stuff?
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    Thanks for anyone who replies - I'm not online at work so can't always give you my exact commands done - but hopefully you'll give me a few pointers of where to look next?
    Thanks!!!!!!!!!!!!!

    Save the attitude for your friends and family - it isn't appropriate on the forum.
    >
    I did exactly what it said on the tin:
    >
    Maybe 'tin' has some meaning for you but I have never heard of it when discussing
    an Oracle issue or problem and I have been doing this for over 25 years.
    >
    but obviously cannot subscribe to individual names:
    >
    Same with this. No idea what 'subscribe to individual names' means.
    >
    When I said defaults - I really did mean the defaults given by Oracle - not some made up defaults by me - I thought that by putting Oracle in my text - there - would enable people to realise what the defaults were.
    If you are suggesting that in all posts I should spell the Oracle defaults out by name because the gurus on this site do not know them, then please let me know, as I have wrongly assumed that I am asking questions of gurus who know this stuff inside out.
    Clearly I have got this site wrong.
    >
    Yes - you have got this site wrong. Putting 'Oracle' in the text doesn't enable people to realize
    what the defaults in your specific environment are.
    There is not a guru that I know of, and that includes Tom Kyte, Jonathan Lewis and many others, who can tell you, sight unseen, what default values are in play in your specific environment given only the information you provided in your post.
    What is, or isn't, a 'default' can often be changed at either the system or session level.
    Can we make an educated guess about what the default value for a parameter might be?
    Of course - but that IS NOT how you troubleshoot.
    The first rule of troubleshooting is DO NOT MAKE ANY ASSUMPTIONS.
    The second rule is to gather all of the facts possible about the reported problem, its symptoms
    and its possible causes.
    These facts include determining EXACTLY what steps and commands the user performed.
    Next, you post the prototype for GATHER_TABLE_STATS:
    DBMS_STATS.GATHER_TABLE_STATS (
    ownname VARCHAR2,
    tabname VARCHAR2,
    partname VARCHAR2 DEFAULT NULL,
    estimate_percent NUMBER DEFAULT to_estimate_percent_type
    (get_param('ESTIMATE_PERCENT')),
    block_sample BOOLEAN DEFAULT FALSE,
    method_opt VARCHAR2 DEFAULT get_param('METHOD_OPT'),
    degree NUMBER DEFAULT to_degree_type(get_param('DEGREE')),
    granularity VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
    cascade BOOLEAN DEFAULT to_cascade_type(get_param('CASCADE')),
    stattab VARCHAR2 DEFAULT NULL,
    statid VARCHAR2 DEFAULT NULL,
    statown VARCHAR2 DEFAULT NULL,
    no_invalidate BOOLEAN DEFAULT to_no_invalidate_type (
    get_param('NO_INVALIDATE')),
    So what exactly is the value for GRANULARITY? Do you know?
    Well it can make a big difference. If you don't know you need to find out.
    >
    As mentioned earlier - I accepted all the "defaults".
    >
    Saying 'I used the default' only helps WHEN YOU KNOW WHAT THE DEFAULT VALUES ARE!
    Now can we get back to the issue?
    If you had read the excerpt I provided you should have noticed that the values
    used for GRANULARITY and INCREMENTAL have a significant influence on the stats gathered.
    And you should have noticed that the excerpt mentions full table scans exactly like yours.
    So even though you said this
    >
    Had expected that as the table was in incremental mode
    >
    Why did you expect this? You said you used all default values. The excerpt I provided
    says the default value for INCREMENTAL is FALSE. That doesn't jibe with your expectation.
    So did you check to see what INCREMENTAL was set to? Why not? That is part of troubleshooting.
    You form a hypothesis. You gather the facts; one of which is that you are getting a full table
    scan. One of which is you used default settings; one of which is FALSE for INCREMENTAL which,
    according to the excerpt, causes full table scans which matches what you are getting.
    Conclusion? Your expectation is wrong. So now you need to check out why. The first step
    is to query to see what value of INCREMENTAL is being used.
    You also need to check what value of GRANULARITY is being used.
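    For example (owner and table names hypothetical), on 11g you can query the effective preferences directly:
    SELECT dbms_stats.get_prefs('INCREMENTAL', user, 'MY_PART_TABLE') AS incremental,
           dbms_stats.get_prefs('GRANULARITY', user, 'MY_PART_TABLE') AS granularity
      FROM dual;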
    And you say this
    >
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    >
    Yet when I provide an excerpt that seems to match your issue you cop an attitude.
    I gave you a few pointers of where to look next and you fault us for not knowing the default
    values for all parameters for all versions of Oracle for all OSs.
    How disingenuous is that?

  • Short dump 'Table does not exist in database'

    Hello All,
    When a report executes, it goes to a short dump saying 'Table does not exist in database'. As per the short dump analysis, this issue is happening because of the following Native SQL statement:
    Program :  %_T050N0 (This is a dynamic  program generating by SAP )
    Form Name :  DYN_LIC_SEL_TOT
    exec sql performing LOOP_MOVE_WRITE_ISAP.
      select single_plate, itm_num, ctry_code, model_lot,
             lic_hold_flg, qty
        into :dcat-lplate, :dcat-matnr, :dcat-werks,
             :dcat-charg, :dcat-holdflag, :dcat-qty
        from ZLICENSE_R2
       where itm_num   = :p_matnr
         and model_lot = :p_charg
    endexec.
    As per the customer, this issue has been occurring since they migrated the SAP back-end database from Oracle to DB6. I suspected that ZLICENSE_R2 was not migrated from Oracle to DB6, but as per the Basis team, this table was not maintained in Oracle either. If the table was not maintained in Oracle, this issue should have existed even before the migration.
    Following is the short dump details:
    Short text
        Table does not exist in database.
    What happened?
        The table or view name used does not
        exist in the database.
        The error occurred in the current database connection "DEFAULT".
    What can you do?
        Check the spelling of the table names in your report.
        Note down which actions and inputs caused the error.
        To process the problem further, contact you SAP system
        administrator.
        Using Transaction ST22 for ABAP Dump Analysis, you can look
        at and manage termination messages, and you can also
        keep them for a long time.
    Error analysis
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_NATIVE_SQL_ERROR', was not
         caught in
       procedure "DYN_LIC_SEL_TOT" "(FORM)", nor was it propagated by a RAISING
    clause.
    Since the caller of the procedure could not have anticipated that the
    exception would occur, the current program is terminated.
    The reason for the exception is:
    Triggering SQL statement: "select single_plate, itm_num, ctry_code, model_lot,
    lic_hold_flg, qty from ZLICENSE_R2 where itm_num = ? and model_lot = ? "
    Database error code: "-204"
    Could you please  let me know what might be the reason for this issue.
    Many Thanks in Advance.

    Transaction SE11, input ZLICENSE_R2 for table name, and display the table. Did the table display? If not, that is the main problem.
    If the table displays, go to menu item Utilities -> Database Object -> Database Utility
    In the resulting screen, under the "Status" fields, you should see text "Exists in the database." If you don't, then the table exists in the dictionary, but doesn't exist in the database system. Click the "Create database table" button and then you should be able to run the program.
    You may need the Basis team's help to carry out some of these actions.

  • Table can not activate again after changing

    Hi, everybody
    For some reason we created one more field, called EKORG (purchasing organisation), in table LFA1 and marked it as both Key and Initial value. Now we are going to delete this field and activate the table again, but the system shows the following message. Please kindly give us a hand and let me know how to solve this issue in more detail. We used transaction SE14 and tried "Activate and adjust database", but the system shows the same message as below.
    Thank you all.
    <b>Primary key change not permitted for value table LFA1
    Message no. AD300
    Diagnosis
    This table is defined as a check table. For reasons of consistency, changes to the primary key of the table are not allowed.
    Procedure
    If it is essential that you change the primary key, you must delete the relevant foreign keys. Refer to the where-used list to find all tables containing a field that is checked against this table. Delete the foreign keys for these fields.
    If necessary, maintain the deleted foreign keys again.</b>
    Message was edited by:
            Alfred

    Alfred,
    If you can't figure this out you should stop what you are doing and get help from someone who knows how to fix this mess. I'll give you a hint: it isn't Programs... And it might be something to do with tables... Once you've got a list of all tables where LFA1 is used, you will have to check them one by one to see if they use LFA1 as a check table.
    I'd still love to know what possessed you to mess around with the primary key of a standard SAP table. And I'd also love to know why everyone is giving you help and suggestions on how to activate the table whilst no-one seems to care that you are changing a standard SAP primary key. I can't believe you managed to get an access key to change the table in this way and no-one in your company/client questioned what you were doing and why!
    LFA1 is a pretty important table in the SAP system (Vendor Master table) so to mess around with its primary key is utterly ridiculous.
    The only thing you should learn here is that changing SAP standard objects is usually a no-no.  Trying to change the primary key on a standard SAP table is a complete no-no.  There must have been an alternative to whatever it is you are trying to achieve.
    Gareth.
