Very big table to delete :)

Hi all!
I have a tablespace with 6 datafiles of 4 GB each. Lately that tablespace has been growing too fast, so we decided to delete some data from its largest table.
That table has around 10 million records. To delete all records older than 3 months, I run:
delete from TABLE_NAME where dt_start < to_date('09/07/16', 'YY/MM/DD');
After I start the query I can see it in "Sessions" for about 2-3 hours, and then it disappears, but the query still shows an executing status.
What happened to this query? Why did it disappear?

Is there any chance you could partition the table by date so that you could simply drop the older partitions?
What fraction of the data are you trying to delete? If it is a substantial fraction of the table, it is likely more efficient to write the rows you want to keep to a different table, and then either truncate the existing table and move the saved data back, or drop the existing table and rename the table holding the saved data.
Justin
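(A minimal sketch of the keep-and-swap approach described above, reusing the table and predicate from the original post; TABLE_NAME_KEEP is a hypothetical name, and in the drop-and-rename variant indexes, grants, constraints and triggers have to be recreated:)
-- 1. save the rows you want to keep
CREATE TABLE table_name_keep AS
  SELECT * FROM table_name
   WHERE dt_start >= TO_DATE('09/07/16', 'YY/MM/DD');
-- 2a. either truncate and reload ...
TRUNCATE TABLE table_name;
INSERT /*+ APPEND */ INTO table_name SELECT * FROM table_name_keep;
COMMIT;
DROP TABLE table_name_keep;
-- 2b. ... or drop and rename
-- DROP TABLE table_name;
-- RENAME table_name_keep TO table_name;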

Similar Messages

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at a time while all the others are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (insert, update, delete) as well as query (select) operations, all of which happen daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all these 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for master table ID value, and TABLE_NAME for master table name) and create only one index on these 2 columns.
    2- partition the table using its YEAR column, keep all FK columns but drop all indexes on these columns.
    3- partition the table using its YEAR column, and drop all FK columns, create (ID,TABLE_NAME) columns, and create index on (TABLE_NAME,YEAR) columns.
    Which way is more efficient?
    Do I have to keep "master-detail" relations in mind when building Forms on this table?
    Are there any other suggestions?
    I am using Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation and I will try to answer your questions, but please note that I am a developer first and I am new to Oracle database administration, so please forgive me if I make any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No I did not. And if I must do it, must I do it for all database tables or only for this big table?
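    (For reference, a minimal sketch of gathering statistics with DBMS_STATS; BIG_TABLE is a placeholder name, and the exact parameters available depend on the Oracle release:)
    -- one table, including its indexes:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'BIG_TABLE', cascade => TRUE);
    -- or the whole schema:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);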
    Q: Actually, tracing the session with event 10046 at level 8 will give a clearer idea of where your query is waiting.
    A: Actually I do not know what you mean by "10046 level 8".
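    (For reference, event 10046 at level 8 is Oracle's extended SQL trace including wait events; a minimal sketch of switching it on and off in the session to be traced:)
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- run the slow statement, then:
    ALTER SESSION SET EVENTS '10046 trace name context off';
    The resulting trace file in user_dump_dest can be formatted with tkprof.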
    Q: What OS and what kind of server (hardware) are you using?
    A: I am using the Windows 2000 Server operating system; my server has 2 Intel XEON 500MHz CPUs + 2.5GB RAM + 4 * 36GB hard disks (on a RAID 5 controller).
    Q: How many concurrent users do you have and how many transactions per hour?
    A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1000 transactions per hour.
    Q: How fast should your queries be executed?
    A: I want the queries to be executed in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is highly used, there is a very good chance that 2 or more transactions exist at the same time, one performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
    Q: Please show us the explain plan of these queries.
    A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big question because I cannot collect every kind of query that has been written against this table (some of them exist in server packages, and others are performed by Forms or Reports).

  • Improve the performance in stored procedure using sql server 2008 - esp where clause in very big table - Urgent

    Hi,
    I am looking for inputs on tuning a stored procedure in SQL Server 2008. I am new to performance tuning in SQL, PL/SQL and Oracle, and currently facing an issue in a stored procedure: I need to increase performance by code optimization / filtering the records with a WHERE clause on a larger table. The requirement is a stored procedure that generates an audit report, accessed by approx. 10 admin users, typically 2-3 times a day each.
    It has a CTE (common table expression) which is referenced 2 times within the SP. This CTE is very big and fetches records from several tables without a WHERE clause, which causes many records to be fetched from the DB and then processed. The stored procedure runs on a pre-prod server (a virtual server with 6 GB of memory), while the same proc runs well on the prod server (a physical server with 64 GB of RAM; 40 sec). The execution time in pre-prod is 1 min 9 seconds and needs to be reduced to 10 secs or so. The exec time also varies: sometimes it is 50 sec and sometimes 1 min 9 seconds.
    Please advise on the best option/practice for using a WHERE clause to filter the records, and which tool to use to tune the procedure (execution plan, SQL Profiler?). I am using Toad for SQL Server 5.7; I see an execution plan tab available while running the SP, but when I run it, it throws an error. Please help and provide inputs.
    Thanks,
    Viji

    You've asked a SQL Server question in an Oracle forum.  I'm expecting that this will get locked momentarily when a moderator drops by.
    Microsoft has its own forums for SQL Server; you'll have more luck over there. When you do go there, however, you'll almost certainly get more help if you can pare down the problem (or at least better explain what your code is doing). Very few people want to read hundreds of lines of code, guess what it's supposed to do, guess what is slow, and then guess at how to improve things. Posting query plans, the results of profiling, cutting out any code that is unnecessary to the performance problem, etc. will get you much better answers.
    Justin
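    (For what it's worth, a hedged sketch of the usual first step for a CTE that is referenced twice: materialize the filtered rows once into a temp table, so the base tables are scanned once, with a WHERE clause, instead of twice without one. All table, column and variable names here are hypothetical:)
    SELECT t.audit_id, t.audit_date, t.details
    INTO #audit_filtered
    FROM dbo.audit_log t
    WHERE t.audit_date >= @from_date
      AND t.audit_date <  @to_date;   -- filter as early as possible
    -- then reference #audit_filtered in both places the CTE was used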

  • Create very big table

    Hi,
    The call_fact table contains about 300 million rows.
    The exceptions table contains about 150 million rows.
    Both tables have up-to-date statistics.
    The machine has 8 CPUs.
    The statement has already been running for 48 hours.
    Can anyone suggest a faster way to do it?
    create table abc parallel
    as
    select /*+ parallel(t,32) */ *
    from STARQ.CALL_FACT t
    where rowid NOT IN (select /*+ parallel(ex,32) */ row_id
    from starq.exceptions ex );
    The plan is:
    Plan
    CREATE TABLE STATEMENT ALL_ROWS Cost: 1,337,556,446,040
    15 PX COORDINATOR
    14 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ30001 :Q3001 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    13 LOAD AS SELECT PARALLEL_COMBINED_WITH_PARENT :Q3001
    12 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q3001
    11 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q3001 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    10 PX SEND ROUND-ROBIN PARALLEL_FROM_SERIAL SYS.:TQ30000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    9 FILTER
    4 PX COORDINATOR
    3 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ20000 :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832 Partition #: 10 Partitions accessed #1 - #46
    1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STARQ.CALL_FACT :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832 Partition #: 10 Partitions accessed #1 - #46
    8 PX COORDINATOR
    7 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10000 :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1
    6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1
    5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STARQ.EXCEPTIONS :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1

    > When in doubt, I use exists. Here it is clear to me that exists will be faster
    If the row_id column is declared not null, this is not true: exactly the same path is chosen as can be seen below.
    select /* with primary key */ *
      from call_fact t
     where rowid not in
           ( select row_id
               from exceptions ex );
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     1001      0.46       0.46          0      32467          0       15000
    total     1003      0.46       0.47          0      32467          0       15000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 61 
    Rows     Row Source Operation
      15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=600105 us)
      30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120050 us)
      15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=297574 us)(object id 64376)
    select /* with primary key */ *
      from call_fact t
     where not exists
           ( select 'same rowid'
               from exceptions ex
              where ex.row_id = t.rowid );
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     1001      0.51       0.46          0      32467          0       15000
    total     1003      0.51       0.47          0      32467          0       15000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 61 
    Rows     Row Source Operation
      15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=585099 us)
      30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120048 us)
      15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=298198 us)(object id 64376)
    Note that the tables, scaled down to 30,000 and 15,000 rows, are created like this:
    SQL> create table call_fact (col1, col2)
      2  as
      3   select level
      4        , lpad('*',100,'*')
      5     from dual
      6  connect by level <= 30000
      7  /
    Table created.
    SQL> create table exceptions (row_id, col)
      2  as
      3  select rowid
      4       , lpad('*',100,'*')
      5    from call_fact
      6   where mod(col1,2) = 0
      7  /
    Table created.
    SQL> alter table exceptions add constraint ex_pk primary key (row_id)
      2  /
    Table altered.
    SQL> exec dbms_stats.gather_table_stats(user,'call_fact')
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user,'exceptions',cascade=>true)
    PL/SQL procedure successfully completed.
    Without declaring row_id not null, I've tested exists to be definitely much faster, as the not in variant cannot do an antijoin anymore.
    Regards,
    Rob.
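    (For completeness, a minimal sketch of the declaration this depends on, applied to the original poster's table; the primary key in the demo above implies it already:)
    ALTER TABLE starq.exceptions MODIFY (row_id NOT NULL);
    With row_id guaranteed not null, the optimizer can turn the NOT IN into the same anti-join it uses for NOT EXISTS.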

  • Execution of a PL/SQL procedure with CURSOR for big tables

    I have prepared a procedure that uses a CURSOR to run a complex query on tables with a big number of records, something like 900,000. The execution failed with ORA-01652: unable to extend temp segment by 64 in tablespace TEMP.
    Any suggestions?

    This brings us to the following question: how could I calculate the bytes required by a cursor? It is a selection of certain fields of very big tables. Let's say that the fields are NUMBER(4), NUMBER(8) and CHAR(2). The fields are in 2 relational tables of 900,000 rows each. What size is required for a procedure like this?
    Your help is really appreciated.
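    (A hedged sketch for watching temp segment consumption while the procedure runs; V$SORT_USAGE is the classic name of the view, renamed V$TEMPSEG_USAGE in later releases:)
    SELECT s.sid, s.username, u.tablespace,
           u.blocks * t.block_size / 1024 / 1024 AS mb_used
      FROM v$sort_usage u, v$session s, dba_tablespaces t
     WHERE s.saddr = u.session_addr
       AND t.tablespace_name = u.tablespace;
    If the cursor legitimately needs that much sort space, the alternative is to enlarge TEMP (ALTER TABLESPACE temp ADD TEMPFILE ... SIZE ..., or ADD DATAFILE for an old-style temporary tablespace).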

  • Print very big JTable

    Hi all,
    I have to print a very big table with 100,000 rows and 6 columns. I have put System.gc() at the end of the print method, but when I print the table the print job becomes too big (more or less 700 kB per page, and there are 1048 pages).
    Is it possible to make a PDF of my table, and would that solution be better than the first?
    When I make the preview it takes a lot of time because of the size of the table, since first I have to create the table and then preview it.
    Is there a way to reduce the time lost in table generation?
    N.B.: the data in the table is always the same.
    Thanks a lot!!!

    > Is there a way to reduce the time lost in table generation?
    Write a table model extending AbstractTableModel. The model is queried for each cell; usually all the columns of one row are retrieved before getting the next row. You may cache one row in the model: not the whole table!

  • Optimize delete in a very big database table

    Hi,
    To delete entries in a database table I use the statement:
    Delete from <table> where <zone> = 'X'.
    The delete takes seven hours (the table is very big and <zone> isn't indexed).
    How can I optimize it to reduce the delete time?
    Thanks in advance for your response.
    Regards.

    What is the size of the table, and how many rows are you going to delete?
    I would recommend deleting only up to 5000 or 10000 records in one step, repeated in a loop:
    DO 100 TIMES.
      SELECT *
        FROM <table>
        INTO TABLE itab
        UP TO 10000 ROWS
        WHERE <zone> = 'X'.
      IF itab IS INITIAL.
        EXIT.
      ENDIF.
      DELETE <table> FROM TABLE itab.
      COMMIT WORK.
    ENDDO.
    If this is still too slow, then you should create a secondary index on zone.
    You can drop the index after the deletion is finished.
    Siegfried
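    (At the database level, the secondary index mentioned above corresponds to something like the sketch below, with hypothetical names; in a SAP system you would normally create it through the ABAP Dictionary (SE11) so the dictionary stays consistent:)
    CREATE INDEX zone_idx ON <table> (<zone>);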

  • How does table SMW3_BDOC become very big?

    Hi,
    The table SMW3_BDOC, which stores BDocs in my system, has become very big, with several million records. Some BDocs in this table were sent several months ago. I find it very strange that those BDocs were not processed.
    If I want to clean this table, will inconsistency occur in the system? And how can I clean this table of those very old BDocs?
    Thanks a lot for your help!

    Hi Long,
    I have faced the same issue recently on our production system; it created a huge performance problem and completely blocked the system with timeout errors.
    I was able to clean it up by running the report SMO8_FLOW_REORG in SE38.
    If you are very sure about cleaning up, first delete all the unnecessary BDocs and then run this report.
    At the same time, check whether any CSA* queue is stuck in the CRM inbound queue monitor SMQ2. If yes, select it, manually unlock it, activate it and then refresh. Also check for any other unnecessary queues stuck there.
    Hope this could help you.
    regards,
    kalyan

  • SAPSR3DB   XMII_TRANSACTION table LOG column is very big

    Hi,
    We have a problem with our MII server.
    The LOG column of the SAPSR3DB XMII_TRANSACTION table holds very big data.
    How can we decrease the size of the data in this column?
    Regards.

    In 12.1 it's XMII Administration Menu (Menu.jsp) --> System Management --> DefaultTransactionPersistance.
    In production I recommend setting this to 'ONERROR'.
    There is also the TransactionPersistenceLifetime setting, which determines how long entries will stay in the log table.
    We set this to 8 hours.

  • Shrink Oracle Table after Deletion

    A few of our database tables are very big. An Oracle table still holds the disk space occupied by deleted records, according to http://stackoverflow.com/questions/6540546/oracle-10g-table-size-after-deletion-of-multiple-rows. The thread "Re: shrink table after delete" teaches the following commands to release the idle space from a table.
    ALTER TABLE TABLE1 DEALLOCATE UNUSED;
    ALTER TABLE TABLE1 ENABLE ROW MOVEMENT;
    ALTER TABLE TABLE1 SHRINK SPACE;
    ALTER TABLE TABLE1 MOVE;
    1. Are the commands feasible?
    2. Are they safe?
    3. What will be the impacts of running the commands?
    4. Is there any other workable safe approach shrinking a table?

    Hi,
    I advise using the shrink table operation.
    The tablespace your table belongs to must use ASSM (Automatic Segment Space Management) for shrink to work.
    Shrink is safe, but you need to run two commands:
    1) ALTER TABLE TABLE1 SHRINK SPACE COMPACT; (this is a long operation which moves data, but it can be done online)
    2) ALTER TABLE TABLE1 SHRINK SPACE; (this is quick if you ran SHRINK SPACE COMPACT first, as it only shifts the High Water Mark. Be careful: if you don't run SHRINK SPACE COMPACT first, your table will be locked for a long time)
    Another point is that execution plans are calculated using the HWM, so shrinking the table (the 2nd command) will invalidate all the cursors in the shared pool that reference the table, and execution plans will need to be recalculated (often not a problem).
    Regards,
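    (Putting it together, a minimal sketch; note that SHRINK SPACE also requires row movement to be enabled on the table first, as in the command list in the question:)
    ALTER TABLE TABLE1 ENABLE ROW MOVEMENT;
    ALTER TABLE TABLE1 SHRINK SPACE COMPACT;  -- long, online, moves rows
    ALTER TABLE TABLE1 SHRINK SPACE;          -- quick, shifts the HWM
    Dependent indexes can be shrunk the same way with ALTER INDEX ... SHRINK SPACE.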

  • Regarding the SAP big tables in ECC 6.0

    Hi,
    We are running SAP ECC 6.0 on an Oracle 10.2 database. Can anyone give details on the big tables below? What are they? Where are they used? Do they need to be so big? Can we clean them up?
    Table          Size
    TST03          220 GB
    COEP          125 GB
    SOFFCONT1      92 GB
    GLPCA          31 GB
    EDI40          18GB
    Thanks,
    Narendra

    Hello Narendra,
    TST03 merits special attention, certainly if it is the largest table in your database. TST03 contains the contents of spool requests, and it often happens that at some time in the past there was a huge amount of spool data in the system, causing TST03 to inflate enormously. Even if this spool data was cleaned up later, Oracle will not shrink the table automatically. It is perfectly possible that you have a 220 GB table containing virtually nothing.
    There are a lot of fancy scripts and procedures around to find out how much data is actually in the table, but personally I often use a quick-and-dirty check based on the current statistics.
    sqlplus /
    select (num_rows * avg_row_len)/(1024*1024) "MB IN USE" from dba_tables where table_name = 'TST03';
    This will produce a (rough) estimate of the amount of space actually taken up by rows in the table. If this is very far below 220 GB then the table is overinflated, and you would do best to reorganize it online with BRSPACE.
    As to the other tables: there are procedures for prevention, archiving and/or deletion for all of them. The best advice was given in an earlier response to your post, namely to use the SAP Database Management Guide.
    Regards,
    Mark

  • How to purge PERFSTAT.STATS$SQLTEXT table, after deleting snapshots

    I had an alarm on the free space of the PERFSTAT tablespace. In order to free up some space I tried to delete some old StatsPack snapshots, with the following query:
    DELETE from perfstat.stats$snapshot where snap_time < sysdate - 10 ;
    COMMIT;
    After running it, the space usage on the tablespace was not significantly reduced. I checked again and saw that the table PERFSTAT.STATS$SQLTEXT was very big, almost 8 GB, but all the other tables were a lot smaller. I read somewhere that the sppurge.sql script that comes with Oracle could be used to purge the StatsPack data, but that it comes with the lines related to PERFSTAT.STATS$SQLTEXT commented out, because it is a big query and it can take a lot of undo space. I tried to run the following query, which I found in the forum, by Don Burleson, but it failed because of running out of undo:
    DELETE /*+ index_ffs(st) */
      FROM perfstat.stats$sqltext st
     WHERE (hash_value, text_subset) NOT IN (
            SELECT /*+ hash_aj full(ss) no_expand */ hash_value, text_subset
              FROM perfstat.stats$sql_summary ss
             WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot));
    COMMIT;
    Is there any way to know if the PERFSTAT.STATS$SQLTEXT table has records that could be purged, and an easier way, one that doesn't use as much undo, to purge it?
    My oracle version is:
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    PL/SQL Release 9.2.0.7.0 - Production
    CORE 9.2.0.7.0 Production
    TNS for Solaris: Version 9.2.0.7.0 - Production
    NLSRTL Version 9.2.0.7.0 - Production
    I. Neva
    Oracle DBA

    > Is there any way to know if the PERFSTAT.STATS$SQLTEXT table has records that could be purged?
    Yes, just transform the DELETE into a SELECT:
    SELECT /*+ index_ffs(st) */ COUNT(*)
      FROM perfstat.stats$sqltext st
     WHERE (hash_value, text_subset) NOT IN (
            SELECT /*+ hash_aj full(ss) no_expand */ hash_value, text_subset
              FROM perfstat.stats$sql_summary ss
             WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot));
    > and an easier way, that doesn't use as much undo, to purge it?
    Yes, do it in smaller chunks:
    BEGIN
      LOOP
        DELETE /*+ index_ffs(st) */
          FROM perfstat.stats$sqltext st
         WHERE (hash_value, text_subset) NOT IN (
                SELECT /*+ hash_aj full(ss) no_expand */ hash_value, text_subset
                  FROM perfstat.stats$sql_summary ss
                 WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot))
           AND ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /

  • What is the easiest way to create and manage very big forms?

    I need to create a form that will contain a few hundred questions. Could you please give me some advice on the easiest way to do that? For example, is it easier to create everything in Word (since it is easier to manage) and then create a form based on that?
    My concern is that when I have a very big form containing different kinds of questions and many scripts, managing it during work will be slow and difficult; for example, adding a question in the middle of the form would require moving half of the questions down, which could smash the layout, etc.
    What is the best practice for that?
    Thanks in advance

    Try using tables and rows for this kind of form. These forms will have the same look throughout, with a question and an answer section.
    In the future, if you want to add a new section, you can simply add rows in between.
    Thanks
    Srini

  • Managing a big table

    Hi All,
    I have a big table in my database. When I say big, it is both in the data stored in it (around 70 million records) and in the number of columns (425).
    I do not have any problems with it now, but going ahead I assume it will become a bottleneck or very difficult to manage.
    I have a star schema for the application, of which this is a master table.
    Apart from partitioning, is there any other way of better handling such a table?
    Regards

    Hi,
    Usually fact tables tend to be smaller in number of columns and larger in number of records, while dimension tables are the opposite: a larger number of columns, which is where the power of the dimension lies, and relatively few records (in some exceptions even millions). So the high number of columns makes me think that the fact table may be, only may be, I don't have enough information, improperly designed. If that is the case then you may want to revisit that design, and most likely you will find some 'facts' in your fact table that can become attributes of one of the dimension tables it is linked to.
    Can you say why you are adding new columns to the fact table? A fact table is created for a specific business process, and if done properly there shouldn't be such a requirement of adding new columns. A fact table is usually limited in the number of metrics you can take from it. In fact, the opposite is more common: a factless fact table.
    In any case, from the point of view of handling this large table with so many columns, I would say that you have to focus on avoiding the growing number of columns. There is nothing in the database itself, such as partitioning, that could do this for you. So one option is to apply a 'vertical partition': split the table into at least two new tables, keeping in one of them the set of columns that is more frequently used or more critical to you (see the sketch after this message). Then you will have to link these two tables together and with the rest of the dimensions. But again, if you keep adding new columns, it is just a matter of time before you run into the same situation in the future.
    I am sorry, but I cannot offer better advice than to revisit the design of your fact table. For doing that you may want to have a look at http://www.kimballgroup.com/html/designtips.html
    LW
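    (A minimal sketch of the 'vertical partition' described above; all names are hypothetical, with the frequently used columns kept in one table and the rest moved to a second table linked 1:1 by the key:)
    CREATE TABLE big_table_hot  AS SELECT id, col_a, col_b FROM big_table;  -- hot columns
    CREATE TABLE big_table_cold AS SELECT id, col_c, col_d FROM big_table;  -- rarely used columns
    ALTER TABLE big_table_hot  ADD CONSTRAINT big_table_hot_pk  PRIMARY KEY (id);
    ALTER TABLE big_table_cold ADD CONSTRAINT big_table_cold_pk PRIMARY KEY (id);
    -- queries needing both halves rejoin on the key:
    -- SELECT ... FROM big_table_hot h JOIN big_table_cold c ON c.id = h.id;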

  • How to use partitioning for a big table

    Hi,
    Oracle 10gR2/Redhat4
    RAC database
    ASM
    I have a big table TRACES that is growing very fast; it currently has 15,000,000 rows.
    TRACES (ID NUMBER,
    COUNTRY_NUM NUMBER,
    Timestampe NUMBER,
    MESSAGE VARCHAR2(300),
    type_of_action VARCHAR(20),
    CREATED_TIME DATE,
    UPDATE_DATE DATE)
    The queries that hit this table are below, and they do a lot of disk I/O:
    select count(*) as y0_
    from TRACES this_
    where this_.COUNTRY_NUM = :1
    and this_.TIMESTAMP between :2 and :3
    and lower(this_.MESSAGE) like :4;
    SELECT *
    FROM (SELECT this_.id ,
    this_.TIMESTAMP
    FROM traces this_
    WHERE this_.COUNTRY_NUM = :1
    AND this_.TIMESTAMP BETWEEN :2 AND :3
    AND this_.type_of_action = :4
    AND LOWER (this_.MESSAGE) LIKE :5
    ORDER BY this_.TIMESTAMP DESC)
    WHERE ROWNUM <= :6;
    I have 16 distinct COUNTRY_NUM values in the table, and TIMESTAMPE is a number that the application inserts into the table.
    My question: is the best solution for tuning this table to partition it into small parts?
    I intend to partition using a list on COUNTRY_NUM plus a date (year/month); is that the best way to do it?
    NB: as an example, the distribution of TRACES in my test database:
    1 select COUNTR_NUM,count(*) from traces
    2 group by COUNTR_NUM
    3* order by COUNTR_NUM
    SQL> /
    COUNTR_NUM COUNT(*)
    -1 194716
    3 1796581
    4 1429393
    5 1536092
    6 151820
    7 148431
    8 76452
    9 91456
    10 91044
    11 186370
    13 76
    15 29317
    16 33470

    Hello,
    You can automate adding a monthly partition using dbms_scheduler. Here is an example of your table with monthly partitions:
    CREATE TABLE traces (
       id NUMBER,
       country_num NUMBER,
       timestampe NUMBER,
       MESSAGE VARCHAR2 (300),
       type_of_action VARCHAR (20),
       created_time DATE,
       update_date DATE
    )
    TABLESPACE test_data  -- your tablespace name
    PARTITION BY RANGE (created_time)
       (PARTITION traces_200901
           VALUES LESS THAN
              (TO_DATE ('2009-02-01 00:00:00',
                        'SYYYY-MM-DD HH24:MI:SS',
                        'NLS_CALENDAR=GREGORIAN'))
           -- each partition can be placed in a different tablespace, i.e. different
           -- data files residing on different disks (reducing I/O contention)
           TABLESPACE test_data,
        PARTITION traces_200902
           VALUES LESS THAN
              (TO_DATE ('2009-03-01 00:00:00',
                        'SYYYY-MM-DD HH24:MI:SS',
                        'NLS_CALENDAR=GREGORIAN'))
           TABLESPACE test_data);
    Regards
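    (A hedged sketch of the dbms_scheduler automation mentioned above; the job name, the schedule and the partition-naming convention are assumptions for illustration:)
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB (
        job_name        => 'ADD_TRACES_PARTITION_JOB',  -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          DECLARE
            v_sql VARCHAR2(400);
          BEGIN
            -- add next month's partition, bounded by the first day of the month after it
            v_sql := 'ALTER TABLE traces ADD PARTITION traces_'
                  || TO_CHAR(ADD_MONTHS(SYSDATE, 1), 'YYYYMM')
                  || ' VALUES LESS THAN (TO_DATE('''
                  || TO_CHAR(TRUNC(ADD_MONTHS(SYSDATE, 2), 'MM'), 'YYYY-MM-DD')
                  || ''', ''YYYY-MM-DD''))';
            EXECUTE IMMEDIATE v_sql;
          END;]',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=1',
        enabled         => TRUE);
    END;
    /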
