Tables Purge in Oracle 10g

After purging tables from the recycle bin, the space is not being reclaimed.
Any reason?

Recycle bin objects are already counted as free space: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/flashptr004.htm#sthref615
>
Recycle bin objects are not counted as used space. If you query the space views to obtain the amount of free space in the database, objects in the recycle bin are counted as free space.
>
Edited by: P. Forstmann on 19 janv. 2010 06:38
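A quick way to see this behaviour for yourself (the tablespace name here is only illustrative):

```sql
-- Free space reported before the purge; since recycle bin objects are
-- already counted as free, the figure typically does not change after
-- PURGE RECYCLEBIN.
SELECT SUM(bytes)/1024/1024 AS mb_free
FROM   dba_free_space
WHERE  tablespace_name = 'USERS';

PURGE RECYCLEBIN;   -- or, as SYSDBA for all users: PURGE DBA_RECYCLEBIN;
```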

Similar Messages

  • How to find the count of tables going for FTS (full table scan) in Oracle 10g

    Hi,
    How can I find the count of tables going for FTS (full table scan) in Oracle 10g?
    Regards

    Hi,
    Why do you want to 'find' those tables?
    Do you want to 'avoid FTS' on those tables?
    You provide little information here. (Perhaps you just migrated from 9i and are having problems with certain queries now?)
    FTS is sometimes the fastest way to retrieve data, and sometimes an index scan is.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
    There's no 'FTS view' available, if you want to know what happens on your DB you need, like Anand already said, to trace sessions that 'worry you'.
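That said, if the statements of interest are still cached, a rough approximation can be pulled from V$SQL_PLAN. This only covers plans currently in the shared pool, so it is a snapshot, not a complete inventory:

```sql
-- Approximate count of distinct tables currently being full-scanned,
-- based only on execution plans still cached in the shared pool:
SELECT COUNT(DISTINCT object_owner || '.' || object_name) AS fts_tables
FROM   v$sql_plan
WHERE  operation = 'TABLE ACCESS'
AND    options   = 'FULL'
AND    object_owner NOT IN ('SYS', 'SYSTEM');
```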

  • Give me the SQL query which calculates the table size in Oracle 10g ECC 6.0

    Hi expert,
    Please give me the SQL query which calculates the table size in Oracle 10g ECC 6.0.
    Regards

    Orkun Gedik wrote:
    select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
    Hi,
    This can deliver wrong data in MCOD installations.
    Depending on Oracle version and patch level, DBA_SEGMENTS does not always contain correct data, especially for indexes right after being rebuilt in parallel (this affects DB02 as well, because it uses USER_SEGMENTS).
    It takes a day for the data to get back in line (I never found out who does the correction at night; could it be RSCOLL00?).
    Use the above statement with "OWNER = " in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
    Use it with
    segment_name LIKE '<TABLE_NAME>%'
    if you want to see the related indexes as well.
    For partitioned objects, a join from dba_tables / dba_indexes to dba_tab_partitions/dba_ind_partitions to dba_segments
    might be needed, esp. for hash partitioned tables, depending on how they have been created ( partition names SYS_xxxx).
    Volker
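Putting Volker's suggestions together, a sketch of the statement with the owner filter and the LIKE pattern added (the angle-bracket values are placeholders to fill in):

```sql
-- Table plus related index segments, restricted to one schema
-- (the OWNER filter matters in MCOD installations):
SELECT owner, segment_name, segment_type,
       SUM(bytes)/1024/1024 AS size_mb
FROM   dba_segments
WHERE  owner = '<OWNER>'
AND    segment_name LIKE '<TABLE_NAME>%'
GROUP  BY owner, segment_name, segment_type;
```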

  • How to transfer data between different tables in an Oracle 10g databse?

    I have to do the following: there are 5 database tables in our Oracle 10g database that store certain data. Then there are 2 new tables where we want the data to go to. My task is to somehow transfer all the data that is in the first 5 tables and insert it in the 2 new ones. All tables are in the same database. The challenge lies in the fact that the structures of the tables are very dissimilar so it won't be a matter of a simple INSERT SQL query.
    The data access tier of our application is developed in Java; we use EJB. So one way I am thinking of doing the above-mentioned data transfer is to simply code it in the Java backend. But that might be too slow; there are millions of records in those first 5 tables. What is the best way to do something like that? Perhaps a custom (Perl?) script or some such?
    Thanks.

    The problem is that the way the data is stored in the old tables you cannot write a query that would return it in such a format that you could immediately insert it in the new table without some formatting. Let me illustrate with an example. To pull the data from the old tables I use this query:
    SELECT
        SXML_DOCUMENT_DATA.VALUE As DocumentValue,
        SXML_ELEMENT.NAME As ElementName
    FROM
        SXML_DOCUMENT,
        SXML_DOCUMENT_DATA,
        SXML_DOCUMENT_DETAIL,
        SXML_ELEMENT,
        SXML_TYPE
    WHERE
        SXML_TYPE.XML_TYPE_KEY = SXML_DOCUMENT.XML_TYPE_KEY
        AND SXML_DOCUMENT.XML_DOCUMENT_KEY = SXML_DOCUMENT_DETAIL.XML_DOCUMENT_KEY
        AND SXML_DOCUMENT_DETAIL.XML_DOCUMENT_DETAIL_KEY = SXML_DOCUMENT_DATA.XML_DOCUMENT_DETAIL_KEY
        AND SXML_ELEMENT.XML_ELEMENT_KEY = SXML_DOCUMENT_DATA.XML_ELEMENT_KEY
        AND SXML_TYPE.NAME = 'DA_UNIT_COMMITMENT'
        AND (SXML_ELEMENT.NAME = 'resource'
             OR SXML_ELEMENT.NAME = 'resourceType'
             OR SXML_ELEMENT.NAME = 'commitmentType'
             OR SXML_ELEMENT.NAME = 'startTime'
             OR SXML_ELEMENT.NAME = 'endTime'
             OR SXML_ELEMENT.NAME = 'schedulingCoordinator')
    ORDER BY
        SXML_DOCUMENT.NAME,
        SXML_DOCUMENT_DETAIL.XML_DOCUMENT_DETAIL_KEY,
        SXML_DOCUMENT_DATA.XML_DOCUMENT_DATA_KEY,
        SXML_ELEMENT.NAME;
    The results from the SQL query above look like this:
    # | DOCUMENTVALUE | ELEMENTNAME
    1 | ALAMIT_7_UNIT_1 | resource
    2 | GEN | resourceType
    3 | BRS8 | schedulingCoordinator
    4 | IFM | commitmentType
    5 | 2008-07-29T18:00:00:00 | startTime
    6 | 2008-07-30T00:00:00 | endTime
    7 | ALAMIT_7_UNIT_1 | resource
    8 | GEN | resourceType
    9 | BRS8 | schedulingCoordinator
    10 | IFM | commitmentType
    11 | 2008-07-29T00:00:00 | startTime
    12 | 2008-07-29T04:00:00 | endTime
    and so on. The type of data repeats every 6 records. And the values of each 6 records corresponds to 1 row in the new table, which looks like this:
    schedulingCoordinator | resource | resourceType | commitmentType | startDate | endDate
    1| 1 | 27 | GEN | IFM | 2008-07-29T18:00:00:00 | 2008-07-30T00:00:00
    2| 1 | 27 | GEN | | 2008-07-29T00:00:00 | 2008-07-29T04:00:00
    So hopefully now you see the challenge. It is not as simple as writing a SQL query that returns result rows corresponding 1 to 1 to a row in the new table. Somehow I need to take the first 6 result rows from that big SQL query and put them in the first row of the new table. Then the next 6 and put them in the second row and so on. And also, we are talking about millions of records. What happens if I process the first 2 million and then some error occurs? Do I start from the beginning again? One idea I have is to process the data in chunks, say process 500 results, commit, process another 500, commit, and so on.
    Edited by: 799984 on Oct 11, 2010 9:08 AM
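One way to avoid the middle tier entirely is to let the database do the pivot: group on the document-detail key and collapse each group of six element rows with MAX(CASE ...). A sketch along those lines; the target table and its column names are assumptions, and the joins to SXML_DOCUMENT / SXML_TYPE from the query above are omitted for brevity:

```sql
-- Pivot each group of six element rows into one target row, grouping on
-- the document-detail key. A single INSERT ... SELECT also sidesteps the
-- "restart after an error at 2 million rows" problem: it is one transaction.
INSERT INTO new_table (scheduling_coordinator, resource, resource_type,
                       commitment_type, start_time, end_time)
SELECT MAX(CASE WHEN e.NAME = 'schedulingCoordinator' THEN d.VALUE END),
       MAX(CASE WHEN e.NAME = 'resource'              THEN d.VALUE END),
       MAX(CASE WHEN e.NAME = 'resourceType'          THEN d.VALUE END),
       MAX(CASE WHEN e.NAME = 'commitmentType'        THEN d.VALUE END),
       MAX(CASE WHEN e.NAME = 'startTime'             THEN d.VALUE END),
       MAX(CASE WHEN e.NAME = 'endTime'               THEN d.VALUE END)
FROM   sxml_document_data d,
       sxml_element e
WHERE  e.xml_element_key = d.xml_element_key
GROUP  BY d.xml_document_detail_key;
```

If the volume is still too large for one transaction, the same SELECT can be driven in chunks (for example by ranges of the detail key) with a commit per chunk, as suggested in the post.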

  • How to archive and Purge in Oracle 10g based on data

    Hi,
    Our requirement is that based on some rules stored in static tables we want to perform Archiving, purging and compression of data in Oracle 10g.
    Can anyone guide me to proper information material for how to go about this?
    Thanks!
    Avinash.

    Hi,
    Thanks for the information.
    But this traditional way of archiving is for maintenance and backup operations.
    We want this process to run online, without taking the DB offline. Will this approach work in that case?
    In our case, the rules can be like -
    1. For table 'A', if rows exceed 10 million, then start archiving of the data for that table.
    2. For table 'B', if data is older than 6 months, start archiving of the data for that table.
    3. Archiving should be on for 15 minutes only after that should pause, and should resume whenever user wants to resume.
    4. Archiving should start on specified days only... ETC...
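A sketch of how rules like these could be driven with DBMS_SCHEDULER. The job and procedure names here are hypothetical; the row-count and age checks from the rule tables would live inside ARCHIVE_STEP:

```sql
-- Hypothetical job: run an archiving step only on specified days, and
-- flag any run that exceeds 15 minutes so it can be paused and resumed.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ARCHIVE_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ARCHIVE_STEP',          -- reads the static rule tables
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SAT,SUN; BYHOUR=2',
    enabled         => TRUE);

  -- Runs longer than 15 minutes raise the JOB_OVER_MAX_DUR event,
  -- which a handler can use to stop the job until it is resumed:
  DBMS_SCHEDULER.SET_ATTRIBUTE('ARCHIVE_JOB', 'max_run_duration',
                               INTERVAL '15' MINUTE);
END;
/
```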

  • What Are the Tables in new oracle 10g express

    Hi guys,
    I am new to Oracle and I just installed Oracle 10g Express. After logging in to Oracle with SQL Developer, I noticed there are lots of tables in the Tables folder, some of which have $ in their names and some don't! Can you please let me know what these tables are for? Are they like system tables in SQL Server? If so, what are the tables under
    -->Other Users/System/Tables?
    I am lost here and I can't figure out what the Other Users are. Are they like databases? I tried to google them but I couldn't find anything.
    Thanks a lot

    http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/toc.htm
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/toc.htm
    SELECT USERNAME FROM ALL_USERS;
    USER/SCHEMA is to Oracle what a database is to SQL Server.
    Edited by: sb92075 on Apr 22, 2011 5:53 PM

  • Table Archiving in Oracle 10g database

    Hi Friends,
    We are facing performance problems and large tables like FAGLFLEXA, BALDAT, etc.
    Now we want to archive the tables; please help us, and if anyone has done this activity, please share it.
    In particular we want to archive tables ACCTIT, ACCTHD, ACCTCR. We are using ECC 6.0, HP-UX 11i v2 and Oracle 10g.
    Regards
    Ganesh Datt Tiwari
    Edited by: tiwari.ganeshdatt on Feb 2, 2011 7:19 AM

    Hi Mark,
    I have archived the tables. They are archived but not stored.
    The files are accessible, but I am not getting the archived documents in the activated infostructure related to object MM_ACCTIT.
    After the write action, do we have to run the delete action?
    Please help me.
    Regards
    Ganesh Datt Tiwari

  • DMP TABLE IMPORT IN ORACLE 10G

    Dear All,
    I have an Oracle 8i DB on Windows 2000 Server. I exported one table to x.dmp.
    I have an Oracle 10g DB on Windows 2000 Server with the cluster option.
    One machine is the database server and the second also runs Windows 2000.
    So, how do I import the .dmp into the 10g DB?
    I have tried using the imp command from the command prompt.
    When I log in as the user, it asks for the exported file and I have given the path as well;
    at the buffer prompt it displays the information, and after this it exits directly.
    Please guide me to do the needful.
    regards.
    mathapati

    Kindly post the command you have written. Also, don't go for the interactive way; specify imp directly like this:
    imp scott/tiger file=C:\folder_name\emp.dmp log=C:\emp.log tables=table_name

  • How to import oracle 9i database tables only to oracle 10g express edition

    I have a database dump file created in Oracle 9i; now I want to import the 9i database tables into Oracle 10g XE...

    Depends what's in the export file: was it a full, user, or tablespace export?
    One way to find out what's in the dump file is to run an import with show=y; it will only pull out the DDL for the objects in the file:
    $ imp file=<dumpfile name> log=implog.log show=y
    username: system
    There are lots of options and ways to run imp, 10g Utilities doc has all the details, here's the chapter on the original imp and exp utilities: http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#i1023560
    If you only want one schema, just the tables, no data, no indexes, create your target user, and do a schema import no rows, no indexes, maybe leave out the grants and statistics too:
    $ sqlplus /nolog
    SQL> connect system
    SQL> create user target_user identified by <passwd> default tablespace <tblspc>;
    SQL> exit;
    $ imp file=... log=... fromuser=<source username> touser=target_user rows=n indexes=n statistics=none grants=n

  • Object IDs differ in table partitions in Oracle 10g.

    Can you please look at the output below and tell me whether this is an acceptable result set? How come the object IDs in the data dictionary differ from the actual object IDs? How do I ensure that the following query returns rows after moving partitions from one tablespace to another?
    select t.* , u.object_name, u.subobject_name from user1.t t, all_objects u where dbms_rowid.rowid_object(t.rowid)=u.object_id
    DB version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
    work:
    SQL> CREATE TABLE t
    2 (
    3 a date,
    4 b int,
    5 c varchar2(15)
    6 )
    7 PARTITION BY RANGE (a)
    8 (
    9 PARTITION p1
    10 VALUES LESS THAN
    11 (to_date('01-jan-2009',
    12 'dd-mon-yyyy'))
    13 tablespace rw,
    14 PARTITION p2
    15 VALUES LESS THAN
    16 (to_date('01-jan-2010',
    17 'dd-mon-yyyy'))
    18 tablespace ro
    19 );
    SQL> insert into t (a,b,c)
    2 values
    3 (to_date('01-jun-2008',
    4 'dd-mon-yyyy'),
    5 1, 'hello' );
    1 row created.
    SQL> insert into t (a,b,c)
    2 values
    3 (to_date('01-jun-2009',
    4 'dd-mon-yyyy'),
    5 1, 'Bye' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * From user1.t;
    A B C
    01-JUN-08 1 hello
    01-JUN-09 1 Bye
    2 rows selected.
    SQL> select t.* , u.object_name, u.subobject_name from user1.t t, all_objects u where dbms_rowid.rowid_object(t.rowid)=u.object_id;
    A B C OBJECT_NAME SUBOBJECT_NAME
    01-JUN-08 1 hello T P1
    01-JUN-09 1 Bye T P2
    2 rows selected.
    SQL> alter table user1.t move partition p1 tablespace rw1;
    Table altered.
    SQL> select t.* , u.object_name, u.subobject_name from user1.t t, all_objects u where dbms_rowid.rowid_object(t.rowid)=u.object_id;
    A B C OBJECT_NAME SUBOBJECT_NAME
    01-JUN-09 1 Bye T P2
    1 row selected.
    SQL> alter table user1.t move partition p2 tablespace rw;
    Table altered.
    SQL> select t.* , u.object_name, u.subobject_name from user1.t t, all_objects u where dbms_rowid.rowid_object(t.rowid)=u.object_id;
    no rows selected
    SQL> select * From user1.t;
    A B C
    01-JUN-08 1 hello
    01-JUN-09 1 Bye
    2 rows selected.
    SQL> SELECT OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID FROM USER_OBJECTS;
    OBJECT_NAME SUBOBJECT_NAME OBJECT_ID
    T P2 54578
    T P1 54577
    T 54576
    3 rows selected.
    SQL> select dbms_rowid.rowid_object(t.rowid), t.* from user1.t;
    DBMS_ROWID.ROWID_OBJECT(T.ROWID) A B C
    54579 01-JUN-08 1 hello
    54580 01-JUN-09 1 Bye
    2 rows selected.
    SQL> execute dbms_Stats.gather_schema_Stats('USER1');
    PL/SQL procedure successfully completed.
    SQL> SELECT OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID FROM USER_OBJECTS;
    OBJECT_NAME SUBOBJECT_NAME OBJECT_ID
    T P2 54578
    T P1 54577
    T 54576
    3 rows selected.
    SQL> EXECUTE DBMS_sTATS.GATHER_TABLE_sTATS('USER1','T');
    PL/SQL procedure successfully completed.
    SQL> SELECT OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID FROM USER_OBJECTS;
    OBJECT_NAME SUBOBJECT_NAME OBJECT_ID
    T P2 54578
    T P1 54577
    T 54576
    3 rows selected.
    SQL> ANALYZE TABLE USER1.T COMPUTE STATISTICS;
    Table analyzed.
    SQL> SELECT OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID FROM USER_OBJECTS;
    OBJECT_NAME SUBOBJECT_NAME OBJECT_ID
    T P2 54578
    T P1 54577
    T 54576
    SQL> select t.* , u.object_name, u.subobject_name from user1.t t, all_objects u where dbms_rowid.rowid_object(t.rowid)=u.object_id;
    no rows selected
    Can anyone guide me, or tell me whether this is acceptable or whether there is anything to look into?

    Thank you all..
    Okay, what query should I write to know which rows exist in which partition? I do not want to select entire partitions with SQL, but rather work from the rows of interest.
    Suppose I have 100 rows stored among various partitions and I want a list of those rows along with the partitions they are stored in, via a SQL statement.
    Can you please provide the SQL statement? Thanks in advance.
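The behaviour in the transcript above is expected: DBMS_ROWID.ROWID_OBJECT returns the *data* object number, which changes every time a segment is moved with ALTER TABLE ... MOVE PARTITION, while OBJECT_ID stays fixed. A sketch of the query joined on DATA_OBJECT_ID instead, which keeps working after partition moves:

```sql
-- Map each row to its partition by joining the rowid's data object
-- number to ALL_OBJECTS.DATA_OBJECT_ID (not OBJECT_ID, which does not
-- change when a segment is rebuilt or moved):
SELECT t.*, o.object_name, o.subobject_name AS partition_name
FROM   user1.t t,
       all_objects o
WHERE  dbms_rowid.rowid_object(t.rowid) = o.data_object_id;
```

A WHERE clause on the driving table restricts this to just the rows of interest rather than whole partitions.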

  • Strange tables in my Oracle 10g

    Hi guys,
    I found these strange tables named BIN$+UX5Rnp5RbyomAwY5MV2fw==$0, BIN$+m9qEOmZTlCp4GyCB0F0nw==$0 ... there are really a lot of them when I query "select tname from tab". However, Toad does not list them. Is there a way to remove them? Or can anyone enlighten me as to how these tables are created? Thanks.
    Jason

    Whenever you drop a table using the traditional syntax:
    DROP TABLE <table_name>; Oracle no longer drops it. It only changes its name and marks its space as potentially reusable. That is the recycle bin: in case you change your mind, you still have a second chance to restore the table using the "flashback table ... to before drop" command.
    If you want to absolutely purge it on the first try, then use:
    DROP TABLE <table_name> purge; -- no way back this time.
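For cleaning up after the fact, the recycle bin can be inspected and emptied directly (the table name below is just an example):

```sql
-- List what is currently in your recycle bin:
SELECT object_name, original_name, droptime FROM user_recyclebin;

FLASHBACK TABLE my_table TO BEFORE DROP;   -- restore a dropped table
PURGE RECYCLEBIN;                          -- empty your own recycle bin
-- As SYSDBA, PURGE DBA_RECYCLEBIN empties it for all users.
```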

  • Global temp tables difference in oracle 10g and 11g

    Hi All,
    We are planning to upgrade MetaSolv applications from 6.0.15 (currently on 10g) to 6.2.1 (on 11g). We are using global temporary tables in 10g. I just want to know: is there any impact on the global temporary tables if we upgrade from 10g to 11g? If so, can you please explain?
    Please and thanks.

    FAQ on new features: Re: 14. What's the difference between different versions of the database?
    This can be used as a reference for all your queries..

  • Querying wide (multi-record case) table data in oracle 10g

    I've got a set of "wide" (multi-record case data format) tables similar to the format described in Oracle's "Oracle Data Mining Concepts" book in chapter 2 (http://download-west.oracle.com/docs/cd/B14117_01/datamine.101/b10698/2data.htm#1010394).
    I like the flexibility of this format, especially since it can handle thousands of attributes. But the main problem I have with this format is in determining how to write efficient, complex queries against the data.
    I've included some sample data below to demonstrate my problem. I've got two tables, one with people and the other with the attributes and values of age, income, height and weight. If I try to generate a complex query against this data, I seem to either have to rely on multisets and functions, or on a large number of self joins. Neither of which seems to scale well or perform well for large amounts of data. I know I could also add a pipelined TABLE function to filter the data more, but none of these options seem like great solutions. Does anyone have any ideas for a better way to form complex well performing queries against this type of data layout?
    For example, with my sample data below I'd like to select everyone whose (income = 2000) or (income = 50000 and weight = 210), but the only way I see of doing this is with a query like the following (which won't take advantage of any indexes on the data):
    select * from people_attributes_view
    where find_eq_attr(AttributeValue('income',2000),attribute_values) is not null
    or (find_eq_attr(AttributeValue('income',50000),attribute_values) is not null
    and
    find_eq_attr(AttributeValue('weight',210),attribute_values) is not null);
    Any help is greatly appreciated.
    Thanks.
    -- My full example starts here ----------------
    -- Create the sample tables and data
    create table people (id int, name varchar(30));
    create table attributes (id int, attribute varchar(10), value NUMBER);
    insert into people values (1,'tom');
    insert into people values (2,'jerry');
    insert into people values (3,'spike');
    commit;
    insert into attributes values (1,'age',23);
    insert into attributes values (1,'income',1000);
    insert into attributes values (1,'height',5);
    insert into attributes values (1,'weight',120);
    insert into attributes values (2,'age',20);
    insert into attributes values (2,'income',2000);
    insert into attributes values (3,'age',30);
    insert into attributes values (3,'income',50000);
    insert into attributes values (3,'weight',210);
    commit;
    -- Create some types, functions and views for the search
    CREATE OR REPLACE TYPE AttributeValue AS OBJECT
    (attribute varchar(30),
    value NUMBER);
    CREATE OR REPLACE TYPE AttributeValues AS TABLE OF AttributeValue;
    create or replace function find_eq_attr (val AttributeValue, vals AttributeValues) RETURN AttributeValue IS
    begin
    for i in vals.FIRST .. vals.LAST
    LOOP
    if ((val.attribute = vals(i).attribute) and
    (val.value = vals(i).value)) then
    return vals(i);
    end if;
    END LOOP;
    return null;
    end;
    create or replace view people_attributes_view as
    select p.name, p.id,
    cast(multiset(select attribute,value from attributes a where a.id = p.id)
    as AttributeValues) attribute_values
    from people p;
    -- Search for everyone whose (income = 2000) or (income = 50000 and weight = 210)
    select * from people_attributes_view
    where find_eq_attr(AttributeValue('income',2000),attribute_values) is not null
    or (find_eq_attr(AttributeValue('income',50000),attribute_values) is not null
    and
    find_eq_attr(AttributeValue('weight',210),attribute_values) is not null);

    I like the flexibility of this format, especially since it can handle thousands of attributes. But the main problem I have with this format is in determining how to write efficient, complex queries against the data.
    Can't be done. The flexibility is achieved at the cost of simplicity and performance; it is a trade-off.
    See this thread for a full discussion on the subject.
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:10678084117056
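For this particular kind of predicate, one alternative sketch that stays relational, and avoids the multiset/function machinery, is to express the filter as EXISTS subqueries against the attributes table directly; an index on attributes(attribute, value, id) could then support the lookups:

```sql
-- Same filter as the view-based query: (income = 2000) or
-- (income = 50000 and weight = 210), written as indexable EXISTS tests.
SELECT p.*
FROM   people p
WHERE  EXISTS (SELECT 1 FROM attributes a
               WHERE a.id = p.id AND a.attribute = 'income' AND a.value = 2000)
   OR (EXISTS (SELECT 1 FROM attributes a
               WHERE a.id = p.id AND a.attribute = 'income' AND a.value = 50000)
       AND EXISTS (SELECT 1 FROM attributes a
               WHERE a.id = p.id AND a.attribute = 'weight' AND a.value = 210));
```

This does not remove the fundamental trade-off of the attribute-value layout, but it gives the optimizer ordinary joins and indexes to work with.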

  • How to verify reused table space in oracle 10g ?

    Hi..
    From my system, I have seen that the table size keeps increasing even though delete queries are running. Because of that, I would like to check whether Oracle is allowed to reuse the table's space or not; if not, how can I enable it?
    Please help me..
    Thank you,
    Baharin

    You can use dbms_space.space_usage to check for free space.
    Space reuse will depend on whether you are using MSSM or ASSM, on PCTFREE and PCTUSED, and on how new data is inserted.
    You can shrink or move the table and rebuild indexes to reclaim space.
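A sketch of the shrink approach for a table in an ASSM tablespace (the table name is illustrative):

```sql
-- Online shrink (10g+, ASSM tablespaces only); CASCADE also shrinks
-- the dependent indexes:
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE CASCADE;

-- Alternative: rebuild the segment, then rebuild its indexes, which
-- are left UNUSABLE by the move:
ALTER TABLE my_table MOVE;
ALTER INDEX my_table_idx REBUILD;
```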

  • SYS.SYSNTJ6LP..... table names in Oracle 10g R2 database

    Can anyone inform me as to what these tables are and where they came from, and also why a developer would use == in a table name?
    SYS.SYSNTJ6LPx94kVxXgRAAAAAAAAA==
    SYS.SYSNTJ6LPx94mVxXgRAAAAAAAAA==
    SYS.SYSNTJ6LPx94oVxXgRAAAAAAAAA==
    Thank you in advance,

    Recycle Bin:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#sthref2383
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#sthref2388
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#sthref2392
    Message was edited by:
    Kamal Kishore
