Move_table_clause of the alter table command

Oracle Workspace Manager: 10.2.0.4.3
Database version: 10.2.0.4.0
Can one use the following technique to relocate a version enabled table from one tablespace to another? Is this supported?
exec dbms_wm.beginDDL(tablename)
alter table tablename_lts move tablespace ...
exec dbms_wm.commitDDL(tablename)
I tried this in a development environment; the LT table moved, but the LCK table remained behind. The history seems to all be there.

Hi,
Yes, that is the supported method to move the LT table into a new tablespace. It will also move the AUX table and the _VT table (if it exists).
Regards,
Ben
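
For later readers, a minimal sketch of the whole sequence, assuming a hypothetical version-enabled table EMPLOYEES and a target tablespace NEW_TS (note that CommitDDL also takes the table name):

exec dbms_wm.BeginDDL('EMPLOYEES')
-- Workspace Manager creates the EMPLOYEES_LTS skeleton table; run the DDL against it
alter table EMPLOYEES_LTS move tablespace NEW_TS;
-- Propagate the change back to the LT table (and, per the reply above, the AUX and _VT tables)
exec dbms_wm.CommitDDL('EMPLOYEES')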

Similar Messages

  • Auditing the  ALTER USER command

    How exactly does one use FGA to capture the "tail-end" of the ALTER USER command? Could you provide or point me toward an example with detailed "how to" information?
    Thanks.

    You need to have the "ALTER USER" privilege to change the password of any other user.
    It seems you are not logged in as the scott user and are actually using "Scott.tiger".
    It should be
    Alter user scott identified by tiger;
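
    On the FGA part of the question: as far as I know, DBMS_FGA policies only cover queries and DML, not DDL such as ALTER USER, so a hedged sketch using standard auditing instead (assumes the AUDIT_TRAIL parameter is enabled; the 'extended' setting is needed to see the SQL text):

    -- audit every use of the ALTER USER statement, one record per occurrence
    AUDIT ALTER USER BY ACCESS;

    -- review what was captured
    SELECT username, timestamp, action_name, sql_text
      FROM dba_audit_trail
     WHERE action_name = 'ALTER USER';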

  • Using dbms_metadata.get_ddl to capture the alter table

    Hi there,
    I know you can capture table DDL using dbms_metadata.get_ddl. However, I want to capture all the changes made after the table was created. If you create a table and then add one or more columns, I just want to capture the change, i.e. the new columns that were added.
    Here's an example.
    Create table test (id number, name varchar2(20));
    then
    alter table test add (type varchar2(40));
    I want to capture the syntax "alter table test add (type varchar2(40))". Is this possible?
    Thanks

    I don't believe you could easily use dbms_metadata to do this... but you can use a database- or schema-level DDL trigger, e.g.:
    create or replace trigger test.test_trigger
    AFTER ALTER ON DATABASE
    DECLARE
      sql_text ora_name_list_t;
      v_stmt   VARCHAR2(2000);
      n        NUMBER;
    BEGIN
      n := ora_sql_txt(sql_text);
      FOR i IN 1..n
      LOOP
        v_stmt := v_stmt || sql_text(i);
      END LOOP;
      INSERT INTO test.test_table2
      VALUES (v_stmt);
    END;
    /
    This will capture all ALTER commands fired at the database... so you would then need to filter them using the System-Defined Event Attributes; see
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm#i1006211
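
    A minimal sketch of that filtering, assuming you only care about ALTER statements against tables owned by a particular schema (same hypothetical trigger and log table as above; ora_dict_obj_type and ora_dict_obj_owner are the documented event attribute functions):

    create or replace trigger test.test_trigger
    AFTER ALTER ON DATABASE
    DECLARE
      sql_text ora_name_list_t;
      v_stmt   VARCHAR2(2000);
      n        NUMBER;
    BEGIN
      -- only log ALTER statements issued against tables owned by TEST
      IF ora_dict_obj_type = 'TABLE' AND ora_dict_obj_owner = 'TEST' THEN
        n := ora_sql_txt(sql_text);
        FOR i IN 1..n LOOP
          v_stmt := v_stmt || sql_text(i);
        END LOOP;
        INSERT INTO test.test_table2 VALUES (v_stmt);
      END IF;
    END;
    /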

  • ALTER TABLE command with owner does not work

    I performed the following statement in a SQLplus script where the variables were filled some lines before:
    alter table '&ownername||'.'||&tablename' move online tablespace '&tablespacename';
    However this does not work. I am getting:
    ERROR at line 1:
    ORA-00903: invalid table name
    The variables are replaced correctly by SQL*Plus, and the owner, table, and tablespace all exist.
    Can I (as SYSDBA) not alter the table of another user?
    Peter

    Solomon Yakobson wrote:
    EdStevens wrote: Ok, you removed the mis-placed quotes, but why the double "." ??
    If you wish to append characters immediately after a substitution variable, use a period to separate the variable from the character.
    SY.
    Ah, I had simply never done it that way nor seen it done that way.
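
    For completeness, a hedged sketch of what the corrected statement looks like with the same substitution variables (the first period terminates the variable name, the second is the literal dot between owner and table; whether MOVE ONLINE is accepted depends on the table type and version - the point here is just the substitution syntax):

    alter table &ownername..&tablename move online tablespace &tablespacename;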

  • Alter Table's Tablespace

    Dear All,
    I have a 10g database on RHEL 5. I moved a table (T_TRANSACTION_TABLE) from tablespace USERS to tablespace IMAGES.
    It completed successfully.
    My commands were
    ALTER TABLE USERS MOVE TABLESPACE IMAGES
    and
    ALTER INDEX USERS REBUILD TABLESPACE IMAGES
    The problem is that the transaction_images table shows as using tablespace IMAGES but stores its data in the USERS tablespace.
    I checked this using EM -> Administrator -> Schema -> Table,
    and I also checked DBA_SEGMENTS and DBA_EXTENTS; both show tablespace USERS.
    Note: 1) This table contains a BLOB column used to save images.
    2) I also rebuilt the index.
    Thanks in advance
    Shaikh Abdul Moyed

    A LOB (CLOB or BLOB) can be stored in a different tablespace than the rest of the table. Are you certain that you're looking at the tablespace for the table segment rather than the LOB segment?
    If you want to move the LOB segment to a different tablespace
    ALTER TABLE <<table name>>
      MOVE LOB( <<name of BLOB column>> )
      STORE AS ( TABLESPACE <<new tablespace>> )
    Additionally, I'm a bit confused by the DDL you posted
    ALTER TABLE USERS MOVE TABLESPACE IMAGES
    ALTER INDEX USERS REBUILD TABLESPACE IMAGES
    You use USERS as a table name and as an index name in these DDL statements. But then you later talk about the USERS tablespace. I'm guessing that you really put the table name (TRANSACTION_IMAGES) in the ALTER TABLE command and that you used the correct index name in your ALTER INDEX.
    Justin
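
    A quick, hedged way to confirm where each segment really lives (table name taken from the post; add an OWNER predicate as needed):

    -- tablespace of the LOB segment belonging to the BLOB column
    SELECT column_name, segment_name, tablespace_name
      FROM dba_lobs
     WHERE table_name = 'T_TRANSACTION_TABLE';

    -- tablespace of the table's own data segment
    SELECT segment_name, segment_type, tablespace_name
      FROM dba_segments
     WHERE segment_name = 'T_TRANSACTION_TABLE';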

  • Alter table type from COLUMN to ROW

    TABLE type can be changed from ROW to COLUMN (and vice versa) using the ALTER TABLE command .
    Lars Breddemann wrote:
    when considering which data store to choose (which, by the way, can be changed later on as well), you have to take into account:
    * will you usually need the complete row (all columns)? If so, row store may be more efficient, as reconstructing the complete row is one of the most expensive column store operations.
    * will you need to join the row-store table to a column store table? If so, you should avoid using a different storage type, since using both storage engines in a statement leads to intermediate result set materialization which is another name for bad performance.
    * do you want to fill the table with huge amounts of data, that should be aggregated and analysed? If this is the case, the column store is the better option.
    As a rule of thumb you may just start with column-store tables and change them to row-store tables when you encounter performance issues.
    In general most developers cannot anticipate all important use cases for the tables they design.
    This is especially true for living and growing systems.
    So, more important than choosing the 'right' storage in the beginning is to monitor the performance and to benchmark the differences when changing the storage engine.
    So suppose we have a COLUMN table, but we would need to get data from many columns (which would be a very expensive column-store operation). Would it be advisable to change the table type from COLUMN to ROW on the fly? Would this be a resource-intensive operation if the table has a lot of data?
    Let's suppose the above can be done, but there exists an interdependency on the column table (say from another simultaneous operation), so it has to remain a COLUMN table as such. What would be the better option in this case?
    Creating views is not an option, as it seems from the SQL guide that there is no option to create a ROW view from a COLUMN table.
    Edited by: Rajarshi Muhuri on Nov 27, 2011 3:25 AM

    Dear Rajarshi,
    1. You can't alter a table from column to row using the ALTER command,
    but you can achieve this through a stored procedure - just a little bit of HSQL coding.
    I hope upcoming SAP versions give us SQL statements like the following (the following statement does not work in HANA; it works in Oracle):
       create row table "EFASHION_TUTORIAL"."AAA" as
    select
    "ARTICLE_COLOR_LOOKUP_ID",
    "ARTICLE_ID",
    "COLOR_CODE",
    "ARTICLE_LABEL",
    "COLOR_LABEL",
    "CATEGORY",
    "SALE_PRICE",
    "FAMILY_NAME",
    "FAMILY_CODE"
    from "EFASHION_TUTORIAL"."ARTICLE_COLOR_LOOKUP";
    2. Row & column table two different purpose like OLTP & OLAP.
         when you think about OLAP means modeling use Column.
          when you think about OLTP means real time operations then use Row
    Column table is high compress ( 5 - 20X),  i don't think you want get any performance issue when read information from column table. that is actual Core engine reading parller process. ( that is Heart of HANA).
    Column table purpose quite different like calculations, grouping.. most of DW environment Queires.
    Row table is currently system tables in feature row tables as OLTP, it's less compress mode compress to column store.
    so finally you write small program convert column to row and row to column
    thanks
    Rao

  • How to decrease the dynamic table data loading time

    Hi,
    I have a problem with a dynamic table.
    When I execute the table by passing a query, it takes a lot of time to load the table data (about 30 seconds for every 100 rows).
    Please help me with how to overcome this problem.
    Thanks in advance.

    Yes, this is an Oracle application...
    We can move it into another tablespace as well, but the concern is how to improve the performance of the ALTER TABLE MOVE command.
    Is there any specific parameter apart from NOLOGGING and parallel server?
    If it is taking 8 hours, does anyone have experience of how much time NOLOGGING will save, and is there any risk in doing this in production?
    Regards
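
    A hedged sketch of the usual knobs for a big move (table, index and tablespace names are made up; test first, and take a backup afterwards because NOLOGGING changes cannot be recovered from the redo stream):

    ALTER TABLE big_table MOVE TABLESPACE new_ts NOLOGGING PARALLEL 8;

    -- the move leaves indexes UNUSABLE, so rebuild them the same way
    ALTER INDEX big_table_pk REBUILD TABLESPACE new_ts NOLOGGING PARALLEL 8;

    -- reset the attributes the move/rebuild left behind
    ALTER TABLE big_table LOGGING;
    ALTER TABLE big_table NOPARALLEL;
    ALTER INDEX big_table_pk LOGGING;
    ALTER INDEX big_table_pk NOPARALLEL;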

  • UPDATE involving the same table in sub query

    DB version: 11.2
    We have a table called SHP_GC_TRACK which has around 8 million records and is partitioned. In the UPDATE below, it updates a column based on a SELECT against the same table in a subquery.
    UPDATE shp_gc_track a
       SET f_tran_proc = 'Y'
     WHERE last_update_date <
              (SELECT MAX (last_update_date)
                 FROM shp_gc_track b
                WHERE a.shp_trx_rowid = b.shp_trx_rowid
                  AND a.c_shp_inst = b.c_shp_inst
                  AND a.f_tran_proc = b.f_tran_proc
                  AND b.f_ltr_received = 'D'
                  AND f_rec_code IN ('G', 'W')
                  AND b.f_rec_status = 'B'
                  AND b.c_shp_inst = :b1)
       AND a.c_shp_inst = :b1
       AND a.f_ltr_received = 'D'          -----------------> part of composite index
       AND a.f_tran_proc = 'N'             -----------------> part of composite index
       AND a.f_rec_code IN ('G', 'W')      -----------------> part of composite index
       AND a.f_rec_status = 'B';           -----------------> part of composite index
    This UPDATE takes a long time to execute and sometimes hangs.
    We have a composite index on the four columns f_ltr_received, f_rec_code, f_rec_status, f_tran_proc. The explain plan shows this composite index is being used.
    Is there any way to rewrite this query, or any other suggestions?

    Steve_74 wrote:
    [question quoted in full above]
    Tuning updates with subqueries can be hard :(. Sadly, my suggestions below are of the try-it-and-see-what-happens variety - nothing certain.
    First, check the index. Is it bitmap or b-tree? If b-tree, see if the most restrictive columns are listed first - this can help with b-tree index efficiency. Also, if it is b-tree, a composite bitmap index on columns with lots of repeating values might help instead.
    It's a correlated subquery, so you can't just run the subquery first, put the result into a scalar variable, and use the variable in the SQL instead. You can try putting the results of the subquery with its join keys into a GTT first, then using the GTT in the SQL, to see if I/O is reduced overall across both operations.
    Do you have the licence for the parallel query option? Using parallel DML (this must be turned on manually) might help. Check the docs for the ALTER SESSION command to do this. Also, the PARALLEL_INDEX() hint might help.
    Post the execution plan of the SQL
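
    A hedged sketch of the GTT idea (column datatypes are guesses, and the rewrite has not been verified as logically equivalent to the original UPDATE - compare row counts on a test copy first):

    -- one-time object: holds the per-key MAX(last_update_date)
    CREATE GLOBAL TEMPORARY TABLE gtt_shp_max_upd (
      shp_trx_rowid  VARCHAR2(50),
      c_shp_inst     NUMBER,
      f_tran_proc    VARCHAR2(1),
      max_upd_date   DATE
    ) ON COMMIT PRESERVE ROWS;

    -- run the aggregation once, up front
    INSERT INTO gtt_shp_max_upd
    SELECT shp_trx_rowid, c_shp_inst, f_tran_proc, MAX(last_update_date)
      FROM shp_gc_track
     WHERE c_shp_inst = :b1
       AND f_ltr_received = 'D'
       AND f_rec_code IN ('G', 'W')
       AND f_rec_status = 'B'
     GROUP BY shp_trx_rowid, c_shp_inst, f_tran_proc;

    -- then correlate against the small GTT instead of the 8M-row table
    UPDATE shp_gc_track a
       SET f_tran_proc = 'Y'
     WHERE a.c_shp_inst = :b1
       AND a.f_ltr_received = 'D'
       AND a.f_tran_proc = 'N'
       AND a.f_rec_code IN ('G', 'W')
       AND a.f_rec_status = 'B'
       AND a.last_update_date < (SELECT g.max_upd_date
                                   FROM gtt_shp_max_upd g
                                  WHERE g.shp_trx_rowid = a.shp_trx_rowid
                                    AND g.c_shp_inst = a.c_shp_inst
                                    AND g.f_tran_proc = a.f_tran_proc);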

  • Alter Table problem

    Hi there. I'm new to Oracle environments. I have a problem with the ALTER TABLE statement in this procedure.
    CREATE OR REPLACE PROCEDURE HR.PROC_ETL_TAB IS
      CURSOR cursor_ETL IS
        SELECT * FROM ETL_TAB_REF WHERE AVAILABLE_FLAG = 1;
      record_etl ETL_TAB_REF%ROWTYPE;
      VAR_ALTER  VARCHAR2(100);
    BEGIN
      OPEN cursor_ETL;
      LOOP
        FETCH cursor_ETL INTO record_etl;
        EXIT WHEN cursor_ETL%NOTFOUND;
        VAR_ALTER := 'ALTER TABLE '||record_etl.EXT_TABLE_NAME||' LOCATION('''||record_etl.FILE_NAME||''')';
        DBMS_OUTPUT.PUT_LINE(VAR_ALTER);
        EXECUTE IMMEDIATE VAR_ALTER;
      END LOOP;
      CLOSE cursor_ETL;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;
      WHEN OTHERS THEN
        -- Consider logging the error and then re-raise
        RAISE;
    END PROC_ETL_TAB;
    When I try to call the procedure (call proc_etl_tab()) this error occurs: ORA-01735: invalid ALTER TABLE option. Can anyone help me please? Thanks.

    Please post details of OS and database versions, along with a sample of what this command looks like when the variables are evaluated
    VAR_ALTER:='ALTER TABLE '||record_etl.EXT_TABLE_NAME||' LOCATION('''||record_etl.FILE_NAME||''')';
    I do not believe ALTER TABLE ... LOCATION ... is a valid command.
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_3001.htm
    HTH
    Srini
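
    For reference, a hedged example of what the evaluated statement would look like for a hypothetical row in ETL_TAB_REF (table and file names made up) - posting the real DBMS_OUTPUT line, as requested above, will show whether the generated text is well-formed:

    -- assuming EXT_TABLE_NAME = 'EXT_SALES_STG' and FILE_NAME = 'sales_20120901.csv'
    ALTER TABLE EXT_SALES_STG LOCATION('sales_20120901.csv')

    Note that the LOCATION clause only applies to external tables, so it is also worth confirming that every EXT_TABLE_NAME listed in ETL_TAB_REF really is an external table.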

  • Error 00904 trying to alter table ... ?!

    I'm running a script to create a table; the primary key of that table will then be used in an ALTER TABLE command to make it a foreign key in an already existing table. Code:
    CREATE TABLE Category
    (
    CATCODE VARCHAR2(3),
    CatDesc VARCHAR2(11) NOT NULL,
    CONSTRAINT Category_CATCODE_pk PRIMARY KEY(CATCODE)
    );
    INSERT INTO Category(CATCODE, CatDesc) VALUES('BUS','Business');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('CHN','Children');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('COK','Cooking');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('COM','Computer');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('FAL','Family Life');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('FIT','Fitness');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('SEH','Self Help');
    INSERT INTO Category(CATCODE, CatDesc) VALUES('LIT','Literature');
    ALTER TABLE books ADD CONSTRAINT books_Category_fk FOREIGN KEY (CATCODE) REFERENCES Category(CATCODE);
    And the error:
    Error starting at line 20 in command:
    ALTER TABLE books ADD CONSTRAINT books_Category_fk FOREIGN KEY (CATCODE) REFERENCES Category(CATCODE)
    Error report:
    SQL Error: ORA-00904: "CATCODE": invalid identifier
    00904. 00000 - "%s: invalid identifier"
    *Cause:
    *Action:
    I have no idea what is wrong with my code. I successfully make my table and insert the data, why is it telling me CatCode is an invalid identifier?...
    ( I originally posted it in the wrong section whoops )

    Zombnom wrote:
    There isn't, I'm trying to ADD a column with CatCode being a foreign key in the books table.
    Well, there's your problem. Column and foreign key are two separate things. You can combine the two operations though:
    ALTER TABLE books ADD catcode VARCHAR2(3) CONSTRAINT books_category_fk REFERENCES category(catcode);

  • Alter table DDL does a FTS

    We are trying to modify some columns from BYTE semantics to CHAR semantics in a UTF8 database and noticed that for some tables the ALTER TABLE ... modify (col varchar2(nn CHAR)) statement is very fast and for others it does a full table scan and takes a very long time. Verified this using sql_trace/tkprof and looking at the trace file.
    What determines whether the data in the table is scanned vs a quick dictionary update? It also doesn't appear to make a difference whether the column(s) being modified are part of a primary/unique key constraint.
    Comments? Thanks

    Yes, the tables that were fast are much smaller than the ones that took a long time, but that might have been a coincidence since, as I said, when I looked at the trace file, the fast one did not do many LIOs at all.
    I am just trying to determine how this ALTER TABLE ... MODIFY command works and whether it is possible to speed it up for tables containing a lot of data. I always thought that operations like adding a new column or expanding a column were quick dictionary updates regardless of the number of rows in the table.
    Thanks
    Fast
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.08       0.13          2          6         19           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.08       0.14          2          6         19           0

    Slow (cancelled it after a few seconds, the table is very big)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      1      5.12      13.35      16127      16068          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      5.12      13.37      16127      16068          0           0

  • Alter table move in 8.1.7

    I did an alter table move on one of the tables in an 8.1.7 database.
    Before the alter table move:
    TABLE_NAME ACTUAL_SIZE_MB CUR_SIZE_MB FRAGMENTATION PROJECTED_SAVINGS_MB
    UDA_ITEM_FF 36.07 328 89 291.93
    After the alter table move:
    TABLE_NAME ACTUAL_SIZE_MB CUR_SIZE_MB FRAGMENTATION PROJECTED_SAVINGS_MB
    UDA_ITEM_FF 57
    Why, after the alter table move, am I only able to see the current size and not the actual size?
    any inputs welcome

    Handle:  dee
    Status Level:  Newbie (5)
    Registered:  Jun 14, 2010
    Total Posts:  61
    Total Questions:  30 (30 unresolved)
    If it's working, please mark the thread as answered. Why do you leave your threads in Open status?

  • Insert/Update and Add Column on the same Table at the "same" Time

    Hello,
    I want to Insert/Update and Add Column on the same table at the "same" time, but in different sessions.
    Example:
    First, the "insert/update" statement:
    Insert into TestTable (Testid,Value) values (1,5105);
    After that, the "add" statement:
    Alter table TestTable add TestColumn number;
    - sadly now I get the message: ORA-00054: resource busy and acquire with NOWAIT specified
    "insert/update" statement:
    Insert into TestTable (Testid,Value) values (2,1135);
    After that, the commit is executed.
    I don't know when the first session will issue its commit, so I want the database to execute the "Alter table ..." statement as soon as it is possible.
    If possible, I want to avoid a retry loop around the "Alter table ..." statement.
    Thanks for ideas

    Well, I want to walk in the rain without an umbrella and still stay dry, but it ain't gonna happen.
    You can't run a DDL statement against a table with transactions pending. Session 2 has to wait until session 1 commits or rolls back (or until the session is killed). That's just the way it is.
    This makes sense if you think about it. The data dictionary has to be consistent across all sessions. If session 2 were allowed to change the table structure whilst session 1 has a pending transaction then the database would be in an inconsistent state. This is easier to see if you consider the reverse situation - the ALTER TABLE statement run by session 2 does a DROP COLUMN TESTID rather than adding a column: now what should happen to session 1's INSERT statement? You have retrospectively invalidated a statement that was perfectly legal when it was executed.
    If it's possible I want to save a repeat loop with the "Alter Table..." statement.
    Fnord.
    Cheers, APC
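
    A hedged aside, since the database version isn't mentioned in this thread: from 11g onwards the session can be told to wait for the lock instead of failing immediately with ORA-00054, which removes the need for a hand-rolled retry loop:

    -- session 2: wait up to 300 seconds for session 1 to commit before giving up
    ALTER SESSION SET ddl_lock_timeout = 300;
    ALTER TABLE TestTable ADD TestColumn NUMBER;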

  • Replicating DDL, but only for ALTER TABLE/CREATE TABLE

    We're looking to use Streams to replicate our database, for Warehouse use. We're not looking to use any ETL, but rather copy the table structures & data over identically as they appear in the source database. We'll add indexes in a custom step. Otherwise, though, we don't need TRIGGERs, PROCs, VIEWs etc.
    Is this possible in Streams? Probably asking if it's possible is the wrong question ... Is this something that is normally done, or does it bring with it a significant amount of complexity, plus some extra things to note (like, "..don't forget TRUNCATE also...")
    Thanks,
    Chuck

    Sorry to bump this one ...
    So, if our intent is to copy over all tables in a specified list of schemas, in their current form, and then we wanted to capture:
    INSERTs
    UPDATEs
    DELETEs
    TRUNCATE TABLE
    ALTER TABLE (but excluding anything related to CONSTRAINTS)
    (... I'm thinking that's all we'd need to keep a copy of the main db in the warehouse, without a DBA having to "retouch" the warehouse to keep it in sync ...)
    Would that be considered a complicated configuration? The ALTER TABLE piece sounds picky enough that it could be a headache ... but I'm thinking the Oracle reps were being pushy about the amount of effort needed to set this environment up.
    --=Chuck
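
    A hedged sketch of the Streams side of that, assuming a capture process and queue already exist (the names are made up and the parameter list is trimmed to the common ones; the fine-grained "ALTER TABLE but not constraints" filtering would still need custom rule conditions or a DDL handler on the apply side):

    BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'APP_SCHEMA',
        streams_type => 'capture',
        streams_name => 'wh_capture',
        queue_name   => 'strmadmin.wh_queue',
        include_dml  => TRUE,
        include_ddl  => TRUE);  -- DDL capture covers CREATE/ALTER TABLE and TRUNCATE
    END;
    /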

  • 'alter session' command generating a vast number of audit files

    Oracle: 10.2.0.5
    OS: HP Itanium.
    We have requirements to turn on auditing to the operating system for security purposes. We have only set the init.ora parameters and have not actually enabled any sql statement tracing yet. The parameters we have set are:
    audit_file_dest
    audit_sys_operations=true
    audit_trail =xml, extended (the xml is based on requirements).
    We have a login trigger that runs 'alter session set current_schema=<>'. This has been a part of the application for years. Every time someone logs in, this gets set.
    We are getting 1 trace file every time this is run.
    Is this from the audit_sys_operations?
    I ran some queries of the Audit views and I don't think we have anything turned on. This is an old legacy system so someone may have issued some commands at some point in the past. There are a lot of views and I may have missed this.
    How do we disable auditing of the 'alter session' commands?
    Edited by: Guess2 on Sep 17, 2012 6:46 AM

    Hi,
    You could try: noaudit alter session;
    Regards,
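
    To see whether any statement or privilege auditing is actually switched on (per the "a lot of views" comment above), these two dictionary views are the ones to check - a small hedged sketch:

    -- statement auditing options currently in force (e.g. SESSION, ALTER SESSION)
    SELECT user_name, audit_option, success, failure FROM dba_stmt_audit_opts;

    -- system privilege auditing options currently in force
    SELECT user_name, privilege, success, failure FROM dba_priv_audit_opts;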
