Importing table structure

Hi everybody,
Is there any possibility to import only the table structure from an Oracle backup dump file? After importing the table structure, is it possible to import the data into those tables from the same (or another) backup dump file?
What I meant to say is: first I will import the table structure from the dump file. After importing the table structure I want to verify the constraints. Finally I want to import the data from the same dump file.
Thanks
JayaDev

Hi JayaDev,
If you are using IMP to import into the database, use the ROWS=N parameter.
For further reference, see Importing schemas: http://www.psoug.org/reference/import.html
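For illustration, a minimal sketch of the two-pass approach (the user name, password, and file name below are placeholders, not from your post):
Pass 1 - create only the table definitions, no rows:
imp scott/tiger file=backup.dmp rows=n
Verify or adjust the constraints, then load the data from the same dump; ignore=y suppresses the "table already exists" errors raised by the CREATE TABLE statements in pass 2:
imp scott/tiger file=backup.dmp ignore=y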
*009*

Similar Messages

  • Import table structure?

    Am I correct in assuming there is no way to import a table structure (non-delivered) into a custom repository in MDM?  In other words, you must manually create each table?
    ...the GUI is a bit tedious; are there any quicker/better ways to create custom tables?

    Hi,
    I guess the answer to your question is yes.
    Here's the justification:
    Firstly, a database table shouldn't be confused with the tables we see in the repository. MDM stores the repository tables in the database in a different manner.
    So, there is no one-to-one relationship.
    Also, MDM stores other details like unique and display fields, keywords, multilingual options etc. So even if you imported a table, you would still have to enter these details manually.
    Bear in mind this is a one-time job and also a very critical one, as it decides the efficiency of your MDM solution.
    I believe that using the APIs you can expedite the process, though I'm not very sure about this.
    Hope that helps.
    Regards,
    Tanveer.
    Please mark helpful answers.

  • Importing table structure only consumes space in GB

    Hello,
    I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
    When I try to import into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing; there is 3 GB of free space.
    When I start the import, the tablespace gets full and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
    Please help
    Thanks

    UNKNOWN007 wrote:
    Hello,
    I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
    When I try to import into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing; there is 3 GB of free space.
    When I start the import, the tablespace gets full and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
    Neither are we, without your export and import commands. I could take a guess, though: your export uses COMPRESS=Y, either explicitly or because it's the default, and you are using dictionary-managed tablespaces on your target. In that case each new table gets created with an INITIAL extent the same size as the data on the source. The immediate solution is not to use COMPRESS; the longer-term one is to move to locally managed tablespaces as well.
    Still, if we could see the commands you used, we wouldn't need to guess.
    Niall Litchfield
    http://www.orawin.info/
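    If that guess is right, a re-export along these lines avoids the inflated INITIAL extents (a sketch only; user names, passwords, schema and file names are placeholders):
    exp scott/tiger owner=scott rows=n compress=n file=struct.dmp
    imp system/manager file=struct.dmp fromuser=scott touser=other_schema
    COMPRESS=N keeps each table's original INITIAL value instead of consolidating all allocated extents into one large initial extent.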

  • Import tables structure to document and add comments

    I want to import into a document the structure of all the tables and add some comments to the columns.
    Are there tools that do that?
    Thanks
    André

    Hi,
    Oracle Designer or third-party tools like PL/SQL Developer may help you...
    Simon
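    If a full modelling tool is overkill, the data dictionary alone can produce the skeleton of such a document. A minimal SQL*Plus sketch, run as the schema owner (EMP/EMPNO below are just example names, not from the original post):
    comment on column emp.empno is 'Employee number (primary key)';

    set pages 100 lines 200
    col table_name  format a30
    col column_name format a30
    col data_type   format a15
    col comments    format a60 word_wrap
    select c.table_name, c.column_name, c.data_type, c.data_length, cc.comments
    from   user_tab_columns c, user_col_comments cc
    where  c.table_name  = cc.table_name (+)
    and    c.column_name = cc.column_name (+)
    order  by c.table_name, c.column_id;
    Spool the output and paste it into the document.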

  • Resource for R/3 functional processes and table structures

    Dear Experts,
    I want to have a brief but concise understanding of the R/3 modules in terms of business process flows and the important table structures. Ideally, the document or book should phrase it in a way that is easy for non-functional people to understand. I am sure that as ABAP developers, you gurus have to understand business processes and tables all the time. Appreciate some help here. I am even willing to pay for such a resource.
    My contact : [email protected]
    regards,
    Bryan

    Hi Bryan,
    There's one PDF file on the web which I think is okay. Here's the link -
    http://www.auditware.co.uk/SAP/Extras/SAPTables.pdf
    Please let me know whether or not this is what you are looking for.
    Regards,
    Anand Mandalika.

  • Export and import only table structure

    Hi ,
    I have two schemas, scott and scott2. The scott schema has tables, indexes and procedures, and the scott2 schema is completely empty.
    Now I want the table structures, indexes and procedures from the scott schema in the scott2 schema. No DATA needed.
    What is the command to export the table structures, indexes and procedures from the scott schema and import them into the scott2 schema?
    Once this is done, I want the scott schema to have full access to the scott2 schema.
    Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
    Please help...

    Pravin wrote:
    I used rows=n
    It is giving me the below error while importing the dump file:
    IMP-00003: ORACLE error 604 encountered
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01013: user requested cancel of current operation^C
    IMP-00017: following statement failed with ORACLE error 604:
    "CREATE TABLE "INVESTMENT_DETAILS_BK210509" ("EMP_NO" VARCHAR2(15), "INFOTYP"
    "E" VARCHAR2(10), "SBSEC" NUMBER(*,0), "SBDIV" NUMBER(*,0), "AMOUNT" NUMBER("
    "*,0), "CREATE_DATE" DATE, "MODIFY_DATE" DATE, "FROM_DATE" DATE, "TO_DATE" D"
    "ATE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 6684672"
    " FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "DMSTG" LOGG"
    "ING NOCOMPRESS"
    You are getting this error because you hit "Ctrl C" during the import (note the ^C in the ORA-01013 line), which essentially cancels the import.
    Srini

  • How to export and import only data not table structure

    Hi Guys,
    I am not very familiar with the import/export utilities, please help me.
    I have two schemas: Schema1 and Schema2.
    I have been using Schema1, and that is where my valuable data is. Now I want to move this data from Schema1 to Schema2.
    In Schema2 I have only the table structures, no data.

    user1118517 wrote:
    Hi Guys,
    I am not very familiar with the import/export utilities, please help me.
    I have two schemas: Schema1 and Schema2.
    I have been using Schema1, and that is where my valuable data is. Now I want to move this data from Schema1 to Schema2.
    In Schema2 I have only the table structures, no data.
    Nothing wrong with exporting the structure. Just use 'ignore=y' on the import. When it tries to do the CREATE TABLE, the CREATE statement will fail because the table already exists, but ignore=y means "ignore failures of CREATE", and the import will then proceed to INSERT the data.
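    A minimal sketch of that approach (schema names, passwords and file names below are placeholders):
    exp schema1/pwd owner=schema1 file=schema1.dmp log=schema1_exp.log
    imp system/manager file=schema1.dmp fromuser=schema1 touser=schema2 ignore=y log=schema1_imp.log
    Because ignore=y only suppresses the object-creation errors, the rows are inserted into the tables that already exist in Schema2.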

  • JDBC Lookup - Import table data from a different schema in same DB

    Hi XI Experts,
    We are facing an issue while importing a Database table into the external definition in PI 7.1.
    The details are as below:
    I have configured user 'A' in PI communication channel to access the database. But the table that I want to access is present in schema "B". Due to this, I am unable to view the table that I have to import in the list available.
    In other words, I am trying to access a table present in a different schema in the same database. Please note that my user has been given all the required permissions to access different schema. Even then, I am unable to access the table in different schema.
    Kindly provide your valuable suggestions as to how I can import table which is present in another schema but in the same Database.
    Regards,
    Subbu

    If you are using PI 7.1, then you can do a JDBC Lookup to import JDBC metadata (table structures from the DB). Configure a JDBC receiver communication channel where you specify a username and password that has permission to access both schema A and schema B of the database. Specify the database name in the connection string. Then you should be able to import from both schemas.
    Please refer these links
    SAP PI 7.1 Mapping Enhancements Series: Graphical Support for JDBC and RFC Lookups
    How to use JDBC Lookup in PI 7.1 ?
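    As an illustration only (host, port, SID and table names below are made up, and an Oracle database is assumed), the receiver channel and the lookup table would look something like:
    Driver: oracle.jdbc.driver.OracleDriver
    Connection string: jdbc:oracle:thin:@dbhost:1521:ORCL
    Lookup table (owned by schema B, channel user A): B.CUSTOMER_MASTER
    If the table still does not show up in the import list, a synonym created in schema A (create synonym CUSTOMER_MASTER for B.CUSTOMER_MASTER;) may make it visible there.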

  • TIPS(18) : CREATING SCRIPTS TO RECREATE A TABLE STRUCTURE

    Product: SQL*Plus
    Date written: 1996-11-12
    TIPS(18) : Creating Scripts to Recreate a Table Structure
    =========================================================
    The script creates scripts that can be used to recreate a table structure.
    For example, this script can be used when a table has become fragmented, or to
    get a definition that can be run on another database.
    CREATES SCRIPT TO RECREATE A TABLE-STRUCTURE
    INCL. STORAGE, CONSTRAINTS, TRIGGERS ETC.
    This script creates scripts to recreate a table structure.
    Use the script to reorganise a table that has become fragmented,
    to get a definition that can be run on another database/schema or
    as a basis for altering the table structure (eg. drop a column!).
    IMPORTANT: Running the script is safe as it only creates two new scripts and
    does not do anything to your database! To get anything done you have to run the
    scripts created.
    The created scripts do the following:
    1. Save the content of the table.
    2. Drop any foreign key constraints referencing the table.
    3. Drop the table.
    4. Create the table with an INITIAL storage parameter that
    will accommodate the entire content of the table. The NEXT
    parameter is 25% of the initial.
    The storage parameters are picked from the following list:
    64K, 128K, 256K, 512K, multiples of 1M.
    5. Create table and column comments.
    6. Fill the table with the original content.
    7. Create all the indexes, incl. storage parameters as above.
    8. Add primary key, unique key and check constraints.
    9. Add foreign key constraints for the table and for referencing
    tables.
    10. Create the table's triggers.
    11. Compile any depending objects (cascading).
    12. Grant table and column privileges.
    13. Create synonyms.
    This script must be run as the owner of the table.
    If your table contains a LONG-column, use the COPY
    command in SQL*Plus to store/restore the data.
    USAGE
    from SQL*Plus:
    start reorgtb
    This will create the scripts REORGS1.SQL and REORGS2.SQL
    REORGS1.SQL contains code to save the current content of the table.
    REORGS2.SQL contains code to rebuild the table structure.
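    For example, to generate the scripts for a table called EMP (the table name is passed as the &1 substitution variable used throughout the script below; EMP is just an example):
    SQL> start reorgtb EMP
    SQL> start reorgs1
    SQL> start reorgs2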
    undef tab;
    set echo off
    column a1 new_val stor
    column b1 new_val nxt
    select
    decode(sign(1024-sum(bytes)/1024),-1,to_char((round(sum(bytes)/(1024*1024))+1))||'M', /* > 1M rounded up to nearest megabyte */
    decode(sign(512-sum(bytes)/1024), -1,'1M',
    decode(sign(256-sum(bytes)/1024), -1,'512K',
    decode(sign(128-sum(bytes)/1024), -1,'256K',
    decode(sign(64-sum(bytes)/1024) , -1,'128K',
    '64K'))))) a1,
    decode(sign(1024-sum(bytes)/4096),-1,to_char((round(sum(bytes)/(4096*1024))+1))||'M', /* > 1M rounded up to nearest megabyte */
    decode(sign(512-sum(bytes)/4096), -1,'1M',
    decode(sign(256-sum(bytes)/4096), -1,'512K',
    decode(sign(128-sum(bytes)/4096), -1,'256K',
    decode(sign(64-sum(bytes)/4096) , -1,'128K',
    '64K'))))) b1
    from user_extents
    where segment_name=upper('&1');
    set pages 0 feed off verify off lines 150
    col c1 format a80
    spool reorgs1.sql
    PROMPT drop table bk_&1
    prompt /
    PROMPT create table bk_&1 storage (initial &stor) as select * from &1
    prompt /
    spool off
    spool reorgs2.sql
    PROMPT spool reorgs2
    select 'alter table '||table_name||' drop constraint
    '||constraint_name||';'
    from user_constraints where r_constraint_name
    in (select constraint_name from user_constraints where
    table_name=upper('&1')
    and constraint_type in ('P','U'));
    PROMPT drop table &1
    prompt /
    prompt create table &1
    select decode(column_id,1,'(',',')
    ||rpad(column_name,40)
    ||decode(data_type,'DATE' ,'DATE '
    ,'LONG' ,'LONG '
    ,'LONG RAW','LONG RAW '
    ,'RAW' ,'RAW '
    ,'CHAR' ,'CHAR '
    ,'VARCHAR' ,'VARCHAR '
    ,'VARCHAR2','VARCHAR2 '
    ,'NUMBER' ,'NUMBER '
    ,'unknown')
    ||rpad(
    decode(data_type,'DATE' ,null
    ,'LONG' ,null
    ,'LONG RAW',null
    ,'RAW' ,decode(data_length,null,null
    ,'('||data_length||')')
    ,'CHAR' ,decode(data_length,null,null
    ,'('||data_length||')')
    ,'VARCHAR' ,decode(data_length,null,null
    ,'('||data_length||')')
    ,'VARCHAR2',decode(data_length,null,null
    ,'('||data_length||')')
    ,'NUMBER' ,decode(data_precision,null,' '
    ,'('||data_precision||
    decode(data_scale,null,null
    ,','||data_scale)||')')
    ,'unknown'),8,' ')
    ||decode(nullable,'Y','NULL','NOT NULL') c1
    from user_tab_columns
    where table_name = upper('&1')
    order by column_id;
    prompt )
    select 'pctfree '||t.pct_free c1
    ,'pctused '||t.pct_used c1
    ,'initrans '||t.ini_trans c1
    ,'maxtrans '||t.max_trans c1
    ,'tablespace '||s.tablespace_name c1
    ,'storage (initial '||'&stor' c1
    ,' next '||'&stor' c1
    ,' minextents '||t.min_extents c1
    ,' maxextents '||t.max_extents c1
    ,' pctincrease '||t.pct_increase||')' c1
    from user_Segments s, user_tables t
    where s.segment_name = upper('&1') and
    t.table_name = upper('&1')
    and s.segment_type = 'TABLE';
    prompt /
    select 'comment on table &1 is '''||comments||''';' c1 from
    user_tab_comments
    where table_name=upper('&1');
    select 'comment on column &1..'||column_name||
    ' is '''||comments||''';' c1 from user_col_comments
    where table_name=upper('&1');
    prompt insert into &1 select * from bk_&1
    prompt /
    set serveroutput on
    declare
    cursor c1 is select index_name,decode(uniqueness,'UNIQUE','UNIQUE')
    unq
    from user_indexes where
    table_name = upper('&1');
    indname varchar2(50);
    cursor c2 is select
    decode(column_position,1,'(',',')||rpad(column_name,40) cl
    from user_ind_columns where table_name = upper('&1') and
    index_name = indname
    order by column_position;
    l1 varchar2(100);
    l2 varchar2(100);
    l3 varchar2(100);
    l4 varchar2(100);
    l5 varchar2(100);
    l6 varchar2(100);
    l7 varchar2(100);
    l8 varchar2(100);
    l9 varchar2(100);
    begin
    dbms_output.enable(100000);
    for c in c1 loop
    dbms_output.put_line('create '||c.unq||' index '||c.index_name||' on
    &1');
    indname := c.index_name;
    for q in c2 loop
    dbms_output.put_line(q.cl);
    end loop;
    dbms_output.put_line(')');
    select 'pctfree '||i.pct_free ,
    'initrans '||i.ini_trans ,
    'maxtrans '||i.max_trans ,
    'tablespace '||i.tablespace_name ,
    'storage (initial '||
    decode(sign(1024-sum(e.bytes)/1024),-1,
    to_char((round(sum(e.bytes)/(1024*1024))+1))||'M',
    decode(sign(512-sum(e.bytes)/1024), -1,'1M',
    decode(sign(256-sum(e.bytes)/1024), -1,'512K',
    decode(sign(128-sum(e.bytes)/1024), -1,'256K',
    decode(sign(64-sum(e.bytes)/1024) , -1,'128K',
    '64K'))))) ,
    ' next '||
    decode(sign(1024-sum(e.bytes)/4096),-1,
    to_char((round(sum(e.bytes)/(4096*1024))+1))||'M',
    decode(sign(512-sum(e.bytes)/4096), -1,'1M',
    decode(sign(256-sum(e.bytes)/4096), -1,'512K',
    decode(sign(128-sum(e.bytes)/4096), -1,'256K',
    decode(sign(64-sum(e.bytes)/4096) , -1,'128K',
    '64K'))))) ,
    ' minextents '||s.min_extents ,
    ' maxextents '||s.max_extents ,
    ' pctincrease '||s.pct_increase||')'
    into l1,l2,l3,l4,l5,l6,l7,l8,l9
    from user_extents e,user_segments s, user_indexes i
    where s.segment_name = c.index_name
    and s.segment_type = 'INDEX'
    and i.index_name = c.index_name
    and e.segment_name=s.segment_name
    group by s.min_extents,s.max_extents,s.pct_increase,
    i.pct_free,i.ini_trans,i.max_trans,i.tablespace_name ;
    dbms_output.put_line(l1);
    dbms_output.put_line(l2);
    dbms_output.put_line(l3);
    dbms_output.put_line(l4);
    dbms_output.put_line(l5);
    dbms_output.put_line(l6);
    dbms_output.put_line(l7);
    dbms_output.put_line(l8);
    dbms_output.put_line(l9);
    dbms_output.put_line('/');
    end loop;
    end;
    /
    declare
    cursor c1 is
    select constraint_name, decode(constraint_type,'U',' UNIQUE',' PRIMARY
    KEY') typ,
    decode(status,'DISABLED','DISABLE',' ') status from user_constraints
    where table_name = upper('&1')
    and constraint_type in ('U','P');
    cname varchar2(100);
    cursor c2 is
    select decode(position,1,'(',',')||rpad(column_name,40) coln
    from user_cons_columns
    where table_name = upper('&1')
    and constraint_name = cname
    order by position;
    begin
    for q1 in c1 loop
    cname := q1.constraint_name;
    dbms_output.put_line('alter table &1');
    dbms_output.put_line('add constraint '||cname||q1.typ);
    for q2 in c2 loop
    dbms_output.put_line(q2.coln);
    end loop;
    dbms_output.put_line(')' ||q1.status);
    dbms_output.put_line('/');
    end loop;
    end;
    /
    declare
    cursor c1 is
    select c.constraint_name,c.r_constraint_name cname2,
    c.table_name table1, r.table_name table2,
    decode(c.status,'DISABLED','DISABLE',' ') status,
    decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
    delete_rule
    from user_constraints c,
    user_constraints r
    where c.constraint_type='R' and
    c.r_constraint_name = r.constraint_name and
    c.table_name = upper('&1')
    union
    select c.constraint_name,c.r_constraint_name cname2,
    c.table_name table1, r.table_name table2,
    decode(c.status,'DISABLED','DISABLE',' ') status,
    decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
    delete_rule
    from user_constraints c,
    user_constraints r
    where c.constraint_type='R' and
    c.r_constraint_name = r.constraint_name and
    r.table_name = upper('&1');
    cname varchar2(50);
    cname2 varchar2(50);
    cursor c2 is
    select decode(position,1,'(',',')||rpad(column_name,40) colname
    from user_cons_columns
    where constraint_name = cname
    order by position;
    cursor c3 is
    select decode(position,1,'(',',')||rpad(column_name,40) refcol
    from user_cons_columns
    where constraint_name = cname2
    order by position;
    begin
    dbms_output.enable(100000);
    for q1 in c1 loop
    cname := q1.constraint_name;
    cname2 := q1.cname2;
    dbms_output.put_line('alter table '||q1.table1||' add constraint ');
    dbms_output.put_line(cname||' foreign key');
    for q2 in c2 loop
    dbms_output.put_line(q2.colname);
    end loop;
    dbms_output.put_line(') references '||q1.table2);
    for q3 in c3 loop
    dbms_output.put_line(q3.refcol);
    end loop;
    dbms_output.put_line(') '||q1.delete_rule||q1.status);
    dbms_output.put_line('/');
    end loop;
    end;
    /
    col c1 format a79 word_wrap
    set long 32000
    set arraysize 1
    select 'create or replace trigger ' c1,
    description c1,
    'WHEN ('||when_clause||')' c1,
    trigger_body ,
    '/' c1
    from user_triggers
    where table_name = upper('&1') and when_clause is not null;
    select 'create or replace trigger ' c1,
    description c1,
    trigger_body ,
    '/' c1
    from user_triggers
    where table_name = upper('&1') and when_clause is null;
    select 'alter trigger '||trigger_name||decode(status,'DISABLED','
    DISABLE',' ENABLE')
    from user_Triggers where table_name='&1';
    set serveroutput on
    declare
    cursor c1 is
    select 'alter table
    '||'&1'||decode(substr(constraint_name,1,4),'SYS_',' ',
    ' add constraint ') a1,
    decode(substr(constraint_name,1,4),'SYS_','
    ',constraint_name)||' check (' a2,
    search_condition a3,
    ') '||decode(status,'DISABLED','DISABLE','') a4,
    '/' a5
    from user_constraints
    where table_name = upper('&1') and
    constraint_type='C';
    b1 varchar2(100);
    b2 varchar2(100);
    b3 varchar2(32000);
    b4 varchar2(100);
    b5 varchar2(100);
    fl number;
    begin
    open c1;
    loop
    fetch c1 into b1,b2,b3,b4,b5;
    exit when c1%NOTFOUND;
    select count(*) into fl from user_tab_columns where table_name =
    upper('&1') and
    upper(column_name)||' IS NOT NULL' = upper(b3);
    if fl = 0 then
    dbms_output.put_line(b1);
    dbms_output.put_line(b2);
    dbms_output.put_line(b3);
    dbms_output.put_line(b4);
    dbms_output.put_line(b5);
    end if;
    end loop;
    end;
    /
    create or replace procedure dumzxcvreorg_dep(nam varchar2,typ
    varchar2) as
    cursor cur is
    select type,decode(type,'PACKAGE BODY','PACKAGE',type) type1,
    name from user_dependencies
    where referenced_name=upper(nam) and referenced_type=upper(typ);
    begin
    dbms_output.enable(500000);
    for c in cur loop
    dbms_output.put_line('alter '||c.type1||' '||c.name||' compile;');
    dumzxcvreorg_dep(c.name,c.type);
    end loop;
    end;
    /
    exec dumzxcvreorg_dep('&1','TABLE');
    drop procedure dumzxcvreorg_Dep;
    select 'grant '||privilege||' on '||table_name||' to '||grantee||
    decode(grantable,'YES',' with grant option;',';') from
    user_tab_privs where table_name = upper('&1');
    select 'grant '||privilege||' ('||column_name||') on &1 to
    '||grantee||
    decode(grantable,'YES',' with grant option;',';')
    from user_col_privs where grantor=user and
    table_name=upper('&1')
    order by grantee, privilege;
    select 'create synonym '||synonym_name||' for
    '||table_owner||'.'||table_name||';'
    from user_synonyms where table_name=upper('&1');
    PROMPT REM
    PROMPT REM YOU MAY HAVE TO LOG ON AS SYSTEM TO BE
    PROMPT REM ABLE TO CREATE ANY OF THE PUBLIC SYNONYMS!
    PROMPT REM
    select 'create public synonym '||synonym_name||' for
    '||table_owner||'.'||table_name||';'
    from all_synonyms where owner='PUBLIC' and table_name=upper('&1') and
    table_owner=user;
    prompt spool off
    spool off
    set echo on feed on verify on
    /*
    The scripts REORGS1.SQL and REORGS2.SQL have been
    created. Alter these scripts as necessary.
    To recreate the table-structure, first run REORGS1.SQL.
    This script saves the content of your table in a table
    called bk_.
    If this script runs successfully run REORGS2.SQL.
    The result is spooled to REORGTB.LST.
    Check this file before dropping the bk_ table.
    */

    Please do NOT cross-post: create a deep structure for dynamic internal table
    Regards
      Uwe

  • BODI-1112339 - Unable to import table in BW datasource QA Environment

    Hi,
    I'm using BusinessObjects Data Services XI 12.2.2. I am trying to import a table into the BW datastore in the QA environment, but unfortunately it's failing. Can someone help me? Below is the error message that I get:
    RFC CallReceive error <Function RFC_ABAP_INSTALL_AND_RUN: . SAP System has status 'not modifiable'. The problem may go away if you change the SAP datastore property to Execute in background(batch)>.
    I already set Execute in background (batch) to Yes, but the import still failed.
    Please advise.
    Thanks and Regards,
    Randell

    Hi Henry,
    I am also getting a similar error for my SAP BW datastore. The exact message is as under:
    Error: Cannot import the metadata table <name=DR3_600),
    Import transfer structure failed:
    Infosource <ZASSET_ATTR_TEXT> for source system <DR3+600>, (BODI-1112339)
    Did you manage to fix this? Please provide hints if you managed to fix it.
    Regards,
    Bhavesh

  • Table not found error while importing table defintion in PI 7.1?

    Hi Guys,
    I am trying to import a table structure from DB2 as an external definition, but I am getting a "table not found" error.
    There are no connection issues with DB2 and the communication channel is good. Are there any additional settings I need to perform to import the table?
    any help or suggestions would be really appreciated
    Thanks,
    Srini

    Hi Srinivas,
    I think you want to do a JDBC lookup, so you must have created the JDBC receiver communication channel. If your communication channel is correct and you are still getting the "table not found" error, check the following steps:
    - Check that your communication channel is activated and working fine.
    - Check that the ID you are using in the JDBC receiver has the proper authorization to import the table definition.
      Just check with full authorization.
    I think the problem is insufficient authorization for the user ID used on the JDBC communication channel.
    Thanks,
    Bhupesh.

  • Can we import a Structure in RFC's.

    Hi All
    Can we import a structure in RFCs?
    If yes, how can we do it?
    Urgent.

    Hi,
    You can do that. Give your parameter name of structure type in the 'Tables' tab.
    Regards,
    Renjith Michael.

  • Table structure changed in testing system after system refresh.

    Hi Team,
    Recently we underwent a system refresh in the testing system, where the testing data was replaced with production data. But now we find that in one table some fields which we had deleted are there again. The version history of the table is also gone. The table object was under testing in the testing system, after which it was supposed to be transported to production. I believe a system refresh only means a refresh of data and has nothing to do with table structure. Please correct me if I am wrong. Please also let me know what could be the reason for those changes in the table, if it is not the system refresh.
    Regards,
    Amit

    I believe a system refresh only means a refresh of data and has nothing to do with table structure.
    Alas, you were wrong; after all, table definitions are data too, as are program sources...
    You have to re-import into your test system every transport request which has not yet been transported to production.
    Regards,
    Raymond

  • Give me some PP important tables and Tcodes for abapers

    Please give me some important PP tables and Tcodes for ABAPers.
    thank you,
    Regards,
    Jagrut Bharatkumar Shukla

    10     Production Planning (PP)
    10.1     Work center
         CRHH                    Work center hierarchy
         CRHS                    Hierarchy structure
    CRHD                    Work center header
    CRTX                    Text for the Work Center or Production Resource/Tool
         CRCO                    Assignment of Work Center to Cost Center
         KAKO                    Capacity Header Segment
         CRCA                    Work Center Capacity Allocation
         TC24                    Person responsible for the workcenter
         CRCO                    Allocation of costcentre to workcentre
                 S022                    Order Operation Data for Work Center
    10.2     Routings/operations
         MAPL                    Allocation of task lists to materials
         PLAS                    Task list - selection of operations/activities
         PLFH                    Task list - production resources/tools
         PLFL                    Task list - sequences
         PLKO                    Task list - header
         PLKZ                    Task list: main header
         PLPH                    Phases / suboperations
         PLPO                    Task list operation / activity
         PLPR                    Log collector for tasklists
         PLMZ                    Allocation of BOM - items to operations
    10.3     Bill of material
         STKO                    BOM - header
         STPO                    BOM - item
         STAS                    BOMs - Item Selection
         STPN                    BOMs - follow-up control
         STPU                    BOM - sub-item
         STZU                    Permanent BOM data
         PLMZ                    Allocation of BOM - items to operations
         MAST                    Material to BOM link
         KDST                    Sales order to BOM link
    10.4     Production orders
         AUFK                    Production order headers
         AFIH                    Maintenance order header
         AUFM                    Goods movement for prod. order
         AFKO                    Order header data PP orders
         AFPO                    Order item
         RESB                    Order components
           AFVC                    Order operations
         AFVV                    Quantities/dates/values in the operation
         AFVU                    User fields of the operation
         AFFL                    Work order sequence
         AFFH                    PRT assignment data for the work order(routing)
         JSTO                    Status profile
    JEST                    Object status
         AFRU                    Order completion confirmations
           PRT's for production orders
         AFFH                    PRT assignment data for the work order
         CRVD_A               Link of PRT to Document
         DRAW                    Document Info Record
         TDWA                    Document Types
         TDWD                    Data Carrier/Network Nodes
         TDWE                    Data Carrier Type
    10.5     Planned orders
         PLAF                    Planned orders
    10.6     KANBAN
         PKPS                    Kanban identification, control cycle
         PKHD                    Kanban control cycle (header data)
         PKER                    Error log for Kanban containers
    10.7     Reservations
         RESB                    Material reservations
         RKPF                    header
    10.8     Capacity planning
    KBKO                    Header record for capacity requirements
    KBED                    Capacity requirements records
    KBEZ                    Add. data for table KBED (for indiv. capacities/splits)
    10.9     Planned independent requirements
         PBIM                    Independent requirements for material
         PBED                    Independent requirement data
         PBHI                    Independent requirement history
         PBIV                    Independent requirement index
         PBIC                    Independent requirement index for customer req.

  • Determining internal table structure dynamically

    Hi,
    I have a number of internal tables in my program which I declare using types. As an example:
    TYPES: begin of ty_hierarchy,
                   control_id(6)   type c,
                   node_id(10)     type c,
                   node_name       type bezei40,
                   material        type matnr,
                   node_level      type prodh_stuf,
                   node_parent(10) type c,
                end of ty_hierarchy.
    DATA: it_hierarchy type ty_hierarchy occurs 0.
    Further down my program I need to determine the structure of internal table IT_HIERARCHY dynamically. Because I have a number of internal tables, I need to determine which internal table is being processed. Therefore it's important that I know the structure of the table that I'm currently processing.
    I am aware of the CL_ABAP* classes and function modules like GET_COMPONENT_LIST. However, because I have declared my tables using the TYPES statement, the method/function cannot read my table structure correctly. If I change my declaration to the one below, the method/function works! However, I don't want to do this, as I use field symbols to reference my internal tables and need to use the TYPE statement.
    DATA: begin of ty_hierarchy,
                   control_id(6)   type c,
                   node_id(10)     type c,
                   node_name       type bezei40,
                   material        type matnr,
                   node_level      type prodh_stuf,
                   node_parent(10) type c,
                end of ty_hierarchy.
    DATA: begin of it_hierarchy occurs 0.
                 include structure ty_hierarchy.
    DATA: end of it_hierarchy.
    Does anyone know on how I can determine my  internal table structure dynamically but still keeping my internal table declarations using TYPE statement?
    Any help would be greatly appreciated with reward points .....
    Thanks
    Liam

    Hello Liam
    Both the ABAP-OO approach and the FM-based approach described by Eswar work well with your way of defining the itabs. Below I describe three different ways to get the structure of your itab dynamically:
    - directly using the itab
    - using a field symbol
    - using a data reference
    REPORT  zus_sdn_dynamic_itabs.
    TYPE-POOLS: abap.
    TYPES: BEGIN OF ty_hierarchy,
    control_id(6) TYPE c,
    node_id(10) TYPE c,
    node_name TYPE bezei40,
    material TYPE matnr,
    node_level TYPE prodh_stuf,
    node_parent(10) TYPE c,
    END OF ty_hierarchy.
    DATA:
      gs_hierarchy    TYPE ty_hierarchy,
      it_hierarchy    TYPE ty_hierarchy OCCURS 0.
    DATA:
      gt_comp         TYPE abap_compdescr_tab,
      gs_comp_a       LIKE LINE OF gt_comp,
      gd_type         TYPE abap_typekind,
      gs_comp    TYPE rstrucinfo,
      it_comp TYPE TABLE OF rstrucinfo.
    DATA:
      go_struct    TYPE REF TO cl_abap_structdescr,
      go_table     TYPE REF TO cl_abap_tabledescr,
      gdo_data     TYPE REF TO data.
    FIELD-SYMBOLS:
      <gt_itab>    TYPE table.
    START-OF-SELECTION.
      GET REFERENCE OF it_hierarchy INTO gdo_data.
      ASSIGN gdo_data->* TO <gt_itab>.
    * (1) Describe directly by using the itab
    *  go_table  ?= cl_abap_structdescr=>describe_by_data( it_hierarchy ).
    * (2) Describe indirectly by using field symbol
    *  go_table  ?= cl_abap_structdescr=>describe_by_data( <gt_itab> ).
    * (3) Describe by data reference to itab
      go_table  ?= cl_abap_structdescr=>describe_by_data_ref( gdo_data ).
      go_struct ?= go_table->get_table_line_type( ).
      WRITE: / 'ABAP-OO Version:'.
      gt_comp = go_struct->components.
      LOOP AT gt_comp INTO gs_comp_a.
        WRITE: / gs_comp_a-name,
                 gs_comp_a-length,
                 gs_comp_a-type_kind,
                 gs_comp_a-decimals.
      ENDLOOP.
      SKIP 2.
      CALL FUNCTION 'GET_COMPONENT_LIST'
        EXPORTING
          program    = sy-repid
          fieldname  = 'GS_HIERARCHY'
        TABLES
          components = it_comp.
      WRITE: / 'Function Module Version:'.
      LOOP AT it_comp INTO gs_comp.
        WRITE: / gs_comp-compname,
                 gs_comp-level,
                 gs_comp-leng,
                 gs_comp-type,
                 gs_comp-olen,
                 gs_comp-decs.
      ENDLOOP.
    END-OF-SELECTION.
    Regards
      Uwe
