Importing simple tables

Ok, this may seem like a silly question, but I've looked for info and can't find any...and am about ready to rip my hair out.
Is there a way to put a simple table into iWeb?
I have tried creating it in Pages, but all that imports is the file name of the document....
Cut and paste loses the formatting...
What am I doing wrong?
thanks in advance

Sorry, I should have added that it is just a text-based table, so there is no 'math' stuff involved. I just want to put it on the web page in a table style to make it look nice.

Similar Messages

  • {SOL}Problem in Export/Import a simple table between two diff. characterset

    Hi ,
    I have created a simple table in the SCOTT schema:
    SQL> CREATE TABLE TEST(A NUMBER(1) , B VARCHAR2(10));
    Table created
    SQL> INSERT INTO TEST VALUES(1 , 'TEST_TEST');
    1 row inserted
    SQL> COMMIT;
    Commit complete
    SQL> INSERT INTO TEST VALUES(2 , 'ΤΕΣΤ_ΤΕΣΤ');     <------------greek chars
    1 row inserted
    SQL> COMMIT;
    Commit complete
    The nls_parameters:
    SQL> SELECT * FROM NLS_INSTANCE_PARAMETERS;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   GREEK
    NLS_TERRITORY                  GREECE
    NLS_SORT                      
    NLS_DATE_LANGUAGE             
    NLS_DATE_FORMAT               
    NLS_CURRENCY                  
    NLS_NUMERIC_CHARACTERS        
    NLS_ISO_CURRENCY              
    NLS_CALENDAR                  
    NLS_TIME_FORMAT               
    NLS_TIMESTAMP_FORMAT          
    NLS_TIME_TZ_FORMAT            
    NLS_TIMESTAMP_TZ_FORMAT       
    NLS_DUAL_CURRENCY             
    NLS_COMP                      
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    17 rows selected
    SQL> SELECT * FROM NLS_SESSION_PARAMETERS;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    17 rows selected
    and db characterset is EL8MSWIN1253
    I export as follows (generally following the instructions found in Metalink Note 227332.1):
    C:\Documents and Settings\s_k>SET ORACLE_SID=EPESY
    C:\Documents and Settings\s_k>SET NLS_LANG=GREEK_GREECE.EL8MSWIN1253
    C:\Documents and Settings\s_k>C:\oracle\product\10.2.0\database10g\BIN\exp SYSTEM/passwd@EPESY FILE=C:\TEST.DMP TABLES=(SCOTT.TEST) ROWS=Y LOG=C:\TEST2.TXT
    Export: Release 10.2.0.1.0 - Production on Sun Jun 22 12:28:58 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EL8MSWIN1253 character set and AL16UTF16 NCHAR character set
    About to export specified tables via Conventional Path ...
    Current user changed to SCOTT
    . . exporting table                          TEST          2 rows exported
    Export terminated successfully without warnings.
    Then I shut down this database and start the other one, which has these NLS parameters:
    SQL> select * from nls_session_parameters;
    PARAMETER                                                                        VALUE
    NLS_LANGUAGE                                                                     AMERICAN
    NLS_TERRITORY                                                                    AMERICA
    NLS_CURRENCY                                                                     $
    NLS_ISO_CURRENCY                                                                 AMERICA
    NLS_NUMERIC_CHARACTERS                                                           .,
    NLS_CALENDAR                                                                     GREGORIAN
    NLS_DATE_FORMAT                                                                  DD-MON-RR
    NLS_DATE_LANGUAGE                                                                AMERICAN
    NLS_SORT                                                                         BINARY
    NLS_TIME_FORMAT                                                                  HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT                                                             DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT                                                               HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT                                                          DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY                                                                $
    NLS_COMP                                                                         BINARY
    NLS_LENGTH_SEMANTICS                                                             CHAR
    NLS_NCHAR_CONV_EXCP                                                              FALSE
    17 rows selected
    SQL>
    SQL> select * from nls_instance_parameters;
    PARAMETER                                                                        VALUE
    NLS_LANGUAGE                                                                     GREEK
    NLS_TERRITORY                                                                    GREECE
    NLS_SORT                                                                        
    NLS_DATE_LANGUAGE                                                               
    NLS_DATE_FORMAT                                                                 
    NLS_CURRENCY                                                                    
    NLS_NUMERIC_CHARACTERS                                                          
    NLS_ISO_CURRENCY                                                                
    NLS_CALENDAR                                                                    
    NLS_TIME_FORMAT                                                                 
    NLS_TIMESTAMP_FORMAT                                                            
    NLS_TIME_TZ_FORMAT                                                              
    NLS_TIMESTAMP_TZ_FORMAT                                                         
    NLS_DUAL_CURRENCY                                                               
    NLS_COMP                                                                        
    NLS_LENGTH_SEMANTICS                                                             CHAR
    NLS_NCHAR_CONV_EXCP                                                              FALSE
    17 rows selected
    with this db characterset: UTF8
    C:\Documents and Settings\s_k>SET NLS_LANG=GREEK_GREECE.EL8MSWIN1253
    C:\Documents and Settings\s_k>C:\oracle\product\10.2.0\database10g\BIN\imp system/passwd@info FROMUSER=SCOTT TOUSER=SCOTT FILE=C:\TEST.DMP LOG=C:\TEST0_IMP.TXT
    Import: Release 10.2.0.1.0 - Production on Sun Jun 22 12:40:16 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EL8MSWIN1253 character set and UTF8 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
    . importing SCOTT's objects into SCOTT
    . . importing table                         "TEST"          2 rows imported
    Import terminated successfully without warnings.
    C:\Documents and Settings\s_k>SQLPLUS SCOTT/TIGER
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jun 22 12:41:20 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> SELECT * FROM TEST;
             A B
             1 TEST_TEST
             2 ????_????
    What may be the cause?
    Note: I use DB 10g Release 2 on the Windows XP platform, and the two DB instances reside on the same machine.
    Thanks...
    Sim

    "Generally speaking the value of the NLS_LANG registry key or environment variable needs to be equal to the characterset of the database."
    Yes... that's why I have set the NLS_LANG environment variable to GREEK_GREECE.EL8MSWIN1253, equal to:
    SQL> select * from nls_database_parameters;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               EL8MSWIN1253
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              10.2.0.1.0
    "nls_language doesn't come into play, nor nls_instance_parameters."
    Yes...it's true.
    "So, in the dump you posted, no one can tell whether those characters were INSERTed correctly at all. Your NLS_LANG *registry key* may have been set to an incorrect value (it defaults to American_America.MSWIN1252)."
    Actually, I have used a third-party tool, PL/SQL Developer (which does have OracleDB10g as its default home).
    Looking at the Windows registry for the OracleDB10g home, NLS_LANG is equal to GREEK_GREECE.EL8MSWIN1253.
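    To actually verify whether the Greek characters were stored correctly in the source database (rather than trusting what the client displays), one option - just a sketch using the test table from this thread - is to look at the stored code points with DUMP:
    -- run in the source (EL8MSWIN1253) database; DUMP(expr, 1016) shows the stored
    -- bytes in hex plus the character set name, independent of client NLS settings
    SELECT a, b, DUMP(b, 1016) AS stored_bytes FROM scott.test;
    If the hex bytes for row 2 are already wrong here, the problem happened at INSERT time, not during export/import.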
    "Thirdly, as I implied above the NLS_LANG on import should have been American_America.UTF8."
    According to Note 227332.1, if the database charactersets of the two DBs are not the same, then it is preferable for the conversion to be done during the import and not the export.
    So, in an example described there - export from an AMERICAN_AMERICA.WE8MSWIN1252 db and import into a UTF8 db (which seems exactly the same as my case) - the import is done like this:
    c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
    c:\>imp ....
    The conversion to UTF8 is done while inserting the data in the UTF8 database.
    Additional notes:
    I have tried many different NLS_LANG settings when doing the import:
    1) AMERICAN_AMERICA.UTF8
    2) GREEK_GREECE.EL8ISO8859P7
    3) The NLS_LANG that corresponds to the codepage reported by the chcp command
    All attempts display some '?' characters.
    Anyway... I'll continue reading and testing.
    Thanks... a lot for your points
    Sim

  • Datapump table import, simple question

    Hi,
    As a junior DBA, I am a little bit confused.
    Suppose that a user wants me to import a table with Data Pump which he exported from another DB with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details).
    If I only know the table name, should I ask these three questions: 1) schema, 2) tablespace, 3) Oracle version?
    From the docs I know the remapping capabilities of impdp, but are they mandatory when importing just a table?
    thanks in advance

    Hi,
    Suppose that a user wants me to import a table with datapump which he exported from another db with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details)...
    If I only know the table name, should I ask these three questions: 1) schema, 2) tablespace, 3) Oracle version?
    You can get this information from the dumpfile if you want - just to make sure you get the right information. If you run your import command but add:
    sqlfile=mysql.sql
    then you can edit that SQL file to see what is in the dumpfile. It won't show data, but it will show all of the metadata (tables, tablespaces, etc.). It is a .sql file that contains all of the CREATE statements that would have been executed if you had not added the sqlfile parameter.
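    For example, a sketch of such a metadata-only run (the dump file and SQL file names here are made up, not taken from this thread):
    impdp system/password dumpfile=mydump.dmp sqlfile=mysql.sql logfile=sqlfile_run.log
    Nothing is loaded by this command; mysql.sql ends up holding every CREATE statement recorded in the dumpfile, so the schema and tablespace names can simply be read out of it.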
    From the docs I know the remapping capabilities of impdp, but are they mandatory when importing just a table?
    You never have to remap anything, but if the dumpfile contains a table scott.emp, then it will import that table into scott.emp. If you want it to go into blake, then you need a remap_schema. If it is going into tablespace tbs1 and you want it in tbs2, then you need a remap_tablespace.
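    A sketch of that remapped single-table import, using the example names from the previous paragraph (everything else is a placeholder):
    impdp system/password dumpfile=mydump.dmp tables=scott.emp remap_schema=scott:blake remap_tablespace=tbs1:tbs2
    Without the two REMAP parameters, the table goes back into scott and its original tablespace exactly as recorded in the dumpfile.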
    Suppose that an end user wants me to export a specific table using datapump...
    Should I also give him the name of the tablespace where the exported table resides?
    It would be nice, but see above - you can get the tablespace name from the sqlfile command on import.
    Hope this helps.
    Dean

  • Import Internal Table

    Hi guys...
    How can I import an internal table in a method of a BAdI?
    For example:
    I have the itab_ekpo that I export to memory in a specific program, and I need to import this table.
    How can I do this in a method?
    tks
    flavio

    Hi,
    If the table data is filled in the same session and the BAdI is called in the same session, then you can use the simple EXPORT ... TO MEMORY ID and IMPORT ... FROM MEMORY ID statements.
    Otherwise, if the two run in different sessions, then use EXPORT ... TO DATABASE indx(ar) ID key and IMPORT ... FROM DATABASE indx(ar) ID key in your BAdI.
    Please see the F1 documentation for the keyword help.
    Regards,
    SRinivas

  • Export & Import internal table data across the programs

    Hi All,
    I have two different reports, let's say ZPRO1 & ZPRO2. I want to export internal table data from ZPRO1 to ZPRO2 and then do some calculations in ZPRO2 with the imported internal table data (from ZPRO1).
    If I use the SUBMIT or CALL TRANSACTION syntax, ZPRO1 is displayed, but I only want to use the internal table data of ZPRO1.
    Please advise.
    Pranitha.

    Hi,
      Please follow the simple code and try to solve your issue.
    Code in program1
    types: begin of ty_itab,
           matnr type matnr,
           end of ty_itab.
    data: it_itab type table of ty_itab,
          wa_itab type ty_itab,
          it_itab1 type table of ty_itab.
    select matnr from mara into table it_itab.
    export it_itab to shared buffer indx(st) id 'ABC'.
    Code in program2
    types: begin of ty_itab,
           matnr type matnr,
           end of ty_itab.
    data: it_itab type table of ty_itab,
          wa_itab type ty_itab.
    import it_itab from shared buffer indx(st) id 'ABC'.
    loop at it_itab into wa_itab.
    write:/ wa_itab-matnr.
    endloop.
      Please delete the shared buffer indx(st) id 'ABC' once the internal table data is no longer needed.
    Regards
    Dande

  • Export and import XMLType table

    Hi ,
    I want to export one table which contains an XMLType column from Oracle 11.2.0.1.0 and import it into version 11.2.0.2.0.
    I got the following error when I tried to export the table with the exp utility:
    EXP-00107: Feature (BINARY XML) of column ZZZZ in table XXXX.YYYY is not supported. The table will not be exported.
    Then I tried Data Pump export and import. The Data Pump export works; following is the log:
    Export: Release 11.2.0.1.0 - Production on Wed Oct 17 17:53:41 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ;;; Legacy Mode Active due to the following parameters:
    ;;; Legacy Mode Parameter: "log=<xxxxx>Oct17.log" Location: Command Line, Replaced with: "logfile=T<xxxxx>_Oct17.log"
    ;;; Legacy Mode has set reuse_dumpfiles=true parameter.
    Starting "<xxxxx>"."SYS_EXPORT_TABLE_01": <xxxxx>/******** DUMPFILE=<xxxxx>Oct172.dmp TABLES=<xxxxx>.<xxxxx> logfile=<xxxxx>Oct17.log reusedumpfiles=true
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 13.23 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "<xxxxx>"."<xxxxx>" 13.38 GB 223955 rows
    Master table "<xxxxx>"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for <xxxxx>.SYS_EXPORT_TABLE_01 is:
    E:\ORACLEDB\ADMIN\LOCALORA11G\DPDUMP\<xxxxx>OCT172.DMP
    Job "<xxxxx>"."SYS_EXPORT_TABLE_01" successfully completed at 20:30:14
    I got an error when I imported the dump using the following command:
    impdp sys_dba/***** dumpfile=XYZ_OCT17_2.DMP logfile=import_vmdb_XYZ_Oct17_2.log FROMUSER=XXXX TOUSER=YYYY CONTENT=DATA_ONLY TRANSFORM=oid:n TABLE_EXISTS_ACTION=append
    The error is:
    KUP-11007: conversion error loading table "CC_DBA"."XXXX"
    ORA-01403: no data found
    ORA-31693: Table data object "XXX_DBA"."XXXX" failed to load/unload and is being skipped due to error:
    Please help me to get a solution for this.

    CREATE UNIQUE INDEX "XXXXX"."XXXX_XBRL_XMLINDEX_IX" ON "CCCC"."XXXX_XBRL" (EXTRACTVALUE(SYS_MAKEXML(128,"SYS_NC00014$"),'/xbrl'))
    The above index was created by us because we are storing files like:
    <?xml version="1.0" encoding="UTF-8"?>
    <xbrl xmlns="http://www.xbrl.org/2003/instance" xmlns:AAAAAA="http://www.AAAAAA.COM/AAAAAA" xmlns:ddd4217="http://www.xbrl.org/2003/ddd4217" xmlns:link="http://www.xbrl.org/2003/linkbase" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <link:schemaRef xlink:href="http://www.fsm.AAAAAA.COM/AAAAAA-xbrl/v1/AAAAAA-taxonomy-2009-v2.11.xsd" xlink:type="simple" />
    <context id="Company_Current_ForPeriod"> ...
    I tried the Data Pump export both with and without DATA_OPTIONS=XML_CLOBS. Both times the export succeeded, but the import hit the same KUP-11007: conversion error loading table "tab_owner"."bbbbb_XBRL" error.
    I tried the import in different ways:
    1. Create the table first, then import the data only (CONTENT=DATA_ONLY)
    2. Import the whole table
    In both cases the table DDL is created successfully, but it fails while importing the data. Following is the log from the import:
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "aaaaa"."SYS_IMPORT_TABLE_02" successfully loaded/unloaded
    Starting "aaaaa"."SYS_IMPORT_TABLE_02":  aaaaa/********@vm_dba TABLES=tab_owner.bbbbb_XBRL dumpfile=bbbbb_XBRL_OCT17_2.DMP logfile=import_vmdb
    bbbbb_XBRL_Oct29.log DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    KUP-11007: conversion error loading table "tab_owner"."bbbbb_XBRL"
    ORA-01403: no data found
    ORA-31693: Table data object "tab_owner"."bbbbb_XBRL" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-26062: Can not continue from previous errors.
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "aaaaa"."SYS_IMPORT_TABLE_02" completed with 1 error(s) at 18:42:26

  • Importing 30 tables into one SQL Table (Help Required)

    Dear Experts,
    I am new to SQL Server. I need to gather 30 different Excel files into one SQL Server table, and I have imported all the Excel files into different databases. All the tables have 186 columns with different datatypes; I couldn't change the data types during the conversion.
    Now all the columns have different data types, which occupy extra space in my database.
    The problem is that I need to consolidate all the databases into one database or table. I have created a table, but I have no idea how to import all the tables into that one table and define the datatypes in the new table while importing the old tables.
    Please help me in this matter, or if anybody has a Skype or other chat ID, please let me know so that I may explain it better.
    Thanking you in advance.
    Best Regards,
    SQL_beginner

    There are several things you can try. If you have SSIS, take a look at these:
    http://www.singhvikash.in/2013/06/ssis-how-to-load-multiple-excel-files.html
    https://www.simple-talk.com/sql/ssis/importing-excel-data-into-sql-server-via-ssis-questions-you-were-too-shy-to-ask/
    Also, if your files have virtually the same name, like files with dates in the name, you can loop through files in your folder, and increment the loop with each run through.
    DECLARE @intFlag INT
    SET @intFlag = 1
    WHILE (@intFlag <= 30)
    BEGIN
        PRINT @intFlag
        DECLARE @fullpath1 varchar(1000)
        SELECT @fullpath1 = '''\\path to your files\'
            + convert(varchar, getdate() - @intFlag, 112)
            + '_your-text-file-name.txt'''
        DECLARE @cmd1 nvarchar(1000)
        SELECT @cmd1 = 'bulk insert [dbo].[your-table-name] from '
            + @fullpath1
            + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 2, ROWTERMINATOR=''0x0a'')'
        EXEC (@cmd1)
        SET @intFlag = @intFlag + 1
    END
    GO
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

  • Importing mapping tables via txt-file

    Hi
    Is it possible to import mapping tables for some dimensions using a txt or csv file instead of the Excel template?

    Yes.
    There is a format called "Ledger Link" which is dirt simple. It is a CSV file that has Source, Target, and Description. If you have a sign flip, you put a - in front of the Target. If, instead of an explicit map, it is a "like" (wildcard) mapping, the Source field holds the rule.
    For example......
    110011,3rdPartySales,Third Party Sales
    110021,-ICSales,Intercompany Sales
    11003?,ServiceRevenue,Service Revenue for Accounts starting with 11003 with a length of 6 characters

  • Regarding Exporting and Importing internal table

    Hello Experts,
    I have two programs:
    1) Main program: it creates batch jobs through JOB_OPEN, SUBMIT and JOB_CLOSE, giving the subprogram in the SUBMIT.
    I am using EXPORT it TO MEMORY ID 'MID' in the subprogram to export the internal table data to SAP memory.
    The data is processed in the subprogram, which exports it to SAP memory. I need this data in the main program (and I am using IMPORT to get the data, but it is not working).
    I use IMPORT it1 FROM MEMORY ID 'MID' to import the table data in the main program after the job completes (SUBMIT subprogram ... AND RETURN).
    The import does not get any data into the internal table.
    Can you please suggest something to solve this issue.
    Thank you.
    Regards,
    Anand.

    Hi,
    This is the code I am using.
    DO g_f_packets TIMES.
    * Start Immediately
           IF NOT p_imm IS INITIAL .
             g_flg_start = 'X'.
           ENDIF.
           g_f_jobname = 'KZDO_INHERIT'.
           g_f_jobno = g_f_jobno + '001'.
           CONCATENATE g_f_jobname g_f_strtdate g_f_jobno INTO g_f_jobname
                                                  SEPARATED BY '_'.
           CONDENSE g_f_jobname NO-GAPS.
           p_psize1 = p_psize1 + p_psize.
           p_psize2 = p_psize1 - p_psize + 1.
           IF p_psize2 IS INITIAL.
             p_psize2  = 1.
           ENDIF.
           g_f_spname = 'MID'.
           g_f_spid = g_f_spid + '001'.
           CONDENSE g_f_spid NO-GAPS.
           CONCATENATE g_f_spname  g_f_spid INTO g_f_spname.
           CONDENSE g_f_spname NO-GAPS.
    * ... (1) Job creating...
           CALL FUNCTION 'JOB_OPEN'
             EXPORTING
               jobname          = g_f_jobname
             IMPORTING
               jobcount         = g_f_jobcount
             EXCEPTIONS
               cant_create_job  = 1
               invalid_job_data = 2
               jobname_missing  = 3
               OTHERS           = 4.
           IF sy-subrc <> 0.
             MESSAGE e469(9j) WITH g_f_jobname.
           ENDIF.
    * (2)Report start under job name
           SUBMIT (g_c_prog_kzdo)
                  WITH p_lgreg EQ p_lgreg
                  WITH s_grvsy IN s_grvsy
                  WITH s_prvsy IN s_prvsy
                  WITH s_prdat IN s_prdat
                  WITH s_datab IN s_datab
                  WITH p1      EQ p1
                  WITH p3      EQ p3
                  WITH p4      EQ p4
                  WITH p_mailid EQ g_f_mailid
                  WITH p_psize EQ p_psize
                  WITH p_psize1 EQ p_psize1
                  WITH p_psize2 EQ p_psize2
                  WITH spid     EQ g_f_spid
                  TO SAP-SPOOL WITHOUT SPOOL DYNPRO
                  VIA JOB g_f_jobname NUMBER g_f_jobcount AND RETURN.
    *(3)Job closed when starts Immediately
           IF NOT p_imm IS INITIAL.
             IF sy-index LE g_f_nojob.
               CALL FUNCTION 'JOB_CLOSE'
                 EXPORTING
                   jobcount             = g_f_jobcount
                   jobname              = g_f_jobname
                   strtimmed            = g_flg_start
                 EXCEPTIONS
                   cant_start_immediate = 1
                   invalid_startdate    = 2
                   jobname_missing      = 3
                   job_close_failed     = 4
                   job_nosteps          = 5
                   job_notex            = 6
                   lock_failed          = 7
                   OTHERS               = 8.
               gs_jobsts-jobcount = g_f_jobcount.
               gs_jobsts-jobname  = g_f_jobname.
               gs_jobsts-spname   = g_f_spname.
               APPEND gs_jobsts to gt_jobsts.
             ELSEIF sy-index GT g_f_nojob.
               CLEAR g_f_flg.
               DO.                         " Wiating untill any job completion
                 LOOP AT gt_jobsts into gs_jobsts.
                   CLEAR g_f_status.
                   CALL FUNCTION 'BP_JOB_STATUS_GET'
                     EXPORTING
                       JOBCOUNT                         = gs_jobsts-jobcount
                       JOBNAME                          = gs_jobsts-jobname
                    IMPORTING
                       STATUS                           = g_f_status
    *            HAS_CHILD                        =
    *          EXCEPTIONS
    *            JOB_DOESNT_EXIST                 = 1
    *            UNKNOWN_ERROR                    = 2
    *            PARENT_CHILD_INCONSISTENCY       = 3
    *            OTHERS                           = 4
                   g_f_mid = gs_jobsts-spname.
                   IF g_f_status = 'F'.
                     IMPORT gt_final FROM MEMORY ID g_f_mid .
                     FREE MEMORY ID gs_jobsts-spname.
                     APPEND LINES OF gt_final to gt_final1.
                     REFRESH gt_prlist.
                     CALL FUNCTION 'JOB_CLOSE'
                       EXPORTING
                         jobcount             = g_f_jobcount
                         jobname              = g_f_jobname
                         strtimmed            = g_flg_start
                       EXCEPTIONS
                         cant_start_immediate = 1
                         invalid_startdate    = 2
                         jobname_missing      = 3
                         job_close_failed     = 4
                         job_nosteps          = 5
                         job_notex            = 6
                         lock_failed          = 7
                         OTHERS               = 8.
                     IF sy-subrc = 0.
                       g_f_flg = 'X'.
                       gs_jobsts1-jobcount = g_f_jobcount.
                       gs_jobsts1-jobname  = g_f_jobname.
                       gs_jobsts1-spname   = g_f_spname.
                       APPEND gs_jobsts1 TO gt_jobsts.
                       DELETE TABLE gt_jobsts FROM gs_jobsts.
                       EXIT.
                     ENDIF.
                   ENDIF.
                 ENDLOOP.
                 IF g_f_flg = 'X'.
                   CLEAR g_f_flg.
                   EXIT.
                 ENDIF.
               ENDDO.
             ENDIF.
           ENDIF.
           IF sy-subrc <> 0.
             MESSAGE e539(scpr) WITH g_f_jobname.
           ENDIF.
           COMMIT WORK .
         ENDDO.

  • Getting error while importing a table partition

    Hi,
    I am trying to import a table partition from OEM and ran into the following error:
    Job IMPORT000042 has been reopened at Friday, 13 June, 2008 14:44
    Restarting "SYSMAN"."IMPORT000042":
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "SCOTT"."CONTAINER":"PARTITION_5" failed to load/unload and is being skipped due to error:
    ORA-06502: PL/SQL: numeric or value error
    LPX-00210: expected '<' instead of 'n'
    Job "SYSMAN"."IMPORT000042" completed with 1 error(s) at 14:44
    Job state: COMPLETED
    Thanks

    What's the source and target database Oracle version?
    What's the character set of both databases?
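    To answer those two questions on each database, a quick check (plain SQL, nothing specific to this job) could be:
    SELECT banner FROM v$version;
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
    Run it on both the source and the target and compare the results.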

  • Error while Importing a Table in To Target Module

    Hi All,
    I am getting the following error message while importing a table into an OWB target module.
    API4806: Object description is not allowed to be translated before object business name. Please give a translation to the business name first.
    Why am I getting this error?
    Rajesh

    Guys,
    I was able to solve the problem. I think the problem was a language settings difference between the database and the OWB repository.
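    If it helps anyone hitting the same API4806 error: the database side of that comparison can be checked with a query like the one below (the OWB repository's own language setting has to be checked from the OWB client, which is not shown here):
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET');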

  • How to import external table, which exist in export dump file.

    My export dump file contains one external table. While importing it into my development instance, I am getting the error "ORA-00911: invalid character".
    The original definition of the external table is given below:
    CREATE TABLE EXT_TABLE_EV02_PRICEMARTDATA
    EGORDERNUMBER VARCHAR2(255 BYTE),
    EGINVOICENUMBER VARCHAR2(255 BYTE),
    EGLINEITEMNUMBER VARCHAR2(255 BYTE),
    EGUID VARCHAR2(255 BYTE),
    EGBRAND VARCHAR2(255 BYTE),
    EGPRODUCTLINE VARCHAR2(255 BYTE),
    EGPRODUCTGROUP VARCHAR2(255 BYTE),
    EGPRODUCTSUBGROUP VARCHAR2(255 BYTE),
    EGMARKETCLASS VARCHAR2(255 BYTE),
    EGSKU VARCHAR2(255 BYTE),
    EGDISCOUNTGROUP VARCHAR2(255 BYTE),
    EGREGION VARCHAR2(255 BYTE),
    EGAREA VARCHAR2(255 BYTE),
    EGSALESREP VARCHAR2(255 BYTE),
    EGDISTRIBUTORCODE VARCHAR2(255 BYTE),
    EGDISTRIBUTOR VARCHAR2(255 BYTE),
    EGECMTIER VARCHAR2(255 BYTE),
    EGECM VARCHAR2(255 BYTE),
    EGSOLATIER VARCHAR2(255 BYTE),
    EGSOLA VARCHAR2(255 BYTE),
    EGTRANSACTIONTYPE VARCHAR2(255 BYTE),
    EGQUOTENUMBER VARCHAR2(255 BYTE),
    EGACCOUNTTYPE VARCHAR2(255 BYTE),
    EGFINANCIALENTITY VARCHAR2(255 BYTE),
    C25 VARCHAR2(255 BYTE),
    EGFINANCIALENTITYCODE VARCHAR2(255 BYTE),
    C27 VARCHAR2(255 BYTE),
    EGBUYINGGROUP VARCHAR2(255 BYTE),
    QTY NUMBER,
    EGTRXDATE DATE,
    EGLISTPRICE NUMBER,
    EGUOM NUMBER,
    EGUNITLISTPRICE NUMBER,
    EGMULTIPLIER NUMBER,
    EGUNITDISCOUNT NUMBER,
    EGCUSTOMERNETPRICE NUMBER,
    EGFREIGHTOUTBOUNDCHARGES NUMBER,
    EGMINIMUMORDERCHARGES NUMBER,
    EGRESTOCKINGCHARGES NUMBER,
    EGINVOICEPRICE NUMBER,
    EGCOMMISSIONS NUMBER,
    EGCASHDISCOUNTS NUMBER,
    EGBUYINGGROUPREBATES NUMBER,
    EGINCENTIVEREBATES NUMBER,
    EGRETURNS NUMBER,
    EGOTHERCREDITS NUMBER,
    EGCOOP NUMBER,
    EGPOCKETPRICE NUMBER,
    EGFREIGHTCOSTS NUMBER,
    EGJOURNALBILLINGCOSTS NUMBER,
    EGMINIMUMORDERCOSTS NUMBER,
    EGORDERENTRYCOSTS NUMBER,
    EGRESTOCKINGCOSTSWAREHOUSE NUMBER,
    EGRETURNSCOSTADMIN NUMBER,
    EGMATERIALCOSTS NUMBER,
    EGLABORCOSTS NUMBER,
    EGOVERHEADCOSTS NUMBER,
    EGPRICEADMINISTRATIONCOSTS NUMBER,
    EGSHORTPAYMENTCOSTS NUMBER,
    EGTERMCOSTS NUMBER,
    EGPOCKETMARGIN NUMBER,
    EGPOCKETMARGINGP NUMBER,
    EGWEIGHTEDAVEMULTIPLIER NUMBER
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY EV02_PRICEMARTDATA_CSV_CON
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv')
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    NOMONITORING;
    While importing, when I looked at the log file, I saw that it is failing to create the external table, with the error "ORA-00911: invalid character".
    Can someone suggest how to import external tables?
    Help addressing this issue would be highly appreciated.
    Naveen

    Hi Srinath,
    When I looked at the CREATE TABLE syntax of the external table in the import dump log file, it showed a few lines as below. I could not understand these special characters, and the CREATE TABLE definition is failing on them with ORA-00911: invalid character:
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv').
    I even looked at the CREATE TABLE DDL from TOAD. It is the same as I mentioned earlier.
    Naveen
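    For what it's worth, one way to see the exact DDL that the import is trying to run, without executing it - assuming the classic exp/imp utilities are being used here, and with placeholder names - is the SHOW option of imp:
    imp system/password FILE=your_dump.dmp FULL=Y SHOW=Y LOG=ddl_preview.log
    SHOW=Y writes the statements to the log without running them, so the full CREATE TABLE ... ORGANIZATION EXTERNAL text (including whatever character triggers the ORA-00911) can be inspected, cleaned up and run by hand. The dump holds only the external table's definition anyway; its data stays in the operating-system file it points to.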

  • Can I use Power Query to Import a table from Excel sheet range which starts not from the top row?

    Hi,
    Being an experienced Excel user before Power BI, I am just starting to explore the M and Power Query capabilities, and need help already (ain't easy to google this use case somehow):
    I need to import a table which sits in an Excel file with its header row in row 17 of the sheet, and with some metadata header in the preceding rows of columns A and B.
    01: Report name, Quick Report
    02: Report Date, 1/1/2014
    17: Employee Name, Manager, etc...
    18: John Doe, Matt Beaver, etc.
    Both (a) direct attempt to load as Excel file and (b) the indirect way through [From Folder] and formula in custom column -- both lead to the same error: "[DataFormat.Error] External table is not in the expected format."
    Specifically, I tried to use the [Power Query -> From File -> From Folder] functionality, select an Excel file and add a custom column to access the binary content: [Add Custom Column] with formula "=Excel.Workbook([Content])".
    It looks like Power Query expects a rectangular range with headers full-width followed by a contiguous table range to import anything, and refuses to load if that is not the case...
    QUESTION: Is there any way to load whatever-formatted data from Excel first, and then manipulate the overall imported range (like referring to rows starting from 17th using "Table.SelectRows" etc.) to read the actual data? Reading and using
    the metadata from header would be a bonus, but that comes second... The main issue is to get something from a non-regular Excel file to later work with using M formulae ...
    Thanks!
    SAM

    Finally found the answer to this one:
    You Cannot Open a Password-Protected Workbook
    If the Excel workbook is protected by a password, you cannot open it for data access, even by supplying the correct password with your connection settings, unless the workbook file is already open in the Microsoft Excel application. If you try, you receive the following error message:
    Could not decrypt file.
    ANSWER: So, I will either have to weave in work with temporary unprotected copies of the files, or require opening them before updating the data source (although this almost defeats the purpose of automation...).
    ANSWER to ORIGINAL QUESTION: the password was preventing Power Query from reading the Excel file. For the solution, see above.
    Thanks anyway for participation and inspiration, Imke!

  • Importing internal table in Adobe Interactive Forms

    Hi all,
    at the moment I do my first steps in AIF.
    I have created a report and select dd02l, dd03l and dd04l.
    dd02l is a structure. dd03l and dd04l are internal tables.
    How can I import internal tables into my function module without getting a runtime error? Fields and structures are no problem.
    I have defined both internal tables as importing parameters in my form interface because I cannot define internal tables there ;o(
    Thx 4 help & regards
    Michael

    Hi,
    Please find the attached documents; they should be helpful for you. Go to these links:
    1. https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c2567f2b-0b01-0010-b7b5-977cbf80665d
    2. https://www.sdn.sap.com/irj/sdn/abap-elearning
    Thanks; reward if helpful.

  • Importing internal table from one program to another program

    Hi everybody,
    i have one small doubt.
    I am using the SUBMIT statement and passing values from this program to another program's selection screen. The logic is written in that program, and there one internal table's values are exported to a memory ID. Now I have to import those internal table values into my program using the IMPORT statement. I am using the following syntax:
    import itab from memory id 'program name'.
    But I am getting an error saying the program name is unknown.
    What is the exact syntax for this?
    thanking you,
    giri.

    hi,
    check these statements.
    IMPORT - Get data
    Variants:
    1. IMPORT obj1 ... objn FROM DATA BUFFER f.
    2. IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    3. IMPORT obj1 ... objn FROM MEMORY.
    4. IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    5. IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    6. IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    7. IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    8. IMPORT obj1 ... objn FROM LOGFILE ID key.
    9. IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    10. IMPORT (itab) FROM ... .
    In some cases, the syntax rules that apply to Unicode programs are different than those for non-Unicode programs. For more details, see Storing Cluster Tables.
    Variant 1
    IMPORT obj1 ... objn FROM DATA BUFFER f.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports the data objects obj1 ... objn from the data buffer declared. The data buffer must be of type XSTRING . The data objects obj1 ... objn can be fields, structures, complex structures, or tables. The system imports all the data that has been stored in the data buffer f using the EXPORT ... TO DATA BUFFER statement and is listed here. It also checks that the structure used in the IMPORT statement matches the one in the EXPORT statement.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. The contents of all the objects remain unchanged.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to append new fields to the end
    of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is
    relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition prevents the system from triggering a
    runtime error, if an error occurs when the character set is converted. '#' is used as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular
    character cannot be converted when the character set is converted.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    Not all character-type fields are converted. To convert
    a field, you must create a field (or structure) that is identical to the exported field or structure, except that all its character-type components must be replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters.
    Moreover, you cannot use this addition in conjunction with the additions 3, 4, 5, 6, or 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the
    character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
    Variant 2
    IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See No implicit field names in cluster.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from the specified internal table itab. The first column in the internal table must be of the predefined type INT2 and the second must be type X. To define the first column you must refer to a data element in the ABAP Dictionary that has the predefined type INT2.
    All data that was stored in the internal table itab using EXPORT ... TO INTERNAL TABLE and listed, is imported. The system checks that the EXPORT and IMPORT structures match.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the specified data cluster were imported, the rest remain unchanged (it is possible that no data object was imported).
    SY-SUBRC = 4:
    The data objects could not be imported.
    The contents of all listed objects remain unchanged
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    Places the object in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to add new fields to the ends
    of structures, even to substructures and internal tables (the additional fields are filled with initial value during the IMPORT). It also allows you to increase the size of existing fields (C, N, X, P, I1, and I2) and to map Char fields to STRING type fields or byte fields to XSTRING type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    field or omit the last component on the highest level (till Release 4.6 this was possible without specifying an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the page order is
    relevant, that is any substructures match. With this addition, the system also ignores alignment changes arising from the Unicode conversion (for example, due to subsequent insertion of named includes).
    This addition rules out any subsequent structural enhancements (addition 3) or structural shortening (addition 4) because with this addition it is the structural limits and include limits that are to be ignored.
    As from Release 6.10, the include information will also be stored in the dataset, so that it is possible to also check whether the includes match, that is substructures and includes (named or unnamed) are treated the same. When importing data that was exported in a Release lower than 6.10, the includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition has the effect that an error in the
    character set conversion does not cause a runtime error. The system uses "#" as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The system uses the specified replacement character if a
    character cannot be converted during a character set conversion. If this addition is not specified, the system uses "#" as a replacement character.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    No character type fields are converted. For this you
    must create a field or structure that is identical to the exported field or exported structure, except that all character type fields must be replaced with hexadecimal fields.
    This addition, which is only allowed in programs with a set Unicode flag, allows you to import binary data disguised as single byte characters. This addition cannot be used in conjunction with additions 3, 4, 5, 6, and 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The codepage of the exported data is stored in the
    character-type field f1 (for example, to be able to analyze the data imported with the addition IN CHAR-TO-HEX MODE).
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 (for example, to be able analyze the data imported using the addition IN CHAR-TO-HEX MODE). The field f2 must be of type ABAP_ENDIAN, defined in type group ABAP. You must therefore include the type group ABAP in the ABAP program with a TYPE-POOLS statement.
    Variant 3
    IMPORT obj1 ... objn FROM MEMORY.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ID key
    4. ... ACCEPTING PADDING
    5. ... ACCEPTING TRUNCATION
    6. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See You Must Enter Identification and Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures or tables) from a data cluster in the ABAP memory (see EXPORT). Reads in all data without an ID that was exported to memory with "EXPORT ... TO MEMORY.". In contrast to the variant IMPORT FROM DATABASE, it does not check that the structure matches in EXPORT and IMPORT.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because the ABAP memory was empty.
    The contents of all objects remain unchanged.
    Note
    You should always use the addition 3 (... ID key) with the statement. Otherwise, the effect of the variant is not certain (EXPORT statements in different parts of a program overwrite each other in the ABAP memory), since it exists only for reasons of compatibility with R/2.
    Additional methods for selecting and deleting data clusters in the ABAP memory are provided by the system class CL_ABAP_EXPIMP_MEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... ID key
    Effect
    Imports only data stored in ABAP memory under the ID key.
    Notes
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Addition 4
    ... ACCEPTING PADDING
    Effect
    This addition allows you to append new fields to the end of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 5
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR field, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 6
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO MEMORY, DELETE FROM MEMORY, FREE MEMORY
    Variant 4
    IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (after itab(ar) or ID key )
    5. ... ACCEPTING PADDING
    6. ... ACCEPTING TRUNCATION
    7. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters and You Cannot Use Table Work Areas.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from shared memory. The data objects are read using the ID key from the area ar in the table itab - c.f. EXPORT TO SHARED MEMORY). You must use itab to specify a database table although the system reads from a memory table with the appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. You may have used the wrong ID. The contents of all the objects remain unchanged.
    Notes
    The table dbtab named according to SHARED MEMORY must be declared using TABLES (except in addition 2).
    The structure of fields (field symbols and internal tables) to be imported must match the structure of the objects exported in the dataset. The objects must be imported under the same names as those under which they were exported. Otherwise, they will not be imported.
    The key length consists of: the client (3 digits, but only if tab is client-specific); area (2 characters); ID; and line number (4 bytes). It must not exceed 64 bytes - that is, the ID must not be longer than 55 characters, if the table is client- specific.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the shared memory are provided by the system class CL_ABAP_EXPIMP_SHMEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... CLIENT g (before ID key)
    Effect
    The data is imported from client g (provided the import/export table is tab client-specific). The client, g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (after itab(ar) or ID key)
    Effect
    You need to use this addition if user data fields have been stored in the application buffer and are to be read from there. The work area wa is used instead of the table work area. The target area must correspond to the structure of the called table tab.
    Addition 5
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 6
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 7
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 4 (enlarge structure) or addition 5 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO SHARED MEMORY, DELETE FROM SHARED MEMORY
    Variant 5
    IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    Extras:
1. ... = f (for each object to be imported)
2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (last addition or after itab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
See Cannot Use Implicit Field Names in Clusters and Cannot Use Table Work Areas.
    Effect
Imports the data objects obj1 ... objn (fields or tables) from the cross-transaction application buffer. The data objects are read from the buffer area for the table itab, in the area ar, using the ID key (see EXPORT TO SHARED BUFFER). Although a database table must be specified after SHARED BUFFER, the system actually reads from a memory table with an appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this means that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table from the application buffer with the structure INDX:
    TYPES: BEGIN OF ITAB3_LINE,
             CONT(4),
           END OF ITAB3_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'KEYVALUE',
          F1(4),
          F2(8) TYPE P DECIMALS 0,
          ITAB3 TYPE STANDARD TABLE OF ITAB3_LINE,
          INDX_WA TYPE INDX.
Import the data:
    IMPORT F1 = F1 F2 = F2 ITAB3 = ITAB3
           FROM SHARED BUFFER INDX(ST) ID INDXKEY TO INDX_WA.
After the import, the data fields INDX-AEDAT and INDX-USERA in front of CLUSTR are filled with the values that these fields had at the time of the EXPORT statement.
    Notes
You must declare the table dbtab, named after SHARED BUFFER, using a TABLES statement (except with addition 4, TO wa).
    The structure of the fields, structures, and internal tables to be imported must match the structure of the objects exported to the dataset. Moreover, the objects must be imported with the same name used to export them. Otherwise, the import is not performed.
The maximum total key length is 64 bytes. The key consists of: the client (3 characters, only if the table is client-specific); the area (2 characters); the identification; and the line counter (4 bytes). For a client-specific table, this leaves 55 characters for the identification.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the cross-transaction application buffer are provided by the system class CL_ABAP_EXPIMP_SHBUF.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
The object is placed in the field f.
    Addition 3
... CLIENT g (before ID key)
    Effect
    Takes the data from the client g (if the import/export table dbtab is client-specific). The client g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (as the last addition or after itab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the application buffer and then read them from there later. The system uses a work area wa instead of a table work area. The target area must have the same structure as the table tab.
    Example
    DATA: INDX_WA TYPE INDX,
          F1.
    IMPORT F1 = F1 FROM SHARED BUFFER INDX(AR)
                   CLIENT '001' ID 'TEST'
                   TO INDX_WA.
    WRITE: / 'AEDAT:', INDX_WA-AEDAT,
           / 'USERA:', INDX_WA-USERA,
           / 'PGMID:', INDX_WA-PGMID.
    Variant 6
    IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key )
    4. ... USING form
    5. ... TO wa (last addition or after dbtab(ar))
    6. ... MAJOR-ID id1 (instead of ID key)
    7. ... MINOR-ID id2 (with MAJOR-ID id1 )
    8. ... ACCEPTING PADDING
    9. ... ACCEPTING TRUNCATION
    10. ... IGNORING STRUCTURE BOUNDARIES
    11. ... IGNORING CONVERSION ERRORS
    12. ... REPLACEMENT CHARACTER c
    13. ... IN CHAR-TO-HEX MODE
    14. ... CODE PAGE INTO f1
    15. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Fieldnames in Clusters and Cannot Use Table Work Areas.
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures, or tables) from the data cluster with ID key in area ar of the database table dbtab (see EXPORT TO DATABASE).
    The Return Code is set as follows:
    SY-SUBRC = 0:
The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table:
    TYPES: BEGIN OF TAB3_TYPE,
              CONT(4),
           END OF TAB3_TYPE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4), F2 TYPE P,
          TAB3 TYPE STANDARD TABLE OF TAB3_TYPE WITH
                    NON-UNIQUE DEFAULT KEY,
          WA_INDX TYPE INDX.
    INDXKEY = 'INDXKEY'.
    IMPORT F1   = F1
           F2   = F2
           TAB3 = TAB3 FROM DATABASE INDX(ST) ID INDXKEY
           TO WA_INDX.
    Notes
    You must declare the table dbtab, named after DATABASE, using the TABLES statement (except in addition 5).
    The structure of fields, field strings and internal tables to be imported must match the structure of the objects exported to the dataset. In addition, the objects must be imported under the same name used to export them. If this is not the case, either a runtime error occurs or no import takes place.
    Exception: You can lengthen or shorten the last field if it is of type CHAR, or add/omit CHAR fields at the end of the structure.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the database table specified are provided by the system class CL_ABAP_EXPIMP_DB.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... CLIENT g (before the ID key)
    Effect
    Data is taken from the client g (in client-specific import/export databases only). Client g must be a character-type data object (but not a string).
    Example
    DATA: F1,
          WA_INDX TYPE INDX.
    IMPORT F1 = F1 FROM DATABASE INDX(AR) CLIENT '002' ID 'TEST'
                   TO WA_INDX.
    Addition 4
    ... USING form
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Effect
The data is not read from the database. Instead, the FORM routine form is called for each record that would otherwise be read from the database. This routine can take the key of the data to be retrieved from the database table work area and write the retrieved data to this work area. The name of the routine has the format <name of database table>_<name of form>; it has one parameter that describes the operation (READ, UPDATE, or INSERT). The routine must set the field SY-SUBRC to indicate whether the function was performed successfully.
    Addition 5
    ... TO wa (after key or after dbtab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the cluster database and then read from there. The system uses the work area wa instead of a table work area. The target area entered must have the same structure as the table dbtab.
    Example
    DATA WA LIKE INDX.
    DATA F1.
    IMPORT F1 = F1 FROM DATABASE INDX(AR)
                   CLIENT '002' ID 'TEST'
                   TO WA.
    WRITE: / 'AEDAT:', WA-AEDAT,
           / 'USERA:', WA-USERA,
           / 'PGMID:', WA-PGMID.
    Addition 6
    ... MAJOR-ID id1 (instead of the ID key).
    Addition 7
    ... MINOR-ID id2 (with MAJOR-ID id1)
    This addition is not allowed in an ABAP Objects context. See Cannot Use Generic Identification.
    Effect
Searches for a record whose ID matches id1 in its first part (with the length of id1) and whose second part - if MINOR-ID id2 is also specified - is greater than or equal to id2.
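A sketch of this generic read; the keys 'DOC0001' and 'DOC0002', the area AR, and the field names are assumptions about what was exported:
DATA: F1(4),
      WA_INDX TYPE INDX.
* Read the first record whose key begins with 'DOC' and whose remaining
* key part is greater than or equal to '0002':
IMPORT F1 = F1 FROM DATABASE INDX(AR) TO WA_INDX
       MAJOR-ID 'DOC' MINOR-ID '0002'.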
    Addition 8
    ... ACCEPTING PADDING
    Effect
This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables (the IMPORT statement fills the additional fields with initial values); make existing fields of the types C, N, X, P, I1, and I2 longer; map character-type fields to STRING-type fields; or map byte-type fields to XSTRING-type fields.
    Addition 9
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
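A sketch of the truncation case, assuming the exported line type ended with an additional CHAR field that the current line type no longer has; the area ST and the key reuse the values from the example above:
TYPES: BEGIN OF SHORT_LINE,
         CONT(4),       " the trailing CHAR field of the exported line type is omitted here
       END OF SHORT_LINE.
DATA: TAB_SHORT TYPE STANDARD TABLE OF SHORT_LINE,
      WA_INDX   TYPE INDX.
IMPORT TAB3 = TAB_SHORT FROM DATABASE INDX(ST) TO WA_INDX ID 'INDXKEY'
       ACCEPTING TRUNCATION.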
    Addition 10
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 8 (enlarge structure) or addition 9 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
From Release 6.10 onwards, the include information is stored in the dataset, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When importing data that was exported in a Release prior to 6.10, includes are not checked.
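A sketch of what "only the fragment sequence is relevant" means in practice; the object name ADDR, the field names, the area AR, and the key are assumptions. The exported structure is assumed to have grouped some of the fields in a named include, while the import target below is flat:
TYPES: BEGIN OF FLAT_TYPE,
         CITY(20),
         ZIP(10),
         NAME(20),
       END OF FLAT_TYPE.
DATA: FLAT_WA TYPE FLAT_TYPE,
      WA_INDX TYPE INDX.
IMPORT ADDR = FLAT_WA FROM DATABASE INDX(AR) TO WA_INDX ID 'TEST'
       IGNORING STRUCTURE BOUNDARIES.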
    Addition 11
    ...IGNORING CONVERSION ERRORS
    Effect
This addition prevents the system from triggering a runtime error if an error occurs during character set conversion; '#' is then used as the replacement character.
    Addition 12
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular character cannot be converted when the character set is converted. If you do not use this addition, '#' is used as a replacement character.
    This addition can only be used in conjunction with addition 11.
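A sketch combining additions 11 and 12; the replacement character '?', the area AR, the key, and the field names are arbitrary choices:
DATA: TEXT1(50),
      WA_INDX TYPE INDX.
IMPORT TEXT1 = TEXT1 FROM DATABASE INDX(AR) TO WA_INDX ID 'TEST'
       IGNORING CONVERSION ERRORS
       REPLACEMENT CHARACTER '?'.
* Characters that could not be converted now appear as '?' in TEXT1.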
    Addition 13
    ... IN CHAR-TO-HEX MODE
    Effect
None of the character-type fields are converted. Instead, you must create a target field (or structure) that is identical to the exported field or structure, except that all of its character-type components are replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters. Moreover, you cannot use this addition in conjunction with the additions 8, 9, 10, 11, and 12.
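A rough sketch only, valid in a Unicode program: the length of the X field must match the byte length of the exported character field, which depends on the code page of the exporting system, so the length 8 below (for a 4-character field stored in UTF-16) is purely an assumption:
DATA: F1_RAW(8) TYPE X,     " assumed byte length of the exported CHAR(4) field
      WA_INDX   TYPE INDX.
IMPORT F1 = F1_RAW FROM DATABASE INDX(AR) TO WA_INDX ID 'TEST'
       IN CHAR-TO-HEX MODE.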
    Addition 14
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 15
    ... ENDIAN INTO f2
    Effect
The byte order (LITTLE or BIG) of the exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
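A sketch of additions 14 and 15, here combined with IN CHAR-TO-HEX MODE as suggested above; the 4-character code-page field and all other names are assumptions:
TYPE-POOLS ABAP.
DATA: F1_RAW(8)     TYPE X,
      F_CODEPAGE(4) TYPE C,            " assumed length for the code page name
      F_ENDIAN      TYPE ABAP_ENDIAN,  " indicates LITTLE or BIG byte order
      WA_INDX       TYPE INDX.
IMPORT F1 = F1_RAW FROM DATABASE INDX(AR) TO WA_INDX ID 'TEST'
       IN CHAR-TO-HEX MODE
       CODE PAGE INTO F_CODEPAGE
       ENDIAN INTO F_ENDIAN.
WRITE: / 'Code page:', F_CODEPAGE,
       / 'Byte order:', F_ENDIAN.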
    Variant 7
    IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    This variant is not allowed in an ABAP Objects context. See Cannot Use Clusters in Files
    Note
    This variant is no longer supported and cannot be used.
    Variant 8
    IMPORT obj1 ... objn FROM LOGFILE ID key.
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Extras:
1. ... = f (for each field f to be imported)
2. ... TO f (for each field f to be imported)
The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports data objects (fields, field strings or internal tables) from the update data. You must specify the update key assigned by the system (with current request number) as the key.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. An incorrect ID may have been used.
    The contents of all objects remain unchanged.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Variant 9
    IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    Extras:
1. ... CLIENT g (after dbtab(ar))
2. ... TO wa (last addition or after dbtab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Table Work Areas.
    Effect
    Imports an object directory stored under the specified ID with EXPORT TO DATABASE into the table itab. The internal table itab may not have the type HASHED TABLE or ANY TABLE.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The directory was successfully imported.
    SY-SUBRC = 4:
    The directory could not be imported, probably because an incorrect ID was used.
    The internal table itab must have the same structure as the Dictionary structure CDIR (INCLUDE STRUCTURE).
    Addition 1
    ... CLIENT g (before ID key)
    Effect
    Takes data from the client g (only with client-specific import/export databases). Client g must be a character-type data object (but not a string).
    Addition 2
    ... TO wa (last addition or after dbtab(ar))
    Effect
    Uses the work area wa instead of the table work area. When you use this addition, you do not need to declare the table dbtab, named after DATABASE using a TABLES statement. The work area entered must have the same structure as the table dbtab.
    Example
    Directory of a cluster consisting of two fields and an internal table:
    TYPES: BEGIN OF TAB3_LINE,
             CONT(4),
           END OF TAB3_LINE,
           BEGIN OF DIRTAB_LINE.
             INCLUDE STRUCTURE CDIR.
    TYPES  END OF DIRTAB_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4),
          F2(8)   TYPE P decimals 0,
          TAB3    TYPE STANDARD TABLE OF TAB3_LINE,
          DIRTAB  TYPE STANDARD TABLE OF DIRTAB_LINE,
          INDX_WA TYPE INDX.
    INDXKEY = 'INDXKEY'.
    EXPORT F1 = F1
           F2 = F2
           TAB3 = TAB3
           TO DATABASE INDX(ST) ID INDXKEY " TAB3 has 17 entries
           FROM INDX_WA.
    IMPORT DIRECTORY INTO DIRTAB FROM DATABASE INDX(ST) ID INDXKEY
           TO INDX_WA.
    Then, the table DIRTAB contains the following:
    NAME     OTYPE  FTYPE  TFILL  FLENG
    F1         F      C      0      4
    F2         F      P      0      8
    TAB3       T      C      17     4
    The meaning of the individual fields is as follows:
NAME: Name of the stored object
OTYPE: Object type (F: field, R: field string / Dictionary structure, T: internal table)

Maybe you are looking for

  • Imported Fireworks Nav Bar Animation Looping

    Hi I have imported a navigation bar I made in fireworks into dreamweaver and when I preview in browser it keeps looping. In fireworks I added a rollover effect so that when you rollover each bit of text, e.g. home, contact, games  it goes bold, and w

  • Problem with date format convertion

    Dear All, i want to convert the current system date to particular format that i have specified below in SimpleDateFormat but when ever i exceute my program it comes in different format could you help me to solve the problem DateFormat sdf = new Simpl

  • User Managed Hot Backup via Sql

    Hi being a bit old school - I am trying to understand the steps required for a Hotbackup managed via Sql ( sorry all RMAN fans ! ) I understand that I can backup each tablespace in turn ( via the necessary ALTER TABLESPACE <x> BEGIN BACKUP and then t

  • Downloading Photoshop CS5 issue

    im having the same problem. im trying to download photoshop cs5 trial version. the download assistant ask me to login in so I log in again and then downloader does nothing. I downloaded Fireworks CS5 two months ago on this laptop so I cant understand

  • Table move

    Hi, We are using oracle 11.2.0.3 on aix. while move the table,it got hang.The table size only 2gb.Eventhough we down the database and started it. We checked in v$session_longops.nothing there.Noone use the db. Thanks