Import of one table taking too much time

Hi All,
I have a 10g database running with ASM under HP-UX, with 4 processors and 8 GB of RAM.
I have a problem importing one table with 6 million rows: it takes too much time, and even after 2 days the data was still not imported. I don't know where the problem is.
I also tried to use Data Pump, but when exporting the table from the other server (a RAC database) it gives the errors below:
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 2 with process name "DW04" prematurely terminated
ORA-31671: Worker process DW04 had an unhandled exception.
ORA-12801: error signaled in parallel query server P029, instance ab-db2:abdb2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31626: job does not exist
ORA-06512: at "SYS.ORACLE_DATAPUMP", line 19
ORA-06512: at "SYS.KUPW$WORKER", line 1342
ORA-06512: at line 2
Thanks for your help with this problem.
regards
raitsarevo

This is the command I used:
time expdp abillity/4dd1ct3d dumpfile=cb_coupons.dmp logfile=cb_coupons.log directory=exp_dir parallel=4 tables=cb_coupons
and this is a portion of my alert log file; no error is signaled:
Fri Apr 18 15:57:35 2008
ALTER SYSTEM SET service_names='abdb','SYS$SYS.KUPC$C_1_20080418155733.ABDB' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 15:57:35 2008
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$C_1_20080418155733.ABDB','abdb','SYS$SYS.KUPC$S_1_20080418155733.ABDB' SCOPE=MEMORY SID='abdb1';
kupprdp: master process DM00 started with pid=212, OS id=10470
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY', 'KUPC$C_1_20080418155733', 'KUPC$S_1_20080418155733', 0);
kupprdp: worker process DW01 started with worker id=1, pid=215, OS id=10621
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY');
kupprdp: worker process DW02 started with worker id=2, pid=219, OS id=11777
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY');
Fri Apr 18 15:59:41 2008
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$S_1_20080418155733.ABDB','abdb' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 15:59:41 2008
ALTER SYSTEM SET service_names='abdb' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 16:01:16 2008
Thread 1 advanced to log sequence 62141
Current log# 5 seq# 62141 mem# 0: +ASM_DG2/abdb/onlinelog/group_5.286.618685253
Current log# 5 seq# 62141 mem# 1: +ASM_DG1/abdb/onlinelog/group_5.10591.618685257
Thanks for your help
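
One hedged lead to follow: the ORA-29913 (ODCIEXTTABLEOPEN) above is raised by a parallel worker running on the second RAC instance (ab-db2:abdb2), which suggests the path behind the EXP_DIR directory object may not exist or may not be readable on that node. A minimal sketch of a serial retry that keeps the single worker on the local instance (password masked here):

time expdp abillity/**** dumpfile=cb_coupons.dmp logfile=cb_coupons.log directory=exp_dir parallel=1 tables=cb_coupons

While the job runs, its state can be watched from the dictionary:

SELECT owner_name, job_name, state, degree
FROM   dba_datapump_jobs;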

Similar Messages

  • Problem with one table taking a long time to query.

    We have one table with around 6000 rows and 30 columns. It has 1 primary key, 5 indexes, and 2 foreign keys.
    select primarykeycolumn from table
    takes 0.003 seconds.
    If we add a where clause on an indexed column, it takes approximately 0.003 seconds.
    If we add a simple where clause on ANY column that is not indexed, it takes around 7 seconds!
    I have done a create table test as (select * from table), recreated all constraints, and this again takes milliseconds!
    The only thing I noticed was that the table had a LONG RAW column in it, and its physical size via dba_segments was 777 MB. I dropped this column but the speed has not improved. Then again, the dba_segments size has not changed either.
    Is there anything I can check, or anything I can do, to see where the problem could be?
    Hope I have made myself clear.
    Thanks in advance

    Is this what is required?
    PLAN_TABLE_OUTPUT
    Plan hash value: 3474698526
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 10 | 21582 (1)| 00:04:19 |
    |* 1 | TABLE ACCESS FULL| SFJR | 1 | 10 | 21582 (1)| 00:04:19 |
    recursive calls     1
    db block gets     0
    consistent gets     98098
    physical reads     95014
    redo size     0
    bytes sent via SQL*Net to client     725
    bytes received via SQL*Net from client     606
    SQL*Net roundtrips to/from client     2
    sorts (memory)     1
    sorts (disk)     0
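
    The statistics above point to a segment whose high-water mark is still sized for the old LONG RAW data: dropping a column does not shrink the segment, so a full scan still reads every block below the high-water mark (98,098 consistent gets for 6000 rows). A minimal sketch of reclaiming the space, assuming the SFJR table from the plan and a maintenance window (now that the LONG RAW column is gone, a MOVE is possible; it locks the table and leaves its indexes UNUSABLE):

    ALTER TABLE sfjr MOVE;
    -- rebuild every index on the table; the index name here is a placeholder
    ALTER INDEX sfjr_pk REBUILD;
    -- refresh optimizer statistics on the rebuilt segment
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SFJR')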

  • Data Pump - Importing one table index

    Is it possible to import one table's index alone (for any table, e.g. EMP)? If it can be done, what should the parameter file look like?
    Thanks and Regards
    harris

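    A hedged sketch of one way to do this with Data Pump: in a table-mode import, the INCLUDE parameter can restrict the job to index definitions, so something like the following (directory, dump file, and table names are placeholders) would create only the indexes of EMP, building them from the already-imported table data:

    impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=exp.dmp TABLES=emp INCLUDE=INDEX

    With the classic imp utility there is no direct equivalent, but INDEXFILE=idx.sql writes the CREATE INDEX statements to a script that can then be edited and run by hand.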

  • Function Module Extraction from KONV Table taking a lot of time for extraction

    Hi
    I have a requirement wherein I need to get records from the KONV table (Conditions (Transaction Data)). I need the data corresponding to Application (KAPPL) = 'F'.
    For this I had written one function module, but it is taking a lot of time (about 2.5 hours) to fetch the records, as there are a large number of records in the KONV table.
    I am pasting the function module code for reference.
    Kindly guide me as to how the extraction performance can be improved.
    Function Module Code:
    FUNCTION ZBW_SHPMNT_COND.
    *"  Local interface:
    *"  IMPORTING
    *"     VALUE(I_REQUNR) TYPE  SBIWA_S_INTERFACE-REQUNR
    *"     VALUE(I_ISOURCE) TYPE  SBIWA_S_INTERFACE-ISOURCE OPTIONAL
    *"     VALUE(I_MAXSIZE) TYPE  SBIWA_S_INTERFACE-MAXSIZE OPTIONAL
    *"     VALUE(I_INITFLAG) TYPE  SBIWA_S_INTERFACE-INITFLAG OPTIONAL
    *"     VALUE(I_UPDMODE) TYPE  SBIWA_S_INTERFACE-UPDMODE OPTIONAL
    *"     VALUE(I_DATAPAKID) TYPE  SBIWA_S_INTERFACE-DATAPAKID OPTIONAL
    *"     VALUE(I_PRIVATE_MODE) OPTIONAL
    *"     VALUE(I_CALLMODE) LIKE  ROARCHD200-CALLMODE OPTIONAL
    *"  TABLES
    *"      I_T_SELECT TYPE  SBIWA_T_SELECT OPTIONAL
    *"      I_T_FIELDS TYPE  SBIWA_T_FIELDS OPTIONAL
    *"      E_T_DATA STRUCTURE  ZBW_SHPMNT_COND OPTIONAL
    *"      E_T_SOURCE_STRUCTURE_NAME OPTIONAL
    *"  EXCEPTIONS
    *"      NO_MORE_DATA
    *"      ERROR_PASSED_TO_MESS_HANDLER

    * The input parameter I_DATAPAKID is not supported yet!
      TABLES: KONV.

    * Auxiliary selection criteria structure
      DATA: l_s_select TYPE sbiwa_s_select.

    * Maximum number of lines for the DB table
      STATICS: l_maxsize TYPE sbiwa_s_interface-maxsize.

      STATICS: S_S_IF TYPE SRSC_S_IF_SIMPLE,
    * counter
               S_COUNTER_DATAPAKID LIKE SY-TABIX,
    * cursor
               S_CURSOR TYPE CURSOR.

    * Select ranges
      RANGES: L_R_KNUMV  FOR KONV-KNUMV,
              L_R_KSCHL  FOR KONV-KSCHL,
              L_R_KDATU  FOR KONV-KDATU.

    * Internal table holding only the KONV fields that are extracted
    * DATA: I_KONV LIKE KONV OCCURS 0 WITH HEADER LINE.
      DATA: BEGIN OF I_KONV OCCURS 0,
              MANDT LIKE konv-mandt,
              KNUMV LIKE konv-knumv,
              KPOSN LIKE konv-kposn,
              STUNR LIKE konv-stunr,
              ZAEHK LIKE konv-zaehk,
              KAPPL LIKE konv-kappl,
              KSCHL LIKE konv-kschl,
              KDATU LIKE konv-kdatu,
              KBETR LIKE konv-kbetr,
              WAERS LIKE konv-waers,
            END OF I_KONV.

    * Initialization mode (first call by SAPI) or data transfer mode
    * (following calls)?
      IF i_initflag = sbiwa_c_flag_on.

    * Initialization: check input parameters,
    *                 buffer input parameters,
    *                 prepare data selection.
    * Invalid second initialization call -> error exit
        IF NOT g_flag_interface_initialized IS INITIAL.
          IF 1 = 2.
            MESSAGE e008(r3).
          ENDIF.
    * log_write: logging macro from the function group include
          log_write 'E'                    "message type
                    'R3'                   "message class
                    '008'                  "message number
                    ' '                    "message variable 1
                    ' '.                   "message variable 2
          RAISE error_passed_to_mess_handler.
        ENDIF.

    * Check InfoSource validity
        CASE i_isource.
          WHEN 'X'.
          WHEN 'Y'.
          WHEN 'Z'.
          WHEN OTHERS.
            IF 1 = 2. MESSAGE e009(r3). ENDIF.
            log_write 'E'                  "message type
                      'R3'                 "message class
                      '009'                "message number
                      i_isource            "message variable 1
                      ' '.                 "message variable 2
            RAISE error_passed_to_mess_handler.
        ENDCASE.

    * Check for supported update mode (full and delta)
        CASE i_updmode.
          WHEN 'F'.
          WHEN 'D'.
          WHEN OTHERS.
            IF 1 = 2. MESSAGE e011(r3). ENDIF.
            log_write 'E'                  "message type
                      'R3'                 "message class
                      '011'                "message number
                      i_updmode            "message variable 1
                      ' '.                 "message variable 2
            RAISE error_passed_to_mess_handler.
        ENDCASE.

        APPEND LINES OF i_t_select TO g_t_select.

    * Fill parameter buffer for data extraction calls
        g_s_interface-requnr    = i_requnr.
        g_s_interface-isource   = i_isource.
        g_s_interface-maxsize   = i_maxsize.
        g_s_interface-initflag  = i_initflag.
        g_s_interface-updmode   = i_updmode.
        g_s_interface-datapakid = i_datapakid.
        g_flag_interface_initialized = sbiwa_c_flag_on.

    * Fill field list table for an optimized select statement
    * (in case there is no 1:1 relation between InfoSource fields
    * and database table fields this may be far from being trivial)
        APPEND LINES OF i_t_fields TO g_t_fields.

    * Interpretation of date selection for generic extraction
        CALL FUNCTION 'RSA3_DATE_RANGE_CONVERT'
          TABLES
            i_t_select = g_t_select.

      ELSE.                 "Initialization mode or data extraction?

    * All supported update modes are extracted the same way
        CASE g_s_interface-updmode.
          WHEN 'F' OR 'C' OR 'I'.
        ENDCASE.

    * First data package -> OPEN CURSOR
        IF g_counter_datapakid = 0.

          l_maxsize = g_s_interface-maxsize.

    * Build select ranges from the transferred selection criteria
          LOOP AT g_t_select INTO l_s_select WHERE fieldnm = 'KNUMV'.
            MOVE-CORRESPONDING l_s_select TO l_r_knumv.
            APPEND l_r_knumv.
          ENDLOOP.
          LOOP AT g_t_select INTO l_s_select WHERE fieldnm = 'KSCHL'.
            MOVE-CORRESPONDING l_s_select TO l_r_kschl.
            APPEND l_r_kschl.
          ENDLOOP.
          LOOP AT g_t_select INTO l_s_select WHERE fieldnm = 'KDATU'.
            MOVE-CORRESPONDING l_s_select TO l_r_kdatu.
            APPEND l_r_kdatu.
          ENDLOOP.

    * In case of full upload:
    * fill field list table for an optimized select statement
          APPEND LINES OF I_T_FIELDS TO S_S_IF-T_FIELDS.

          OPEN CURSOR G_CURSOR FOR
            SELECT MANDT
                   KNUMV
                   KPOSN
                   STUNR
                   ZAEHK
                   KAPPL
                   KSCHL
                   KDATU
                   KBETR
                   WAERS
            FROM   KONV
            WHERE  KNUMV IN l_r_knumv
            AND    KSCHL IN l_r_kschl
            AND    KDATU IN l_r_kdatu
            AND    KAPPL EQ 'F'.
        ENDIF.

        REFRESH I_KONV.
    * l_maxsize holds the buffered maximum package size
    * (S_S_IF-MAXSIZE is never filled in this module)
        FETCH NEXT CURSOR G_CURSOR
                   APPENDING CORRESPONDING FIELDS OF TABLE I_KONV
                   PACKAGE SIZE l_maxsize.
        IF SY-SUBRC <> 0.
          CLOSE CURSOR G_CURSOR.
          RAISE NO_MORE_DATA.
        ENDIF.

    * KAPPL = 'F' is already guaranteed by the WHERE clause of the cursor,
    * so the check inside the loop is redundant but kept for safety
        LOOP AT I_KONV.
          IF I_KONV-KAPPL EQ 'F'.
            CLEAR E_T_DATA.
            E_T_DATA-MANDT = I_KONV-MANDT.
            E_T_DATA-KNUMV = I_KONV-KNUMV.
            E_T_DATA-KPOSN = I_KONV-KPOSN.
            E_T_DATA-STUNR = I_KONV-STUNR.
            E_T_DATA-ZAEHK = I_KONV-ZAEHK.
            E_T_DATA-KAPPL = I_KONV-KAPPL.
            E_T_DATA-KSCHL = I_KONV-KSCHL.
            E_T_DATA-KDATU = I_KONV-KDATU.
            E_T_DATA-KBETR = I_KONV-KBETR.
            E_T_DATA-WAERS = I_KONV-WAERS.
            APPEND E_T_DATA.
          ENDIF.
        ENDLOOP.

        g_counter_datapakid = g_counter_datapakid + 1.

      ENDIF.
    ENDFUNCTION.
    Thanks in Advance
    Regards
    Swapnil.

    Hi,
    One option to investigate is to select the data with a condition on KNUMV (the primary index).
    Since shipment costs are stored in VFKP, I would investigate whether all your 'F' condition records are referenced in that table (field VFKP-KNUMV).
    If this is the case, then something like
    SELECT * FROM konv
      INTO TABLE i_konv
      WHERE knumv IN ( SELECT knumv FROM vfkp ).
    or
    SELECT DISTINCT knumv
      INTO CORRESPONDING FIELDS OF TABLE <itab>
      FROM vfkp.
    and then
    SELECT * FROM konv
      INTO TABLE i_konv
      FOR ALL ENTRIES IN <itab>
      WHERE knumv = <itab>-knumv.
    (check that <itab> is not empty before the FOR ALL ENTRIES select)
    will definitely speed it up.
    Hope this helps...
    Olivier

  • ACCTIT table taking too much time

    Hi,
    In SE16 on the ACCTIT table I entered a G/L account number; when I executed it in production, it took too much time to show the result.
    Thank you

    Hi,
    Here I am sending the details of the technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thank you

  • Importing one table question

    I have a DMP file and I want to import all tables except one (it's a large table). I know I can do that with a parameter file, but my question is, can I.........
    import all but the 1 table (the large one), start the import of that 1 table, and have people start using the DB while that one table is importing? This large table does not have critical data they need to access right away, so my thought was that I could import everything else first (a small amount of data), start the import of the large table, and the users could access the DB while that 1 table is importing.
    Due to special circumstances IMP/EXP is their only backup solution (please, no lectures on that, I KNOW, I know....., but it is what it is).

    I can't think of anything that would prevent this from working. You just need to make sure that the large table does not have any ref constraints, or other associations with the other tables that may get screwed up while the other users are using the database.
    Dean
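
    A hedged sketch of that two-pass approach with the classic imp utility (file, user, and table names are placeholders; imp has no EXCLUDE option, so the split is done with two TABLES lists). First pass, the small tables, with a parameter file containing a line like TABLES=(T1,T2,T3):

    imp system/pwd FILE=full.dmp FROMUSER=app TOUSER=app PARFILE=small_tables.par

    Second pass, started afterwards while users are already on the system:

    imp system/pwd FILE=full.dmp FROMUSER=app TOUSER=app TABLES=(BIG_TABLE)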

  • Why can I only import one TDMS signal at a time in Signal Express?

    I have dozens of data files which were originally recorded as LabVIEW waveform files.  I converted them all to TDMS in LabVIEW in hopes that I could import them to Signal Express (with the S&V Suite) for further processing.  Each data file contains more than 20 channels.
    When I try to import them to SE, I am unable to import the entire group.  Instead, I am limited to one channel at a time.  This would take days to import all channels, and is not what I want at all.  Is there some way to import an entire group of channels at once?
    JR

    I was using the "Logged Signals from LabVIEW TDMS file" import; perhaps I had installed it on SE 2.5 before upgrading to 3.0.  Regardless, I downloaded the ZIP file, and ran the msi with the "Repair" function.  This did not change the functionality.  I then used your suggestion, and redigitized one of the files with the Express VI set for TDMS.  Same deal, although instead of the channel names I had been able to specify using the TDMS subVIs, now it lists them all as "Voltage_x", where x is a channel number beginning with 0.
    I've attached two images as examples of what I'm experiencing.  The first (temp1.JPG) shows where I've tried to select the entire group.  You can see that "Convert File" is not enabled, and the sample information has not been read.  The second image (temp2.JPG) shows where I've selected a channel.  I am only able to select one channel, not a range.  The "Convert File" button is now enabled, and sample information is correctly displayed.
    Attachments:
    temp1.JPG (57 KB)
    temp2.JPG (57 KB)

  • Truncate table taking too much time

    hi guys,
    thanks in advance...
    Oracle version: 9.2.0.5.0
    OS version: SunOS Ganesha1 5.9 Generic_122300-05 sun4u sparc SUNW,Sun-Fire-V890
    Application: PeopleSoft, version 8.4
    Everything had been running fine until last week.
    Whenever a process such as billing or d_dairy is executed, it selects some temporary tables and starts truncating them; each truncate takes 5 to 8 minutes even when the table has 0 rows.
    If more than one user executes a process (even different processes), the sessions end up waiting on locks.
    regs
    deep..
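
    A hedged diagnostic sketch, assuming you can catch a session mid-TRUNCATE: check what it is waiting on, since slow truncates of empty tables are often dominated by checkpoint or enqueue waits rather than by the amount of data (the bind below stands for the SID of the truncating session):

    SELECT sid, event, p1text, p1, seconds_in_wait
    FROM   v$session_wait
    WHERE  sid = :truncating_sid;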

  • Query on flashback_transaction_query table taking ridiculously long time

    Oracle 10.2.0.3.0 on Solaris :
    I am trying to use Flashback Transaction Query table to track transactions and generate the undo_sql within a time period for an entire schema using the following sql :
    SELECT XID,START_SCN,COMMIT_SCN,OPERATION,TABLE_NAME,TABLE_OWNER,LOGON_USER,UNDO_SQL
    FROM flashback_transaction_query
    WHERE start_timestamp >= TO_TIMESTAMP ('2007-08-16 11:50:00AM','YYYY-MM-DD HH:MI:SSAM')
    AND start_timestamp <= TO_TIMESTAMP ('2007-08-16 11:55:00AM','YYYY-MM-DD HH:MI:SSAM')
    AND TABLE_OWNER = 'JEFFERSON';
    None of my attempts to run this query have succeeded so far; it just keeps executing and never seems to end.
    The longest I have waited is 50 minutes before cancelling it.
    I did read through Metalink doc ID 270270.1 (which I think is close); however, the solution is not relevant to my requirement.
    Any suggestions would be of help. Thanks

    I found that if I did the following:
    select t2.*
    from
      ( select taddr
        from v$session
        where username = '<username>'
      ) t1
      inner join
      v$transaction t2
      on t1.taddr = t2.addr
    /
    ... and used the XID value in this:
    select *
    from flashback_transaction_query
    where xid = hextoraw('<the value of XID from above>');
    ... that it would come back fast.
    But even then, I would have to wait a little bit before the update statement seemed to register elsewhere in the database. There was a delay. But once the update seemed to register -- and you reselected -- it was fast.
    I had no luck using those other columns in 10.1.0.5.
    I also ran DBMS_STATS.GATHER_FIXED_OBJECT_STATS and DBMS_STATS.GATHER_DICTIONARY_STATS but I do not know if they changed anything or if I just was not waiting long enough for the statement to register.

  • Delete operation on an indexed table taking a lot of time

    We are working on 10g R2. We have an archiving script that copies a pre-decided set of records from table X (an indexed table) into table Y (a non-indexed table).
    After the records are inserted into Y, they are deleted from X. There are close to 50 million records to be archived this way.
    While testing the script in a development instance with a million records, we found that skipping the delete step brings the run time to about 45 minutes, whereas with the delete included it takes 2 hours.
    How can we reduce this overhead?
    Dropping the indexes and recreating them is not an option!

    My method is logical if you are planning to migrate at least 90% of all the data from X to Y. If so:
    for the new X table you only need at most 10% of the size of the current X table (in my previous post, by "the same as" I did not mean the data, I meant only the DDL, except storage). Moreover, after renaming X to X_ and copying the "data not to be copied to Y" from X_ to X, you may drop the unnecessary indexes from X_ to free up some space.
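
    A minimal sketch of that rename-and-rebuild approach (table, index, and predicate names are placeholders; constraints, grants and triggers also have to be recreated on the new table):

    -- keep the old segment around under a new name
    ALTER TABLE x RENAME TO x_;
    -- rebuild X with only the rows that are NOT being archived
    CREATE TABLE x AS SELECT * FROM x_ WHERE keep_flag = 'Y';
    -- recreate the indexes on the (now much smaller) X
    CREATE INDEX x_ix1 ON x (key_col);
    -- X_ still holds the full data set: copy the archived rows into Y
    -- from there, then drop X_ once the archive is verified.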

  • Analyze table taking a lot of time

    Hi,
    I am analyzing a fact table and it takes almost 1 hour. Is there any solution for this?
    I am using COMPUTE STATISTICS.
    regards,
    sandeep

    Hi,
    Why not use DBMS_STATS, which collects statistics in parallel and is therefore much faster than the ANALYZE command? I strongly recommend DBMS_STATS for partitioned tables: it is much faster and collects both local and global stats.
    The ANALYZE command for a partitioned table looks like:
    analyze table <schema>.<table> partition (<partition_name>) estimate statistics sample 5 percent;
    Use GATHER_SCHEMA_STATS for whole-schema analysis.
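
    A hedged DBMS_STATS equivalent for a single fact table (schema, table name, and degree are placeholders; the 5 percent sample mirrors the ANALYZE example above):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCOTT',
        tabname          => 'FACT_TABLE',
        estimate_percent => 5,
        degree           => 4,      -- gather in parallel
        granularity      => 'ALL',  -- global and partition-level stats
        cascade          => TRUE);  -- gather index statistics too
    END;
    /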

  • Importing Configuration is taking a lot of time

    Hi
    If I import a configuration using an XML file, it takes more than half an hour.
    Can we make it faster?
    Sometimes I have also observed that the import fails.
    Regards,
    Praveena

    Hi Praveena,
    Do you want to purge the configuration? If yes, then to purge the entire configuration:
    1. run the setenv command file
    2. run the command "java oracle.tip.repos.purge.PurgeManager purgeAll"
    To purge a single configuration, try providing the configuration name.
    For purging business and wire messages, please follow:
    http://www.oracle.com/technology/products/integration/b2b/pdf/B2B_TN_017_Purge_Utility.pdf
    The message state (Complete, Error, etc.) should be given as the argument for 'MessageState'.
    Regards,
    Anuj

  • Import taking too much time

    Hi all
    I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables takes too much time to import.
    Description:
    1. The export was taken from the source database, which is Oracle 8i with character set WE8ISO8859P1.
    2. I am importing into 10g with character set UTF8; the national character set is also the same.
    3. The dump file is about 1.5 GB.
    4. I got errors like "value too large for column", so in the target DB (UTF8) I converted all columns from VARCHAR2 to CHAR.
    5. During the import some tables import very fast, but at one particular table it gets very slow.
    Please help me. Thanks in advance...

    Hello,
    "4. I got errors like value too large for column, so in the target DB (UTF8) I converted all columns from VARCHAR2 to CHAR."
    For point 4, this is typically due to the character set conversion.
    You export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in 1 byte, so 1 CHAR = 1 BYTE. In UTF8 (Unicode), characters are encoded in up to 4 bytes, so 1 CHAR > 1 BYTE.
    For this reason you'll have to modify the length of your CHAR or VARCHAR2 columns, or add the CHAR option (by default it's BYTE) in the column datatype definition of the tables. For instance:
    VARCHAR2(100 CHAR)
    The NLS_LENGTH_SEMANTICS parameter may also be used, but it's not very well handled by export/import.
    So, I suggest this:
    1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
    2. Create all your tables (empty) on the target database from a script (without the indexes and constraints).
    3. Import the data into the tables.
    4. Import the indexes and constraints.
    You'll find more information in the following note on MOS:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    "5. During the import some tables import very fast, but at one particular table it gets very slow."
    For point 5, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
    Also, a question: why did you choose UTF8 for your target database and not AL32UTF8? AL32UTF8 is recommended for Unicode.
    Hope this helps.
    Best regards,
    Jean-Valentin
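
    A hedged sketch of step 1 above (assumes the instance uses an spfile; with SCOPE=SPFILE the value only takes effect after the restart):

    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = CHAR SCOPE = SPFILE;
    -- after the restart, lengths in new DDL are interpreted as characters:
    CREATE TABLE t (name VARCHAR2(100));  -- now 100 CHAR rather than 100 BYTE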

  • How to export & import a customer tablespace from a non-Unicode to a Unicode system

    hello all,
    I have performed a migration from a single code page system to a Unicode system (ECC 6.0 SR2).
    After completion of the export & import I noticed that one customer tablespace is missing,
    e.g. PSAPABCD.
    I checked the source system and noticed that the tablespace was created by the customer and not maintained properly as required; therefore it was neither exported nor imported.
    Please advise me how I should manage to export the tablespace from the non-Unicode system and import it into the Unicode system.
    Is there any workaround?
    Thanks & rgds,
    -rahul

    Hi Rahul,
    please have a look at
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f0f9f37c-be11-2a10-3baf-d5d7de5fb78d
    point 3) f. and g.
    However, in order to keep consistency, I would highly recommend exporting and importing all tables at the same time.
    Best regards,
    Nils Buerckel
    SAP AG

  • How to import a table from another Oracle database?

    Hi all,
    I would like to use PL/SQL to import one table from another Oracle database server. Is it possible to do this?
    A server                                 B server
    table: test <------------------------> table: newtest
    The TNS profile is already configured; the connection is ready.
    thanks a lot!
    Best Regards,
    Carlos

    If I don't have the TEST table on server B, will the COPY command create this table on server B with the same structure?
    If you specify CREATE as a clause, the table will be created:
    SQL> help copy
    COPY
    COPY copies data from a query to a table in a local or remote
    database. COPY supports CHAR, DATE, LONG, NUMBER and VARCHAR2.
    COPY {FROM database | TO database | FROM database TO database}
         {APPEND|CREATE|INSERT|REPLACE} destination_table
         [(column, column, column, ...)] USING query
    where database has the following syntax:
         username[/password]@connect_identifier
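
    A hedged usage sketch for this exact case (credentials and connect identifiers are placeholders; the trailing hyphen is the SQL*Plus line-continuation character):

    SQL> COPY FROM carlos/pwd@server_a TO carlos/pwd@server_b -
    > CREATE newtest USING SELECT * FROM test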
