Table export

Hi,
I have a table with 100 million records. I need to export that table and import it into a different database... how much time will it take?
-GK

Tons of variables here, but if you're just looking for a benchmark as a guideline: I exported and imported this table, 2,000,000 rows, on my desktop (3 GHz, WinXP; who knows how fast my drive is).
The (direct mode) export took about 20 seconds. The import took about 45 seconds. Scaled linearly, 100 million rows of similar width would be roughly 15-20 minutes to export and 35-40 minutes to import, though in practice your disk and network speed will dominate.
It's modeled after ALL_OBJECTS, so you have an idea of what the data lengths are.
For what that's worth.
NAME            Null?     Type
--------------- --------- -------------
OWNER           NOT NULL  VARCHAR2(30)
OBJECT_NAME     NOT NULL  VARCHAR2(30)
SUBOBJECT_NAME            VARCHAR2(30)
OBJECT_ID       NOT NULL  NUMBER
DATA_OBJECT_ID            NUMBER
OBJECT_TYPE               VARCHAR2(18)
CREATED         NOT NULL  DATE
LAST_DDL_TIME   NOT NULL  DATE
TIMESTAMP                 VARCHAR2(19)
STATUS                    VARCHAR2(7)
TEMPORARY                 VARCHAR2(1)
GENERATED                 VARCHAR2(1)
SECONDARY                 VARCHAR2(1)
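If you want to sanity-check the timing on your own hardware before committing to a window, you can wrap the same exp/imp run in a stopwatch. A minimal sketch, assuming the classic exp/imp client tools are on your PATH; the scott/tiger login and BIG_TABLE name are placeholders:

import subprocess
import time

# Placeholder credentials and table name; substitute your own.
USERID = "scott/tiger"
TABLE = "BIG_TABLE"

start = time.monotonic()
# direct=y matches the "(direct mode)" benchmark described above.
subprocess.run(
    ["exp", f"userid={USERID}", f"tables={TABLE}",
     "file=big_table.dmp", "direct=y"],
    check=True,
)
print(f"export: {time.monotonic() - start:.1f}s")

start = time.monotonic()
# Import the dump into the target database (point your TNS alias there).
subprocess.run(
    ["imp", f"userid={USERID}", f"tables={TABLE}",
     "file=big_table.dmp", "ignore=y"],
    check=True,
)
print(f"import: {time.monotonic() - start:.1f}s")

Time a 2,000,000-row slice first, then extrapolate; the scaling is close to linear until the disks saturate.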

Similar Messages

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom-coded WHERE clauses (basically adding additional columns to the default R3ta column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down; in some cases, table export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like the performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slow.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated the excessive sorting.
    I checked the access path of all the CE1OC01 packages through EXPLAIN, and they all use the same index to return results. EXPLAIN returns similar execution times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
    0 SELECT STATEMENT ( Estimated Costs = 8.448E+06 [timerons] )
      1 RETURN
        2 FETCH CE1OC01
          3 IXSCAN CE1OC01~4  (#key columns: 2)
    query execution time [millisec]    |     333
    uow elapsed time [microsec]        | 429,907
    total user CPU time [microsec]     |       0
    total system CPU time [microsec]   |       0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea is to export only 5 parts of the table at a time; perhaps there is a throughput or logical limitation when all 20 of the exports run at the same time. Or create a single-column index on BELNR (the default R3ta column) and see if that shows any improvement.
    Does anyone have any ideas on why some of the table moves fast but the rest of it moves slowly?
    We also notice that the "fast" parts of the table are at the very end of the table, and we wonder if the index is less fragmented in that range; a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a cluster ratio of 54%, so, for purposes of the export, it may make sense to REORG the table and cluster it around this index. By contrast, the primary key index has a cluster ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding SORTHEAP: we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. For this run and the previous run, however, SORTHEAP has remained available and has not been overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Having always been trained to eliminate full tablescans, we found it counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much overhead, especially since our cluster ratio was terrible (in the 50% range), so the index was definitely working against us, bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify that the data from source to target matches up 1 for 1 by running a consistency check (a sketch of one follows below).
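    In case it is useful to anyone else, here is a minimal sketch of the row-count half of that consistency check. It assumes generic ODBC connectivity via the pyodbc module; the DSNs and the table list are placeholders:

    import pyodbc

    # Placeholder DSNs for the source and target systems.
    SOURCE = "DSN=src_db;UID=user;PWD=secret"
    TARGET = "DSN=tgt_db;UID=user;PWD=secret"
    TABLES = ["CE1OC01", "PLPO", "BCST_SR"]

    def row_count(conn_str, table):
        """Return COUNT(*) for the given table."""
        with pyodbc.connect(conn_str) as conn:
            cur = conn.cursor()
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]

    for table in TABLES:
        src = row_count(SOURCE, table)
        tgt = row_count(TARGET, table)
        print(f"{table}: source={src} target={tgt} "
              f"{'OK' if src == tgt else 'MISMATCH'}")

    Row counts only catch gross differences; for a stronger check, compare checksums over key ranges as well.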
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • [SQL SERVER 2000] Generic table exporter

    Hello everybody.
    First of all, sorry for my bad English, but I'm French ;-)
    My internship consists of making a generic table exporter (driven by a table list) that exports to CSV files.
    I have tried 2 solutions:
    1 - Create a DTS package with a Dynamic Properties Task. The problem is that I can easily change the destination file, but when I change the source table I don't know how to remap the transformations between source and destination (do you see what I mean?). Any idea?
    2 - Use the bcp command. Very simple, but what to do when a table contains the separator character? For example, if a table row is "toto" | "I am , very happy", the csv file will look like this: toto, I am , very happy. The data cannot be read back reliably (too many commas).
    Does someone have a solution?
    The last point is how to export the table structure. For the moment, using the table structure, I generate a SQL query which creates the table (I write this query to a file). Isn't there a "cleaner" solution?
    Thanks in advance and have a nice day, all

    Answers:
    1. Use an ActiveX script to transform. Refer to
    http://technet.microsoft.com/en-us/library/aa933459(v=sql.80).aspx
    2. Replace the pipe delimiter first with a comma if it is a single column, and use the bcp command. Refer to
    http://technet.microsoft.com/en-us/library/aa174646(v=sql.80).aspx
    3. Regarding generating the script, refer to
    http://stackoverflow.com/questions/4058977/exporting-tables-and-indexes-from-one-sql-server-2000-database-to-another
    Regards, RSingh
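    On the separator problem specifically, the most robust fix is to quote fields rather than fight the delimiter, which bcp cannot do by itself. A minimal sketch of the idea using Python's csv module (the DSN, table name, and output file are placeholders):

    import csv
    import pyodbc

    # Placeholder connection string and table name.
    conn = pyodbc.connect("DSN=my_sql2000;UID=user;PWD=secret")
    cur = conn.cursor()
    cur.execute("SELECT * FROM my_table")

    with open("my_table.csv", "w", newline="") as f:
        # QUOTE_MINIMAL wraps any field that contains the delimiter in
        # quotes, so "I am , very happy" survives the round trip intact.
        writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur.fetchall())

    Any CSV reader that understands quoting (Excel, Python's csv.reader, etc.) will then parse the embedded commas correctly.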

  • During the Unicode conversion, cluster table export taking too much time

    Dear All
    During the Unicode conversion, the cluster table export took too much time: approximately 24 hours for 6 tables. Could you please advise how we can reduce the time?
    thanks
    Jainnedra

    Hello,
    Use the latest R3load from the SAP Service Marketplace.
    Also refer to the note:
    Note 1019362 - Very long run times during SPUMG scans
    Regards,
    Nitin Salunkhe

  • How to export a large number of tables, one by one, in a single job

    Product: ORACLE SERVER
    Date written: 2002-04-12
    How to export a large number of tables, one by one, in a single job
    =====================================================
    Purpose
    When exports must be taken table by table, there may be so many tables
    that it is impossible to list them all in the tables option.
    This note describes an easier way to handle that case.
    Explanation
    1. Log in with sqlplus scott/tiger:
    SQL> set heading off
    SQL> set pagesize 5000 (at least the number of tables the user owns)
    SQL> spool scott.out
    SQL> select tname from tab;
    SQL> exit
    2. Doing the above saves all of user scott's table names into scott.out:
    $ vi scott.out
         SQL> select tname from tab;
         BONUS
         DEPT
         DUMMY
         EMP
         SALGRADE
         SQL> exit
    In the vi editor, delete the unneeded first and last lines (the SQL>
    prompts), then remove the trailing null (blank) characters after each
    table name.
    < Removing the null characters and preparing the export file >
    After opening the file:
    1) :g/ /s///g      <--- removes the trailing null characters after each table name
    2) :1
    3) append a comma after the BONUS table name (the first line)
    4) :map @ j$. then press Enter   <--- a macro that repeats step 3 on the next line
    5) Shift+2 (keep holding it down)   <--- appends a comma to the end of each following line
    6) the last line does not need a comma
    Split the out file above into chunks of 100 lines (fewer if the table
    names are long) and save each chunk under its own file name.
    e.g. put lines 1-100 into scott1.out, lines 101-200 into scott2.out, ...,
    and remove the comma on the last line of each file.
    Compile the script4exp.c below to create a shell script for the export.
    (If necessary, modify the export options inside the script and recompile.)
    After compiling:
    $ script4exp scott1.out scott1.sh scott tiger scott1.dmp scott1.log
    $ script4exp scott2.out scott2.sh scott tiger scott2.dmp scott2.log
    This creates scott1.sh, scott2.sh, ...; make them executable and run
    them as background jobs.
    Notes: 1. After the jobs finish, check the file size of each *.sh file.
    2. Where possible, take large tables out of the out file and export them separately.
    ====script4exp.c=================
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #define EXPCMD "exp %s/%s buffer=52428800 file=%s log=%s tables="
    int main(int argc, char **argv)
    {
        FILE *ifp, *ofp;
        char buff[256], *pt;
        if (argc != 7) {
            printf("\nUSAGE :\n");
            printf("$ script4exp infile.out outfile.sh username passwd dmpfile.dmp logfile.log\n\n");
            exit(0);
        }
        if ((ifp = fopen(argv[1], "r")) == NULL) {
            printf("%s file open fail !!\n", argv[1]);
            exit(0);
        }
        if ((ofp = fopen(argv[2], "w")) == NULL) {
            printf("%s file open fail !!\n", argv[2]);
            exit(0);
        }
        /* write the exp command line, then append the comma-separated table list */
        fprintf(ofp, EXPCMD, argv[3], argv[4], argv[5], argv[6]);
        while (fgets(buff, sizeof(buff), ifp) != NULL) {
            if ((pt = strchr(buff, '\n')) != NULL)
                *pt = '\0';                  /* strip the trailing newline */
            fprintf(ofp, "%s", buff);
            memset(buff, 0, sizeof(buff));
        }
        fprintf(ofp, "\n");
        fclose(ifp);
        fclose(ofp);
        return 0;
    }
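    For what it's worth, the vi editing and the C program can be collapsed into one small script today. A sketch of the same idea in Python (the file naming and exp options mirror the example above):

    import sys

    def make_exp_scripts(listfile, user, passwd, chunk=100):
        """Read one table name per line and write exp shell scripts,
        'chunk' tables per script, like script4exp.c above."""
        with open(listfile) as f:
            tables = [line.strip() for line in f if line.strip()]
        for i in range(0, len(tables), chunk):
            n = i // chunk + 1
            with open(f"{user}{n}.sh", "w") as out:
                out.write(
                    f"exp {user}/{passwd} buffer=52428800 "
                    f"file={user}{n}.dmp log={user}{n}.log "
                    f"tables={','.join(tables[i:i + chunk])}\n"
                )

    if __name__ == "__main__":
        # e.g. python make_exp_scripts.py scott.out scott tiger
        make_exp_scripts(sys.argv[1], sys.argv[2], sys.argv[3])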

  • Internal table export and import in ECC 5.0 version

    Hi friends,
    I am trying to export and import an internal table from one program to another program.
    The EXPORT and IMPORT commands below are not working when I run the program in the background (using SUBMIT zxxxx VIA JOB name NUMBER number…..):
    EXPORT itab TO MEMORY ID 'ZMATERIAL_CREATE'.
    IMPORT itab FROM MEMORY ID 'ZMATERIAL_CREATE'.
    Normally it should work. Since it's not working, I am trying another alternative,
    i.e. EXPORT (ptab) INTERNAL TABLE itab.
    My SAP version is ECC 5.0.
    For your information, I am forwarding the SAP help here. Please have a look and explain how to declare the ptab internal table.
    +Extract from SAP help+
    In the dynamic case the parameter list is specified in an index table ptab with two columns. These columns can have any name and have to be of the type "character". In the first column of ptab, you have to specify the names of the parameters and in the second column the data objects. If the second column is initial, then the name of the parameter in the first column has to match the name of a data object. The data object is then stored under its name in the cluster. If the first column of ptab is initial, an uncatchable exception will be raised.
    Outside of classes you can also use a single-column internal table for parameter_list for the dynamic form. In doing so, all data objects are implicitly stored under their name in the data cluster.
    My internal table has around 45 columns.
    Please help me.
    Thanks in advance
    raghunath

    The export/import should work the way you are using it. Just make sure you are using the same memory ID and that it is unique, meaning you use it only for this itab and do not overwrite it with other values. Check that itab is not initial before you export it in program 1, then import it in program 2 with the same memory ID. Also check the case; I am not sure if it is case-sensitive.
    Here is how you use the second variant:
    Two fields with two different identifications, "P1" and "P2", are written to the ABAP memory with the dynamic variant of the cluster definition. After execution of the IMPORT statement, the contents of the fields text1 and text2 are interchanged.
    TYPES:
      BEGIN OF tab_type,
        para TYPE string,
        dobj TYPE string,
      END OF tab_type.
    DATA:
      id    TYPE c LENGTH 10 VALUE 'TEXTS',
      text1 TYPE string VALUE `IKE`,
      text2 TYPE string VALUE `TINA`,
      line  TYPE tab_type,
      itab  TYPE STANDARD TABLE OF tab_type.
    line-para = 'P1'.
    line-dobj = 'TEXT1'.
    APPEND line TO itab.
    line-para = 'P2'.
    line-dobj = 'TEXT2'.
    APPEND line TO itab.
    EXPORT (itab)     TO MEMORY ID id.
    IMPORT p1 = text2
           p2 = text1 FROM MEMORY ID id.

  • Timestamp datatype not output correctly in table export

    When using the data export from the table view, the TIMESTAMP datatype is not handled correctly. It shows as oracle.sql.TIMESTAMP@14c0761.
    It works fine from the SQL Explorer view, though.

    I'm using the same build, 1.0.0.15.27.
    You can try any export option; I tried SQL Insert.
    If you right-click from the data grid (SQL Worksheet or Table view), it works fine.
    In the table view, if you go to Actions -> Export -> SQL Insert, then it doesn't.

  • Reg. Particular table export and import of the same table

    Dear Sir
    I am an MM consultant. I would like to export only one table and import the same table back after some request has been released. How do I do this? Please help me.
    I am working on Oracle release 10.2.0.2.0.
    Thanks in advance
    Rajakumar.K

    Hello Raja,
    you want to export some table, perform some changes on the system (releasing a transport), and then re-import the old state of the table? This sounds like a very bad idea. You are inviting disaster and compromising the consistency of your system.
    Go to http://help.sap.com, choose your release, and enter the search term brspace to find out about supported ways to reorganize a table.
    Regards,
    Mark

  • Partition table export and import

    I am using Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production, and have 133 partitions in a single table. I want to take a dump of each partition separately and restore it into a new schema. How do I go about it?

    You can specify the partition you want to export by using the tables= parameter:
    expdp user/password tables=your_table:your_partition ...
    If you omit :your_partition, you will export the complete table. Do you want each partition to be a non-partitioned table? So if you have 133 partitions, do you want 133 tables? If so, just export the complete table and then on import use:
    impdp user/password partition_options=departition ...
    The result will be 1 table for each partition. The table will be called
    tablename_partitionname
    If that is too long, it gets truncated. The departition support was added in 11.1.0.6.
    Dean
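    If you really do want one dump file per partition, a small driver can generate the 133 expdp calls for you. A sketch assuming the python-oracledb driver; the credentials, DSN, and table name are placeholders, and it only prints the commands rather than running them:

    import oracledb

    # Placeholder credentials and table name.
    conn = oracledb.connect(user="user", password="password", dsn="mydb")
    cur = conn.cursor()
    cur.execute(
        "SELECT partition_name FROM user_tab_partitions "
        "WHERE table_name = :t ORDER BY partition_position",
        t="YOUR_TABLE",
    )

    for (part,) in cur:
        # One Data Pump job per partition, one dump file per partition.
        print(
            f"expdp user/password tables=YOUR_TABLE:{part} "
            f"dumpfile=YOUR_TABLE_{part}.dmp directory=DATA_PUMP_DIR"
        )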

  • User Master Records/Tables - Exporting User tables for Recovery

    Hi there,
    I work in the Security area. Outside of the client export via SCC8 for user master records/roles, is there another recommended method for saving user tables so that they can be re-imported if users and roles are deleted, selectively if needed? For instance, if a particular user group was deleted along with X number of users, could it be re-imported by selective means? SU10 is handled delicately in production when used, but unwanted results still occur. Approaches appreciated.

    Hi,
    Try the following..
    BAPI_USER_CREATE         Create a User
    BAPI_USER_CREATE1       Create a User 
    Hope that helps! 
    Regards,
    Tanveer
    Please mark helpful answers

  • FAGLFLEXA table export and import.

    Hello Gurus,
    I want to export the FAGLFLEXA table from quality client 200 and import it into development client 110.
    I would like to know the different ways to copy it between two different SAP systems. If anybody knows, please help me with it.
    Thanks & Warm Regards,
    Prashant

    Thanks for Reply,
    Actually, I have done a client copy from QAS client 200 to DEV client 110, but not all records in the FAGLFLEXA table were copied, so I want to copy those approx. 2,000,000 records from QAS to DEV. I already tried with R3trans, but it is not exporting the records from this table.
    Thanks & Warm Regards,
    Prashant

  • Total number of records of partition tables exported using datapump

    Hi All,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    OS: RHEL
    I exported a partitioned table using Data Pump and would like to verify the total number of records exported across all the partitions. The export has a query on it: WHERE ITEMDATE < TO_DATE('1-JAN-2010'). I need to compare it with the exact number of records in the actual table to check that they are the same.
    Below is the log file of the exported table. It shows the number of rows per partition, but not the total number of rows exported.
    Starting "SYS"."SYS_EXPORT_TABLE_05": '/******** AS SYSDBA' dumpfile=data_pump_dir:GSDBA_APPROVED_TL.dmp nologfile=y tables=GSDBA.APPROVED_TL query=GSDBA.APPROVED_TL:"
    WHERE ITEMDATE< TO_DATE(\'1-JAN-2010\'\)"
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 517.6 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q3" 35.02 MB 1361311 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q4" 33.23 MB 1292051 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q3" 30.53 MB 1186974 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q1" 30.44 MB 1183811 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q2" 30.29 MB 1177468 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q4" 30.09 MB 1170470 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_MAXVALUE" 0 KB 0 rows
    Master table "SYS"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_05 is:
    /u01/export/GSDBA_APPROVED_TL.dmp
    Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at 12:00:36

    I assume you want this so you can run a script to check the count? If not, and this is being done manually, then just add up the individual row counts. I'm not very good at writing scripts, but I would think that someone here could come up with a script that sums the row counts for the partitions of the table from your log file. This is not something that Data Pump writes to the log file.
    Dean
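    A minimal sketch of such a script: it assumes the log was saved to a file (here hypothetically named export.log) and that the per-partition lines keep the ". . exported ... <n> rows" format shown above:

    import re

    # Matches lines like:
    # . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q3" 35.02 MB 1361311 rows
    pattern = re.compile(r'\. \. exported .* (\d+) rows')

    total = 0
    with open("export.log") as log:
        for line in log:
            m = pattern.search(line)
            if m:
                total += int(m.group(1))

    print(f"total rows exported: {total}")

    Compare the printed total against SELECT COUNT(*) FROM GSDBA.APPROVED_TL WHERE ITEMDATE < TO_DATE('1-JAN-2010') on the source table.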

  • Reg PLD Table Export

    Hi All,
    I have designed the PLD for one company, and I need to use the same format for most of the reports, like Purchase Order, Sales Invoice, etc. For that, I would like to export the RDOC and RITM tables to the new company DB. I am exporting with the overwrite option.
    Is that OK, or will it cause any other problem?
    Please suggest how I should proceed.
    Regards
    Suresh R

    Hi,
    I think the official way is to use the ReportLayoutsService object.
    Have a look at it: you can create XML files from your report and then load the XML file in another company.
    regards
    David

  • Include Row/Column Headers in Pivot Table Export to Excel

    I am using JDeveloper version 11.1.2.3.
    I am trying to export my pivot table to Excel using dvt:exportPivotTableData. I'd like to include column/row headers in the export, but can't seem to find a way to do that. Is there a way to do this in my JDeveloper version?


  • Table-Export to XML

    Hi all!
    I'm a newbie in BC4J, but I have to create an application with this technology.
    My problem is that one table should be exported to XML. I know that the standard application created by the wizard has this feature, but I couldn't find the source code.
    Is there anybody who knows how to implement an XML export?
    Thanx
    Peter Zerwes

    Get a reference to the ViewObject that you want to export; e.g., in an AM where Dept/Emp VOs are set up as master-detail,
    you may get the DeptVO and call one of the writeXML() APIs on it.
    See the javadoc for oracle.jbo.ViewObject and oracle.jbo.XMLInterface.writeXML().

  • Table export to csv files and blob/lob/clob data types

    Greetings,
    I'm planning to export Oracle tables to a CSV file; however, it came to my attention to ask whether BLOB/LOB/CLOB datatypes will be included during the export to the CSV file.
    DB: Oracle9i Database 9.2.0.1.0
    Regards

    Hi,
    The performance would be slow since, as far as I know, even if you go for the direct path, tables or segments dealing with LOBs will fall back to the conventional path.
    If any non-textual information exists, kindly check it before the export.
    - Pavan Kumar N
