Data Pump execution time

I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
I have some concerns about the time the export took (and therefore the time the corresponding import will take, which is typically considerably longer than the export).
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 19.87 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
. . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be spent on the line
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Q1. Is that really the line that was taking the time, or was the export actually working on the export of the table on the following line?
Q2. Will such table stats be brought in on an import? i.e., are table stats part of the dictionary, and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import to my newly created target database?
Q3. Does anyone know of any performance improvements that can be made to this export/import? I am exporting with the 10.2.0.1 source Data Pump and will be importing with the new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
thanks,
Jim

Jim,
> Q1. What difference does it make knowing the difference between how long the metadata and the actual data take on export? Is this because you could decide to exclude some objects and manually create them before import?
You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us figure out whether or not there is a problem; knowing what is slow would help narrow things down.
> With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however, for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include/exclude in the export or import (via the INCLUDE and EXCLUDE settings)?
No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to clear things up: when you say CONTENT=METADATA_ONLY, it exports everything except the data - tablespaces, grants, users, tables, statistics, etc. When you say CONTENT=DATA_ONLY, it exports only the data. You can use this method to export and import everything, but it's not the best solution for most cases. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
> Q2. If I do a DATA_ONLY export, I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant, etc. (not an attractive option)?
Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file first, then run the import on the data-only dump file.
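For illustration, a minimal sketch of that two-step import, assuming the metadata and the data were exported to separate dump files (file names and credentials are placeholders):
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=meta_only.dmp LOGFILE=imp_meta.log
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=data_only.dmp CONTENT=DATA_ONLY LOGFILE=imp_data.log
The second run loads only rows, so any indexes created by the first run have to be maintained during the load - which is exactly the slowdown described above.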
> Q3. If I use EXCLUDE=statistics, does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
Yes, you can do that. There are different statistics-gathering levels: you can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for DBMS_STATS.GATHER...
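For example, a minimal sketch using the documented DBMS_STATS package (the schema name is taken from the export log above; the option values are illustrative):
BEGIN
  -- Regather optimizer statistics for the imported schema; CASCADE => TRUE also gathers index stats
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SAPPHIRE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/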
Dean

Similar Messages

  • Data Pump takes too much time importing (13 hours)

    Hi
    I need some help with an import that takes around 13 hours to finish.
    This is the configuration:
    RAC - 2 nodes, 4 processors each
    OS HP UX
    DB Oracle 10.2.0.5.0 64 bits
    IMPDP parfile:
    USERID=xxx/xxx
    DUMPFILE=('DB_FULLEXPORT_01.dmp','DB_FULLEXPORT_02.dmp','DB_FULLEXPORT_03.dmp','DB_FULLEXPORT_04.dmp')
    LOGFILE=DB_import.log
    DIRECTORY=EXP_DIR
    SCHEMAS='DB_OWNER'
    JOB_NAME=IMPORT_DB
    PARALLEL=8
    I am comparing the time difference between a conventional import and a Data Pump import in a test database.
    The Data Pump import starts with no problem. It takes around 3 hours to import all the schema tables (a very good time), but before the import of the last table, the process takes 10 more hours to finish. We are not importing indexes yet, so we think it should be very, very fast.
    . . imported "DB_OWNER"."DEPOSITO" 9.992 KB 12 rows
    . . imported "DB_OWNER"."LIBRO_DIARIO" 42.41 GB 457598398 rows _(BEFORE THIS... 10 MORE HOURS)_+
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
    Please... Can anybody tell me why this is happening and how I can solve this problem?
    Thanks!!!
    Edited by: Adrián on 27-jul-2012 11:31

    Are you saying that it is taking about 10 hours to load 458 million rows (or about 12,722 rows a second)? That is a pretty good pace. What are your expectations on the time it should take?
    Pl see if this MOS Doc can help
    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
    HTH
    Srini
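    As an aside, you can attach to a long-running job like this and watch its progress; ATTACH and the interactive STATUS command are standard impdp features (credentials are placeholders; the job name comes from the parfile above):
    impdp xxx/xxx ATTACH=IMPORT_DB
    Import> STATUS
    The status output shows which object each worker is processing and how much has been completed, which helps pin down where the 10 hours are going.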

  • Execution time seems to be dependent on data when it shouldn't

    Hi, consider the code shown below. It generates a 2D array of random values between 0 and 1, and then builds an array with values of 0 or 1 depending on whether the random value was greater than a predefined threshold or not.
    I expected the running time to be independent of the value of the threshold, but there is a significant difference (see results below).
    Any idea why this would be? Also note the difference in running time when autoindexing and when using an array-greater-than.

    I would say the way to determine whether it is branch prediction (i.e. the sequence of comparisons) or the 64/32-bit effect (an actual data dependence, independent of the order of comparisons) that is causing this behaviour is the following experiment:
    Create 2 large, equal-sized arrays of random values: the first with random values between 0 and 1, and the second with the first half filled with random values between 0 and 0.5 and the second half with random values between 0.5 and 1. Then time how long it takes to perform a comparison on the whole array with a threshold of 0.5.
    If the cause is branch prediction, the second array will threshold faster than the first, as the branch prediction will be correct in all but 2 comparisons (at the start of the array and at the middle of the array).
    However, if the cause is the 64/32-bit suggestion, both arrays will threshold in the same amount of time.
    Well here are my results:
    On my PC there is no significant running-time difference whether it is single or double precision.
    There is a large difference, though, when I run the two same-size arrays and compare them to 0.5 - the one with the first half containing random values smaller than 0.5 and the second half random values larger than 0.5 runs 25% faster than the one with random values between 0 and 1.
    Thus my verdict - it is branch prediction that is the cause of the data-dependent running times.

  • What to do if data grows heavily. How to decrease the execution time

    Procedure execution time has increased from 1 minute to 40 minutes over the past three years. I think this is due to the increase in the size of the tables. We have been maintaining the database for the last 6 years. Nearly 25,000 records are added per month to one table, which now contains over 12 lakh (1.2 million) records. I have already used query optimization techniques. So please suggest what I should do.

    So, does this process need to access the whole table, or is it just interested in the last month's records?
    > I already used Query Optimization techniques.
    Meaning what, precisely???
    > So please suggest me what to do.
    Without knowing a whole bunch more about your situation we cannot give you a solution. You have one slow process. So what? For instance, let's presume this is a batch process that runs once a month. Does a forty-minute elapsed time matter compared to processes that are used by all your users many times every day? Are you prepared to risk the performance of those OLTP functions to improve the runtime of that batch process?
    Cheers, APC
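    If the process really only needs the last month's records, a hedged sketch of the usual approach (the table and column names are purely illustrative, not from this thread) is to restrict the driving query to the window it needs and index that column:
    -- Only touch the last month's rows instead of all six years
    SELECT *
      FROM transactions_t
     WHERE created_date >= ADD_MONTHS(SYSDATE, -1);
    With an index on created_date, the optimizer can avoid scanning the whole table on every run.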

  • Data Pump - expdp and slow performance on specific tables

    Hi there
    I have a data pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
    I have checked:
    - no lobs
    - no long/raw
    - no VPD
    - no partitions
    - no bitmapped index
    - just date, number, varchar2's
    I'm running with trace 400300.
    But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone find an explanation for the method in the trace:
    1 > direct path (i think)
    2 > external table (i think)
    4 > ?
    others?
    I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait on db file sequential read - and are doing lots and lots of SINGLEBLKRDS. No undo is read.
    I have a table of 2.5 GB -> 3 minutes,
    and then this (in my eyes) similar table of 2.4 GB -> 1½ hrs.
    There are 367,000 blocks (8 K) and avg row length = 71.
    I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
    Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
    System name:  Linux
    Node name:  tiaprod.thi.somethingamt.dk
    Release:  2.6.18-194.el5
    Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
    Machine:  x86_64
    VM name:  Xen Version: 3.4 (HVM)
    Instance name: prod
    Redo thread mounted by this instance: 1
    Oracle process number: 222
    Unix process pid: 24268, image: [email protected] (DW00)
    *** 2011-09-20 09:39:39.671
    *** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
    *** CLIENT ID:() 2011-09-20 09:39:39.671
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
    *** MODULE NAME:() 2011-09-20 09:39:39.671
    *** ACTION NAME:() 2011-09-20 09:39:39.671
    KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
    *** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
    *** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
    KUPC:09:39:39.693: Setting remote flag for this process to FALSE
    prvtaqis - Enter
    prvtaqis subtab_name upd
    prvtaqis sys table upd
    KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
    KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
    KUPW:09:39:39.820: 1: worker max message number: 1000
    KUPW:09:39:39.822: 1: Full cluster access allowed
    KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
    KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
    KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
    KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
    KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
    KUPW:09:39:39.998: 1: Max character width: 1
    KUPW:09:39:39.998: 1: Max clob fetch: 32757
    KUPW:09:39:39.998: 1: Max varchar2a size: 32757
    KUPW:09:39:39.998: 1: Max varchar2 size: 7990
    KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
    KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
    KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
    KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
    KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
    KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
    KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
    KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
    KUPW:09:39:40.005: 1: Debug enable             : TRUE
    KUPW:09:39:40.005: 1: Profile enable           : FALSE
    KUPW:09:39:40.005: 1: Transportable enable     : FALSE
    KUPW:09:39:40.005: 1: Metrics enable           : FALSE
    KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
    KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
    KUPW:09:39:40.005: 1: service name             :
    KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
    KUPW:09:39:40.005: 1: Job Edition              :
    KUPW:09:39:40.005: 1: Abort Step               : 0
    KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
    KUPW:09:39:40.005: 1: Data Options             : 0
    KUPW:09:39:40.006: 1: Dumper directory         :
    KUPW:09:39:40.006: 1: Master only              : FALSE
    KUPW:09:39:40.006: 1: Data Only                : FALSE
    KUPW:09:39:40.006: 1: Metadata Only            : FALSE
    KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
    KUPW:09:39:40.006: 1: Data error logging table :
    KUPW:09:39:40.006: 1: Remote Link              :
    KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
    KUPW:09:39:40.006: 1: Table Exists Action      :
    KUPW:09:39:40.006: 1: Partition Options        : NONE
    KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
    KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
    KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
    KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
    KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
    KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
    KUPW:09:39:40.006: 1:                   Object - TABLE
    KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
    KUPW:09:39:40.006: 1:         5           Name - ORDERED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
    KUPW:09:39:40.006: 1:         6           Name - NO_XML
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
    KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
    KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
    KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
    KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
    KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
    KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
    KUPW:09:39:40.007: 1:         1           Name - DBA
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - JOB
    KUPW:09:39:40.007: 1:         2           Name - EXPORT
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:         3           Name - PRETTY
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         7           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INDEX
    KUPW:09:39:40.007: 1:         9           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TYPE
    KUPW:09:39:40.007: 1:         10           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INC_TYPE
    KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
    KUPW:09:39:40.008: 1:                    Value - SYSTEM
    KUPW:09:39:40.008: 1:                   Object - ROLE
    KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
    KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
    KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
    KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
    KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
    KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:40.038: 1: Flags: 18
    KUPW:09:39:40.038: 1: Start sequence number:
    KUPW:09:39:40.038: 1: End sequence number:
    KUPW:09:39:40.038: 1: Metadata Parallel: 1
    KUPW:09:39:40.038: 1: Primary worker id: 1
    KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
    KUPW:09:39:40.041: 1: In procedure CREATE_MSG
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
    KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
    KUPW:09:39:40.046: 1: Created type completion for duplicate 62
    KUPW:09:39:40.046: 1: In procedure CREATE_MSG
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
    KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    *** 2011-09-20 09:39:40.325
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    *** 2011-09-20 09:39:42.603
    KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
    KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
    KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
    KUPW:09:39:42.603: 1: Nothing to remap
    KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
    KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
    KUPW:09:39:42.620: 1: flags mask: 0
    KUPW:09:39:42.620: 1: dapi_possible_meth: 1
    KUPW:09:39:42.620: 1: data_size: 3019898880
    KUPW:09:39:42.620: 1: et_parallel: TRUE
    KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
    KUPW:09:39:42.648: 1: l_client_bit_mask: 7
    KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here is says either (I thought that was method ?)  <<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
    KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
    KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
    KUPW:09:39:42.680: 1: 1 rows fetched
    KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
    KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
    KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.695: 1: Send table_data_varray returned.
    KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:39:42.695: 1: Object count: 1
    KUPW:09:39:42.697: 1: 1 completed for 62
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:39:42.697: 1: In procedure CREATE_MSG
    KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
    KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
    KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
    KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
    *** 2011-09-20 09:40:01.798
    KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:40:01.798: 1: Object seqno fetched:
    KUPW:09:40:01.799: 1: Object path fetched:
    KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
    KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
    KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
    KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:40:01.815: 1: Object count: 1
    KUPW:09:40:01.815: 1: 1 completed for 226
    KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
    KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
    KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
    KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
    KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:40:01.828: 1: Process order range: 1..1
    KUPW:09:40:01.828: 1: Method: 1
    KUPW:09:40:01.828: 1: Parallel: 1
    KUPW:09:40:01.828: 1: Creation level: 0
    KUPW:09:40:01.830: 1: BULK COLLECT called.
    KUPW:09:40:01.830: 1: BULK COLLECT returned.
    KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
    KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
    KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
    KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
    This is how I called expdp:
    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300

    Hi there ...
    I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
    But I still need an explanation for the methods (1, 2, 4, etc.)
    regards
    Mette
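    As an aside, a minimal sketch of the wait-event check mentioned in the question (v$session_event is a standard view; the worker SID bind is illustrative):
    -- Summarize waits for the Data Pump worker session, biggest first
    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = :worker_sid
     ORDER BY time_waited DESC;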

  • How to get the execution time of a Discoverer Report from qpp_stats table

    Hello
    by reading some threads on this forum I became aware of the information stored in the eul5_qpp_stats table. I would like to know if I can use this table to determine the execution time of a worksheet. In particular, it looks like the field qs_act_elap_time stores the actual elapsed time of each execution of a specific worksheet: am I correct? If so, how is this value computed? What's the unit of measure? I assume it's seconds, but then I've seen that sometimes I get numbers with decimals.
    For example I ran a worksheet and it took more than an hour to run, and the value I get in the qs_act_elap_time column is 2218.313.
    Assuming the unit of measure was seconds, that would mean approx. 37 mins. Is that the actual execution time of the query on the database? I guess the actual execution time on my Discoverer client was longer, since some calculations were performed at the client level and not on the database.
    I would really appreciate if you could shed some light on this topic.
    Thanks and regards
    Giovanni

    Thanks a lot Rod for your prompt reply.
    I agree with you about the accuracy of the data. Are you aware of any other way to track the execution times of Discoverer reports?
    Thanks
    Giovanni
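    For reference, a hedged sketch of pulling those timings out of the table discussed above (qs_act_elap_time is the column from this thread; qs_doc_name is an assumption about the EUL5 schema and may need adjusting):
    -- Worksheet runs ordered by elapsed time (assumed to be in seconds)
    SELECT qs_doc_name, qs_act_elap_time
      FROM eul5_qpp_stats
     ORDER BY qs_act_elap_time DESC;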

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a for loop, in which each file is opened one at a time, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would be nice to be able to know which section of my VI takes the longest time too. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed and you never e.g. want it to popup a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion. Do more with less code and in less time.

  • Using Data Pump when the database is read-only

    Hello
    I used flashback and returned my database to a past point in time, then I opened the database read-only.
    Then I wanted to use Data Pump (expdp) to export a schema, but I encountered this error:
    ORA-31626: job does not exist
    ORA-31633: unable to create master table "SYS.SYS_EXPORT_SCHEMA_05"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 863
    ORA-16000: database open for read-only access
    but I could export that schema with exp.
    My question is: can't I use Data Pump while the database is read-only? Or do you know any resolution for the issue?
    thanks

    You need to use NETWORK_LINK, so the required tables are created in a read/write database and the data is read from the read only database using a database link:
    SYSTEM@db_rw> create database link db_r_only
      2   connect to system identified by oracle using 'db_r_only';
    $ expdp system/oracle@db_rw network_link=db_r_only directory=data_pump_dir schemas=scott dumpfile=scott.dmp
    But I tried it with 10.2.0.4 and found an error:
    Export: Release 10.2.0.4.0 - Production on Thursday, 27 November, 2008 9:26:31
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39006: internal error
    ORA-39065: unexpected master process exception in DISPATCH
    ORA-02054: transaction 1.36.340 in-doubt
    ORA-16000: database open for read-only access
    ORA-02063: preceding line from DB_R_ONLY
    ORA-39097: Data Pump job encountered unexpected error -2054
    I found in Metalink the bug 7331929, which is solved in 11.2! I haven't tested this procedure with prior versions or with 11g, so I don't know if this bug affects only 10.2.0.4 or also 10.* and 11.1.*.
    HTH
    Enrique
    PS. If your problem was solved, consider marking the question as answered.

  • How do I prevent Windows from shutting down iTunes with a Data Protection Execution error when trying to sync with my iPhone 4S in a 32-bit Windows environment?

    I first was not able to install iTunes 10.5. The Apple upgrade installer caused a data protection execution error, and Windows reverted the process back to the original software. I then removed iTunes through the Control Panel app. I removed Bonjour and QuickTime. I could not use the Windows uninstaller to remove the Apple upgrade installer and had to find a fix-it program on the Microsoft web site to fully remove this program. After doing this, I was able to load and install iTunes 10.5. However, now when I try to sync my new 4S phone, the sync only goes to the 2nd or 3rd step before eliciting a DPE error and causing iTunes to shut down. I tried making a new user profile, opened iTunes in the new profile and began to search for music on my computer. The program again elicited a DPE error. I looked to see if Quicken was still on the computer and maybe causing a hangup, but it is no longer installed. OK, I am at wits' end and have spent much too much time on this issue. Please help.

    I was able to finally stop iTunes from crashing after many days of searching and restarting my computer many times.
    a) First I went to this site -> http://forums.techarena.in/guides-tutorials/1119812.htm, which showed me how to disable DPE. Please make sure that you have good virus etc. protection first before doing this. I have Avast Internet Security. RESTART YOUR COMPUTER.
    b) I then uninstalled iTunes by following this site -> http://support.apple.com/kb/ht1923      RESTART YOUR COMPUTER.
    c) I then downloaded a beta version of iTunes. I happened to download beta 7 before the final 10.5 version came out. Installed it. After installing, I clicked on it and it said something along the lines of "this iTunes version expired". I checked in my Programs and Features and noticed that QuickTime was installed with that beta version.
    d) Go online and download the final version of iTunes -> http://www.apple.com/itunes/download/ ...... After you finish downloading, install that. After that's done, connect your iPod, iPhone, etc. and click on iTunes; it should sync all the way.
    I was finally able to sync all the way through the steps instead of it crashing on me midway. Hopefully this will work for you. Good luck!
    iPhone 4, Gateway, Windows Vista 32-bit

  • DATA PUMP ISSUE

    Several users have objects in the tablespace I want to export.
    I want to weed out only the tablespace and the tables belonging to one user.
    The user also wants some of the tables with all the data in them and some of the tables with no data in them.
    I tried all the options
    with DATA_ONLY and METADATA_ONLY on both the import and the export, and have issues.
    I have tried exporting, and it gives me all the tables and a size to indicate it works, but on import - kazoom, hell breaks loose. Junior DBA, need help.
    SQL> select owner, table_name, tablespace_name from dba_tables where tablespace_name='USER_SEG';
    ORAPROBE OSP_ACCOUNTS USER_SEG
    PATSY TRAPP USER_SEG
    PATSY TRAPPO USER_SEG
    PATSY TRAUDIT USER_SEG
    PATSY TRCOURSE USER_SEG
    PATSY TRCOURSEO USER_SEG
    PATSY TRDESC USER_SEG
    PATSY TREMPDATA USER_SEG
    PATSY TRFEE USER_SEG
    PATSY TRNOTES USER_SEG
    PATSY TROPTION USER_SEG
    PATSY TRPART USER_SEG
    PATSY TRPARTO USER_SEG
    PATSY TRPART_OLD USER_SEG
    PATSY TRPERCENT USER_SEG
    PATSY TRSCHOOL USER_SEG
    PATSY TRSUPER USER_SEG
    PATSY TRTRANS USER_SEG
    PATSY TRUSERPW USER_SEG
    PATSY TRUSRDAT USER_SEG
    PATSY TRVARDAT USER_SEG
    PATSY TRVERIFY USER_SEG
    PATSY TRAPPO_RESET USER_SEG
    PATSY TRAPP_RESET USER_SEG
    PATSY TRCOURSEO_RESET USER_SEG
    PATSY TRCOURSE_RESET USER_SEG
    PATSY TRPARTO_RESET USER_SEG
    PATSY TRPART_RESET USER_SEG
    PATSY TRTRANS_RESET USER_SEG
    PATSY TRVERIFY_RESET USER_SEG
    MAFANY TRVERIFY USER_SEG
    MAFANY TRPART USER_SEG
    MAFANY TRPARTO USER_SEG
    MAFANY TRAPP USER_SEG
    MAFANY TRAPPO USER_SEG
    MAFANY TRCOURSE USER_SEG
    MAFANY TRCOURSEO USER_SEG
    MAFANY TRTRANS USER_SEG
    JULIE R_REPOSITORY_LOG USER_SEG
    JULIE R_VERSION USER_SEG
    JULIE R_DATABASE_TYPE USER_SEG
    JULIE R_DATABASE_CONTYPE USER_SEG
    JULIE R_NOTE USER_SEG
    JULIE R_DATABASE USER_SEG
    JULIE R_DATABASE_ATTRIBUTE USER_SEG
    JULIE R_DIRECTORY USER_SEG
    JULIE R_TRANSFORMATION USER_SEG
    JULIE R_TRANS_ATTRIBUTE USER_SEG
    JULIE R_DEPENDENCY USER_SEG
    JULIE R_PARTITION_SCHEMA USER_SEG
    JULIE R_PARTITION USER_SEG
    JULIE R_TRANS_PARTITION_SCHEMA USER_SEG
    JULIE R_CLUSTER USER_SEG
    JULIE R_SLAVE USER_SEG
    JULIE R_CLUSTER_SLAVE USER_SEG
    JULIE R_TRANS_SLAVE USER_SEG
    JULIE R_TRANS_CLUSTER USER_SEG
    JULIE R_TRANS_HOP USER_SEG
    JULIE R_TRANS_STEP_CONDITION USER_SEG
    JULIE R_CONDITION USER_SEG
    JULIE R_VALUE USER_SEG
    JULIE R_STEP_TYPE USER_SEG
    JULIE R_STEP USER_SEG
    JULIE R_STEP_ATTRIBUTE USER_SEG
    JULIE R_STEP_DATABASE USER_SEG
    JULIE R_TRANS_NOTE USER_SEG
    JULIE R_LOGLEVEL USER_SEG
    JULIE R_LOG USER_SEG
    JULIE R_JOB USER_SEG
    JULIE R_JOBENTRY_TYPE USER_SEG
    JULIE R_JOBENTRY USER_SEG
    JULIE R_JOBENTRY_COPY USER_SEG
    JULIE R_JOBENTRY_ATTRIBUTE USER_SEG
    JULIE R_JOB_HOP USER_SEG
    JULIE R_JOB_NOTE USER_SEG
    JULIE R_PROFILE USER_SEG
    JULIE R_USER USER_SEG
    JULIE R_PERMISSION USER_SEG
    JULIE R_PROFILE_PERMISSION USER_SEG
    MAFANY2 TRAPP USER_SEG
    MAFANY2 TRAPPO USER_SEG
    MAFANY2 TRCOURSE USER_SEG
    MAFANY2 TRCOURSEO USER_SEG
    MAFANY2 TRPART USER_SEG
    MAFANY2 TRPARTO USER_SEG
    MAFANY2 TRTRANS USER_SEG
    MAFANY2 TRVERIFY USER_SEG
    MAFANY BIN$ZY3M1IuZyq3gQBCs+AAMzQ==$0 USER_SEG
    MAFANY BIN$ZY3M1Iuhyq3gQBCs+AAMzQ==$0 USER_SEG
    MAFANY MYUSERS USER_SEG
    I only want the tables from PATSY and want to move them to another database for her to use, keeping the same tablespace name.
    The tables below should also have just the metadata and not the data:
    PATSY TRAPP USER_SEG
    PATSY TRAPPO USER_SEG
    PATSY TRAUDIT USER_SEG
    PATSY TRCOURSE USER_SEG
    PATSY TRCOURSEO USER_SEG
    PATSY TRDESC USER_SEG
    PATSY TREMPDATA USER_SEG
    PATSY TRFEE USER_SEG
    PATSY TRNOTES USER_SEG
    PATSY TROPTION USER_SEG
    PATSY TRPART USER_SEG
    PATSY TRPARTO USER_SEG
    PATSY TRPART_OLD USER_SEG
    PATSY TRPERCENT USER_SEG
    PATSY TRSCHOOL USER_SEG
    PATSY TRSUPER USER_SEG
    PATSY TRTRANS USER_SEG
    PATSY TRUSERPW USER_SEG
    PATSY TRUSRDAT USER_SEG
    The following will (or are supposed to) have all the data:
    PATSY TRVERIFY USER_SEG
    PATSY TRAPPO_RESET USER_SEG
    PATSY TRAPP_RESET USER_SEG
    PATSY TRCOURSEO_RESET USER_SEG
    PATSY TRCOURSE_RESET USER_SEG
    PATSY TRPARTO_RESET USER_SEG
    PATSY TRPART_RESET USER_SEG
    PATSY TRTRANS_RESET USER_SEG
    PATSY TRVERIFY_RESET USER_SEG
    I have tried all the following, and later got stuck in my little effort to document as I go along:
    Using Data Pump to export data
    First:
    Create a directory object or use one that already exists.
    I created a directory object and granted access to it:
    CREATE DIRECTORY "DATA_PUMP_DIR" AS '/home/oracle/my_dump_dir';
    GRANT read, write ON DIRECTORY DATA_PUMP_DIR TO user;
    where user is the user, such as sys, public, oracle, etc.
    For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
    SQL> create directory expdp_dir as '/u01/backup/exports';
    then grant read and write permissions to the users who will be performing the Data Pump export and import:
    SQL> grant read,write on directory expdp_dir to system, user1, user2, user3;
    http://wiki.oracle.com/page/Data+Pump+Export+(expdp)+and+Data+Pump+Import(impdp)?t=anon
    To view the directory objects that already exist:
    - use EM: under the Administration tab, go to the Schema section and select Directory Objects
    - DESC the views ALL_DIRECTORIES and DBA_DIRECTORIES, or query them:
    select * from all_directories;
    select * from dba_directories;
    Export the schema using expdp:
    expdp system/SCHMOE DUMPFILE=patsy_schema.dmp
    DIRECTORY=DATA_PUMP_DIR SCHEMAS = PATSY
    expdp system/PASS DUMPFILE=METADATA_ONLY_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLESPACES = USER_SEG CONTENT=METADATA_ONLY
    expdp system/PASS DUMPFILE=data_only_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLES=PATSY.TRVERIFY,PATSY.TRAPPO_RESET,PATSY.TRAPP_RESET,PATSY.TRCOURSEO_RESET, PATSY.TRCOURSE_RESET,PATSY.TRPARTO_RESET,PATSY.TRPART_RESET,PATSY.TRTRANS_RESET,PATSY.TRVERIFY_RESET CONTENT=DATA_ONLY

    You are correct: all the PATSY tables reside in the tablespace USER_SEG in a different database, and I want to move them to a new database - same version, 10g. I have created a user named patsy there also. The tablespace does not exist in the target I want to move them to - same thing with the objects and indexes; they don't exist in the target.
    So how can I move the schema and tablespace with all the tables, keeping the tablespace name and the same username patsy that I created to import into?
    I tried again and got some errors. This is better than last time, and that is because I used REMAP_TABLESPACE=USER_SEG:USERS.
    But I want to have the tablespace in the target called USER_SEG too. Do I have to create it first, or?
    [oracle@server1 ~]$ impdp system/blue99 remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
    Import: Release 10.1.0.3.0 - Production on Wednesday, 08 April, 2009 11:10
    Copyright (c) 2003, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Release 10.1.0.3.0 - Production
    Master table "SYSTEM"."SYS_IMPORT_FULL_03" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_03": system/******** remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    . . imported "TEST"."TRTRANS" 10.37 MB 93036 rows
    . . imported "TEST"."TRAPP" 2.376 MB 54124 rows
    . . imported "TEST"."TRAUDIT" 1.857 MB 28153 rows
    . . imported "TEST"."TRPART_OLD" 1.426 MB 7183 rows
    . . imported "TEST"."TRSCHOOL" 476.8 KB 5279 rows
    . . imported "TEST"."TRAPPO" 412.3 KB 9424 rows
    . . imported "TEST"."TRUSERPW" 123.8 KB 3268 rows
    . . imported "TEST"."TRVERIFY_RESET" 58.02 KB 183 rows
    . . imported "TEST"."TRUSRDAT" 54.73 KB 661 rows
    . . imported "TEST"."TRNOTES" 51.5 KB 588 rows
    . . imported "TEST"."TRCOURSE" 49.85 KB 243 rows
    . . imported "TEST"."TRCOURSE_RESET" 47.60 KB 225 rows
    . . imported "TEST"."TRPART" 39.37 KB 63 rows
    . . imported "TEST"."TRPART_RESET" 37.37 KB 53 rows
    . . imported "TEST"."TRTRANS_RESET" 38.94 KB 196 rows
    . . imported "TEST"."TRCOURSEO" 30.93 KB 51 rows
    . . imported "TEST"."TRCOURSEO_RESET" 28.63 KB 36 rows
    . . imported "TEST"."TRPERCENT" 33.72 KB 1044 rows
    . . imported "TEST"."TROPTION" 30.10 KB 433 rows
    . . imported "TEST"."TRPARTO" 24.78 KB 29 rows
    . . imported "TEST"."TRPARTO_RESET" 24.78 KB 29 rows
    . . imported "TEST"."TRVERIFY" 20.97 KB 30 rows
    . . imported "TEST"."TRVARDAT" 14.13 KB 44 rows
    . . imported "TEST"."TRAPP_RESET" 14.17 KB 122 rows
    . . imported "TEST"."TRDESC" 9.843 KB 90 rows
    . . imported "TEST"."TRAPPO_RESET" 8.921 KB 29 rows
    . . imported "TEST"."TRSUPER" 6.117 KB 10 rows
    . . imported "TEST"."TREMPDATA" 0 KB 0 rows
    . . imported "TEST"."TRFEE" 0 KB 0 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE UNIQUE INDEX "TEST"."TRPART_11" ON "TEST"."TRPART" ("TRPCPID", "TRPSSN") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 32768 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I2" ON "TEST"."TRPART" ("TRPLAST") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I3" ON "TEST"."TRPART" ("TRPEMAIL") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-00959: tablespace 'INDEX1_SEG' does not exist
    Failing sql is:
    CREATE INDEX "TEST"."TRPART_I4" ON "TEST"."TRPART" ("TRPPASSWORD") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "BILLZ"."TRCOURSE_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRCOURSE_I1"', NULL, NULL, NULL, 232, 2, 232, 1, 1, 7, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "BILLZ"."TRTRANS_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRTRANS_I1"', NULL, NULL, NULL, 93032, 445, 93032, 1, 1, 54063, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRAPP_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I1"', NULL, NULL, NULL, 54159, 184, 54159, 1, 1, 4597, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRAPP_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I2"', NULL, NULL, NULL, 54159, 182, 17617, 1, 2, 48776, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRAPPO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I1"', NULL, NULL, NULL, 9280, 29, 9280, 1, 1, 166, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRAPPO_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I2"', NULL, NULL, NULL, 9280, 28, 4062, 1, 2, 8401, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRCOURSEO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRCOURSEO_I1"', NULL, NULL, NULL, 49, 2, 49, 1, 1, 8, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TREMPDATA_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TREMPDATA_I1"', NULL, NULL, NULL, 0, 0, 0, 0, 0, 0, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TROPTION_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TROPTION_I1"', NULL, NULL, NULL, 433, 3, 433, 1, 1, 187, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_11" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I2" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I3" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I4" creation failed
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPART_I5" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_I5"', NULL, NULL, NULL, 19, 1, 19, 1, 1, 10, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPARTO_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I1"', NULL, NULL, NULL, 29, 1, 29, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPARTO_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I2"', NULL, NULL, NULL, 29, 14, 26, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I2"', NULL, NULL, NULL, 7180, 19, 4776, 1, 1, 7048, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I3" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I3"', NULL, NULL, NULL, 2904, 15, 2884, 1, 1, 2879, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I4" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I4"', NULL, NULL, NULL, 363, 1, 362, 1, 1, 359, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRPART_I5" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I5"', NULL, NULL, NULL, 363, 1, 363, 1, 1, 353, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPART_11" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_11"', NULL, NULL, NULL, 7183, 29, 7183, 1, 1, 6698, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRPERCENT_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPERCENT_I1"', NULL, NULL, NULL, 1043, 5, 1043, 1, 1, 99, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "DANQ"."TRSCHOOL_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRSCHOOL_I1"', NULL, NULL, NULL, 5279, 27, 5279, 1, 1, 4819, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "JULIE"."TRVERIFY_I2" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRVERIFY_I2"', NULL, NULL, NULL, 30, 7, 7, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-20000: INDEX "STU"."TRVERIFY_I1" does not exist or insufficient privileges
    Failing sql is:
    BEGIN DBMS_STATS.SET_INDEX_STATS('"STU"', '"TRVERIFY_I1"', NULL, NULL, NULL, 30, 12, 30, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "SYSTEM"."SYS_IMPORT_FULL_03" completed with 29 error(s) at 11:17

  • Need to know how to find the last execution time for a function module

    Hi all,
    I need to know:
    1) How to find out the last execution time of a function module?
    Say, for example, I executed a function module at 1:39pm. How do I retrieve this time (1:39pm)?
    2) I have created 3 billing documents in tcode VF01, i.e. 3 billing document numbers would be created in SAP table "VBRP" between 12am and 12:30am.
    How do I capture the latest SAP database update between time intervals?
    3) Suppose I am downloading a TXT file using "GUI_DOWNLOAD" and, say, on the 20th record some error happens. I can capture the error using the exception.
    Is it possible to run the program once again from the 21st record? All this will be running in the background...
    Kindly clarify....
    Points will be rewarded.
    Thanks in advance

    1. Use tcode STAT, with the tcode of the FM as input, and execute.
    2. The billing documents are created in table VBRK (header), and there will always be a creation date and time.
    VBRK-ERDAT is the creation date; you can check the time field also.
    So if you take the date and time, we can filter and then display the records in intervals.
    3. With an error exception, how is your TXT download finished?
    Once the exception is raised, there will not be a download.
    regards,
    vijay
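For point 2, a minimal sketch of the date/time interval filter (assuming the standard VBRK header fields ERDAT and ERZET; the interval values are placeholders):

    data: lt_vbrk type standard table of vbrk.
    * billing documents created today between 12:00 am and 12:30 am
    select vbeln erdat erzet
      from vbrk
      into corresponding fields of table lt_vbrk
      where erdat eq sy-datum
        and erzet between '000000' and '003000'.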

  • How to reduce execution time ?

Hi friends...
I have created a report to display vendor opening balances,
total debit, total credit, total balance and closing balance for a given date range. It is working fine, but it takes a long time to execute. How can I reduce the execution time?
Please help me. It's a very urgent report...
The coding is as below.....
    report  yfiin_rep_vendordetail no standard page heading.
    tables : bsik,bsak,lfb1,lfa1.
    type-pools : slis .
*--TABLE STRUCTURE--
    types : begin of tt_bsik,
            bukrs type bukrs,
            lifnr type lifnr,
            budat type budat,
            augdt type augdt,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
         end of tt_bsik,
         begin of tt_lfb1,
             lifnr type lifnr,
             mindk type mindk,
         end of tt_lfb1,
        begin of tt_lfa1,
            lifnr type lifnr,
            name1 type name1,
            ktokk type ktokk,
        end of tt_lfa1,
        begin of tt_opbal,
            bukrs type bukrs,
            lifnr type lifnr,
            gjahr type gjahr,
            belnr type belnr_d,
            budat type budat,
            bldat type bldat,
            waers type waers,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            blart type blart,
            monat type monat,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
            tdr type  dmbtr,
            tcr type  dmbtr,
            tbal type  dmbtr,
          end of tt_opbal,
         begin of tt_bs ,
            bukrs type bukrs,
            lifnr type lifnr,
            name1 type name1,
            prctr type prctr,
            tbal type dmbtr,
            bala type dmbtr,
            balb type dmbtr,
            balc type dmbtr,
            bald type dmbtr,
            bale type dmbtr,
            gbal type dmbtr,
        end of tt_bs.
    ************WORK AREA DECLARATION *********************
    data :  gs_bsik type tt_bsik,
            gs_bsak type tt_bsik,
            gs_lfb1 type tt_lfb1,
            gs_lfa1 type tt_lfa1,
            gs_ageing  type tt_ageing,
            gs_bs type tt_bs,
            gs_opdisp type tt_bs,
            gs_final type tt_bsik,
            gs_opbal type tt_opbal,
            gs_opfinal type tt_opbal.
    ************INTERNAL TABLE DECLARATION*************
    data :  gt_bsik type standard table of tt_bsik,
            gt_bsak type standard table of tt_bsik,
            gt_lfb1 type standard table of tt_lfb1,
            gt_lfa1 type standard table of tt_lfa1,
            gt_ageing type standard table of tt_ageing,
            gt_bs type standard table of tt_bs,
            gt_opdisp type standard table of tt_bs,
            gt_final type standard table of tt_bsik,
            gt_opbal type standard table of tt_opbal,
            gt_opfinal type standard table of tt_opbal.
************ALV DECLARATIONS *******************
    data : gs_fcat type slis_fieldcat_alv ,
           gt_fcat type slis_t_fieldcat_alv ,
           gs_sort type slis_sortinfo_alv,
           gs_fcats type slis_fieldcat_alv ,
           gt_fcats type slis_t_fieldcat_alv.
**********global data declaration***************
    data :   kb type dmbtr ,
              return like  bapireturn ,
              balancespgli like  bapi3008-bal_sglind,
              noteditems like  bapi3008-ntditms_rq,
              keybalance type table of  bapi3008_3 with header line,
             opbalance type p.
************SELECTION SCREEN DECLARATIONS *********************
    selection-screen begin of block b1 with frame .
    select-options : so_bukrs for bsik-bukrs obligatory,
                     so_lifnr for bsik-lifnr,
                     so_hkont for bsik-hkont,
                     so_prctr for bsik-prctr ,
                     so_mindk for lfb1-mindk,
                     so_ktokk for lfa1-ktokk.
    selection-screen end of block b1.
selection-screen : begin of block b2 with frame.
parameters       : p_rb1 radiobutton group rad1 .
select-options   : so_date for sy-datum .
selection-screen : end of block b2.
    ********************************ASSIGNING ALV GRID
    ****field catalog for balance report
    gs_fcats-col_pos = 1.
    gs_fcats-fieldname = 'BUKRS'.
    gs_fcats-seltext_m =  text-001.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 2 .
    gs_fcats-fieldname = 'LIFNR'.
    gs_fcats-seltext_m = text-002.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 3.
    gs_fcats-fieldname = 'NAME1'.
    gs_fcats-seltext_m =  text-003.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 4.
    gs_fcats-fieldname = 'BALC'.
    gs_fcats-seltext_m =  text-016.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 5.
    gs_fcats-fieldname = 'BALA'.
    gs_fcats-seltext_m =  text-012.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 6.
    gs_fcats-fieldname = 'BALB'.
    gs_fcats-seltext_m =  text-013.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 7.
    gs_fcats-fieldname = 'TBAL'.
    gs_fcats-seltext_m =  text-014.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 8.
    gs_fcats-fieldname = 'GBAL'.
    gs_fcats-seltext_m =  text-015.
    append gs_fcats to gt_fcats .
    data : repid1 type sy-repid.
    repid1 = sy-repid.
************INITIALIZATION EVENTS ******************************
    initialization.
    *Clearing the work area.
    clear gs_bsik.
*Refreshing the internal tables.
    refresh gt_bsik.
    ******************START OF  SELECTION EVENTS **************************
    start-of-selection.
    *get data for balance report.
      perform sub_openbal.
      perform sub_openbal_display.
    *&      Form  sub_openbal
*       text
*  -->  p1        text
*  <--  p2        text
    form sub_openbal .
      if   so_date-low > sy-datum or so_date-high > sy-datum .
          message i005(yfi02).
         leave screen.
    endif.
     select bukrs lifnr gjahr belnr budat bldat
       waers dmbtr wrbtr shkzg blart monat hkont prctr
       from bsik into corresponding fields of table gt_opbal
       where bukrs in so_bukrs and lifnr in so_lifnr
       and hkont in so_hkont and prctr in so_prctr
       and budat in so_date .
    select bukrs lifnr gjahr belnr budat bldat
       waers dmbtr wrbtr shkzg blart monat hkont prctr
       from bsak appending corresponding fields of table gt_opbal
       for all entries in gt_opbal
       where lifnr = gt_opbal-lifnr
       and budat in so_date .
    if sy-subrc <> 0.
      message i007(yfi02).
      leave screen.
      endif.
    select lifnr mindk from lfb1 into table gt_lfb1
      for all entries in gt_opbal
        where lifnr = gt_opbal-lifnr and mindk in so_mindk.
    select lifnr name1 ktokk from lfa1 into table gt_lfa1
      for all entries in gt_opbal
       where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
       loop at gt_opbal into gs_opbal .
         loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
           loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
            gs_opfinal-bukrs = gs_opbal-bukrs.
            gs_opfinal-lifnr = gs_opbal-lifnr.
            gs_opfinal-gjahr = gs_opbal-gjahr.
            gs_opfinal-belnr = gs_opbal-belnr.
            gs_opfinal-budat = gs_opbal-budat.
            gs_opfinal-bldat = gs_opbal-bldat.
            gs_opfinal-waers = gs_opbal-waers.
            gs_opfinal-dmbtr = gs_opbal-dmbtr.
            gs_opfinal-wrbtr = gs_opbal-wrbtr.
            gs_opfinal-shkzg = gs_opbal-shkzg.
            gs_opfinal-blart = gs_opbal-blart.
            gs_opfinal-monat = gs_opbal-monat.
            gs_opfinal-hkont = gs_opbal-hkont.
            gs_opfinal-prctr = gs_opbal-prctr.
            gs_opfinal-name1 = gs_lfa1-name1.
        if gs_opbal-shkzg    = 'H'.
            gs_opfinal-tcr   =  gs_opbal-dmbtr * -1.
            gs_opfinal-tdr   =  '000000'.
        else.
            gs_opfinal-tdr   =  gs_opbal-dmbtr.
            gs_opfinal-tcr   =  '000000'.
        endif.
           append gs_opfinal to gt_opfinal.
           endloop.
           endloop.
           endloop.
    sort gt_opfinal by bukrs lifnr prctr .
    so_date-low = so_date-low - 1 .
    loop at gt_opfinal into gs_opfinal.
    call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
      exporting
        companycode        = gs_opfinal-bukrs
        vendor             =  gs_opfinal-lifnr
        keydate            = so_date-low
       balancespgli        = ' '
       noteditems          = ' '
      importing
        return             = return
      tables
        keybalance         = keybalance.
    clear kb .
    loop at keybalance .
       kb = keybalance-lc_bal + kb .
    endloop.
          gs_opdisp-balc = kb.
          gs_opdisp-bukrs =  gs_opfinal-bukrs.
          gs_opdisp-lifnr =  gs_opfinal-lifnr.
          gs_opdisp-name1 =  gs_opfinal-name1.
        at new lifnr .
          sum .
          gs_opfinal-tbal =  gs_opfinal-tdr + gs_opfinal-tcr  .
          gs_opdisp-tbal = gs_opfinal-tbal.
          gs_opdisp-bala = gs_opfinal-tdr .
          gs_opdisp-balb = gs_opfinal-tcr .
          gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
          append gs_opdisp to gt_opdisp.
        endat.
        clear gs_opdisp.
        clear keybalance .
      endloop.
      delete adjacent duplicates from gt_opdisp.
    endform.                    " sub_openbal
    *&      Form  sub_openbal_display
*       text
*  -->  p1        text
*  <--  p2        text
    form sub_openbal_display .
call function 'REUSE_ALV_GRID_DISPLAY'
    exporting
*     I_INTERFACE_CHECK                 = ' '
*     I_BYPASSING_BUFFER                = ' '
*     I_BUFFER_ACTIVE                   = ' '
      i_callback_program                = repid1
*     I_CALLBACK_PF_STATUS_SET          = ' '
*     I_CALLBACK_USER_COMMAND           = ' '
*     I_CALLBACK_TOP_OF_PAGE            = ' '
*     I_CALLBACK_HTML_TOP_OF_PAGE       = ' '
*     I_CALLBACK_HTML_END_OF_LIST       = ' '
*     I_STRUCTURE_NAME                  =
*     I_BACKGROUND_ID                   = ' '
*     I_GRID_TITLE                      =
*     I_GRID_SETTINGS                   =
*     IS_LAYOUT                         =
      it_fieldcat                       = gt_fcats
*     IT_EXCLUDING                      =
*     IT_SPECIAL_GROUPS                 =
*     IT_SORT                           =
*     IT_FILTER                         =
*     IS_SEL_HIDE                       =
*     I_DEFAULT                         = 'X'
*     I_SAVE                            = 'X'
*     IS_VARIANT                        =
*     IT_EVENTS                         =
*     IT_EVENT_EXIT                     =
*     IS_PRINT                          =
*     IS_REPREP_ID                      =
*     I_SCREEN_START_COLUMN             = 0
*     I_SCREEN_START_LINE               = 0
*     I_SCREEN_END_COLUMN               = 0
*     I_SCREEN_END_LINE                 = 0
*     IT_ALV_GRAPHICS                   =
*     IT_HYPERLINK                      =
*     IT_ADD_FIELDCAT                   =
*     IT_EXCEPT_QINFO                   =
*     I_HTML_HEIGHT_TOP                 =
*     I_HTML_HEIGHT_END                 =
*   importing
*     E_EXIT_CAUSED_BY_CALLER           =
*     ES_EXIT_CAUSED_BY_USER            =
    tables
      t_outtab                          = gt_opdisp
    exceptions
      program_error                     = 1
      others                            = 2.
  if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.                    " sub_openbal_display

I think you are using the FOR ALL ENTRIES addition in almost all of your SELECT statements, but I don't see any check before the FOR ALL ENTRIES statements.
If you are using FOR ALL ENTRIES IN gt_opbal, make sure that gt_opbal has some records; otherwise the SELECT will try to read all records from the database table.
Check before using FOR ALL ENTRIES in the SELECT statement, like this (field and table names are placeholders):
if gt_opbal is not initial.
  select f1 f2 f3 into table itab
    from dbtab
    for all entries in gt_opbal
    where key = gt_opbal-key.
else.
  select f1 f2 into table itab
    from dbtab
    where a = 1
      and b = 2.
endif.
I don't see anything else wrong in your report, but this is a major time consumer when the table you use FOR ALL ENTRIES on has no records.
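Applied to the report above, the guard might look like this (a sketch reusing one of the poster's own SELECTs):

    if gt_opbal is not initial.
      select lifnr mindk from lfb1 into table gt_lfb1
        for all entries in gt_opbal
        where lifnr = gt_opbal-lifnr and mindk in so_mindk.
    endif.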

  • Reduce execution time with selects

    Hi,
I have to reduce the execution time of a report; most of the time is consumed in the SELECT query.
    I have a table, gt_result:
    DATA: BEGIN OF gwa_result,
          tknum            LIKE vttk-tknum,
          stabf            LIKE vttk-stabf,
          shtyp            LIKE vttk-shtyp,
          route            LIKE vttk-route,
          vsart            LIKE vttk-vsart,
          signi            LIKE vttk-signi,
          dtabf            LIKE vttk-dtabf,
          vbeln            LIKE likp-vbeln,
          /bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
          vkorg            LIKE likp-vkorg,
          werks            LIKE likp-werks,
          regio            LIKE kna1-regio,
          land1            LIKE kna1-land1,
          xegld            LIKE t005-xegld,
          intca            LIKE t005-intca,
          bezei            LIKE tvrot-bezei,
          bezei1           LIKE t173t-bezei,
          fecha(10) type c.
    DATA: END OF gwa_result.
    DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
    And the select query is this:
  SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
         k~dtabf
         l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
         t~bezei tt~bezei
  FROM vttk AS k
  INNER JOIN vttp  AS p ON k~tknum = p~tknum
  INNER JOIN likp  AS l ON p~vbeln = l~vbeln
  INNER JOIN kna1  AS n ON l~kunnr = n~kunnr
  INNER JOIN t005  AS o ON n~land1 = o~land1
  INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
  INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
  INTO TABLE gt_result
  WHERE k~tknum IN s_tknum AND k~tplst IN s_tplst AND k~route IN s_route AND
     k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
     l~/bshm/le_nr_cust <> ' '    "IS NOT NULL
     AND k~stabf = 'X'
     AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
                          INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
                          INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
                          WHERE tl~/bshm/le_nr_cust IS NULL )
     AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
     AND ( o~xegld = ' '
           OR ( o~xegld = 'X' AND
                ( ( n~land1 = 'ES'
                    AND ( n~regio = '51' OR n~regio = '52'
                          OR n~regio = '35' OR n~regio = '38' ) )
                  OR n~land1 = 'ESC' ) )
           OR o~intca = 'AD' OR o~intca = 'GI' ).
Does somebody know how to reduce the execution time?
    Thanks.

Hi,
Try to remove the joins. Use separate SELECTs as shown in the example below, and for the sake of the later selections keep some key fields in your internal tables.
Then once your final table is created, you can copy it into GT_FINAL, which will contain only the fields you need.
Example:
data : begin of it_likp occurs 0,
         vbeln like likp-vbeln,
         /bshm/le_nr_cust like likp-/bshm/le_nr_cust,
         vkorg like likp-vkorg,
         werks like likp-werks,
         kunnr like likp-kunnr,
       end of it_likp.
data : begin of it_kna1 occurs 0,
         kunnr like kna1-kunnr,
         regio like kna1-regio,
         land1 like kna1-land1,
       end of it_kna1.
select tknum stabf shtyp route vsart signi dtabf
  from vttk
  into corresponding fields of table gt_result
  where tknum in s_tknum and
        tplst in s_tplst and
        route in s_route and
        erdat between s_erdat-low and s_erdat-high.
    select vbeln /bshm/le_nr_cust
           vkorg werks kunnr
           from likp
           into table it_likp
           for all entries in gt_result
           where vbeln = gt_result-vbeln.
select kunnr
       regio
       land1
       from kna1
       into table it_kna1
       for all entries in it_likp
       where kunnr = it_likp-kunnr.
Similarly for the other tables.
Then loop at gt_result, read the corresponding tables and populate the entire record:
loop at gt_result.
  read table it_likp with key vbeln = gt_result-vbeln.
  if sy-subrc eq 0.
    move-corresponding it_likp to gt_result.
    gt_result-kunnr = it_likp-kunnr.
    modify gt_result.
  endif.
  read table it_kna1 with key kunnr = gt_result-kunnr.
  if sy-subrc eq 0.
    gt_result-regio = it_kna1-regio.
    gt_result-land1 = it_kna1-land1.
    modify gt_result.
  endif.
endloop.
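As a small refinement (a sketch, assuming the lookup tables are sorted once before the loop), sorting IT_LIKP and IT_KNA1 and reading them with BINARY SEARCH turns each lookup into a binary search instead of a linear scan:

    sort it_likp by vbeln.
    sort it_kna1 by kunnr.
    * inside the loop, replace the linear reads with:
    read table it_likp with key vbeln = gt_result-vbeln binary search.
    read table it_kna1 with key kunnr = gt_result-kunnr binary search.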

  • Oracle - select execution time

    hi all,
when I executed SQL SELECT statements (with joins and so on), I observed the following behaviour.
Query execution times were:
    for 1000 records - 4 sec
    5000 records - 10 sec
    10000 records - 7 sec
    25000 records - 16 sec
    50000 records - 33 sec
I tested this behaviour with different sets of SQL on different sets of data, and in each case the behaviour was more or less the same.
Can anyone explain why Oracle takes more time to return 5000 records than it takes for 10000?
Please note that this does not seem to be specific to the SQL itself, since I tested with different statements on different data.
Can there be an Oracle-internal reason that explains this behaviour?
    regards
    at

    That is not normal behaviour. I've never come across anything like that that wasn't explainable by some environment factor (e.g. someone else doing a big sort).
    I ran a couple of tests:
    (1) to insert 5000 rows 0.1 seconds
    to insert 10000 rows 0.18 seconds
    and
    (2) to select 5000 rows joined with 200K row table 0.19 seconds
    to select 10000 rows joined with 200K row table 0.2 seconds
    Although the second is close, I grant you!
    Cheers, APC
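To rule out caching and environment effects when comparing such runs, a quick sketch using standard SQL*Plus settings (run each query a few times and compare the later runs):

    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    -- run the query under test; the consistent gets and physical reads
    -- in the statistics show whether the extra time is really spent in the database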

  • How to consolidate data files using data pump when migrating 10g to 11g?

We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large chunks in one file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know there is a "REMAP" option that could be used, but it only does one-to-one mapping. How can I map multiple old data files into one new data file?

hi
datapump is terribly slow; make sure you have as much memory as possible allocated for Oracle, but the bottleneck can also be I/O throughput.
Use the PARALLEL option, and also set these:
* DISK_ASYNCH_IO=TRUE
* DB_BLOCK_CHECKING=FALSE
* DB_BLOCK_CHECKSUM=FALSE
and set these high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    more:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
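For illustration, the import invocation under those settings might look like this (a sketch; directory, file names and the parallel degree are placeholders, not from this thread):

    impdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=4 LOGFILE=imp_full.log

The %U substitution variable matters mainly on the export side, where the dump file set should contain at least as many files as the parallel degree so the workers are not serialized on a single file.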
    that's it, patience welcome ;-)
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
    P.S.2
    breaking news ;-)
I am playing now with storage performance, and I turned the disk cache option (also called write-back cache) ON (it goes along with at least RAID 0 and RAID 5, and by setting it you don't lose any data on that volume) - and it gave me a 1.5 to 2 times speed-up!
Some say there's a risk of losing more data when an outage happens, but there's always such a risk; this way you may simply lose less. Anyway, if you can afford it (and with an import it's OK, as it is not production at that moment), I recommend trying it. It takes 15 minutes, but you can gain 2.5 hours out of 10 hours of normal importing.
