Export/Import subpartition stats

I hope someone can give me a workaround for this, because it's causing our reports to take longer than they should!
Background:
We have some sub-partitioned tables on a 10.2.0.3 database, partitioned daily on the date column, with the subpartitions based on a list of values.
Overnight, various reports are run. Each report loads its data into the table, and then produces a file based on the data that's been loaded for that report. It is not practical (IMO) to analyze the tables after each report has loaded its data, due to other reports loading their data at the same time.
As the amount of data loaded into the tables each night does not vary significantly, we export the stats from a previous partition and import them into the new partition as part of the partition housekeeping job (stats exported from the old partition, old partition dropped, new partition created with the same name as the old one, and the stats imported). This is done using dbms_stats.export_table_stats and dbms_stats.import_table_stats.
However, one report, which currently loads 43 million rows, is taking 4.5 hours to run. The size of the load file increases daily, but looking at the history of the report, each relatively small increase causes the report to run disproportionately longer (i.e. a similar increase in rows on one night can add twice as much time to the report as the previous night's increase did).
We've just implemented some changes to improve the buffer sizes, etc., on the database in a bid to reduce some of the waits, but this has not improved matters much - the report now runs in 4 hours.
We know this report can run faster, because in testing, we saw the report run in 60 minutes! Subsequent investigation shows that this was after the partitions had been analyzed, whereas the slow report ran prior to the partitions being analyzed, despite the stats being there for the partition.
I have now tested the export/import stats process and found that it does not import the stats for the subpartitions. This looks like a large part of why the report takes longer before the relevant partitions/subpartitions have been analyzed than it does afterwards.
Does anyone know of any way that I can export/import the stats at subpartition level? (I tried putting a subpartition name in the partition parameter, but I just got an error about it being an unknown partition name.)
Any help, ideas or workarounds on this will be gratefully received!
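For anyone searching later: one workaround I'm looking at is to export the stats for the whole table (a table-level export writes the partition and subpartition rows into the stat table, unlike a PARTNAME-level export on 10.2), then trim the stat table down to the one partition before importing. This is a sketch only, with made-up names, and the stat table's C2/C3 columns (partition and subpartition name) are undocumented internals, so verify them on your own version first:

  BEGIN
    DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYSCHEMA', stattab => 'STATTAB');
    DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE',
                                  stattab => 'STATTAB', statid  => 'ROLL');
  END;
  /

  -- Keep only the rows for partition P_OLD and its subpartitions
  -- (table-level rows, which have C2 NULL, are removed here too).
  DELETE FROM myschema.stattab
   WHERE statid = 'ROLL'
     AND (c2 IS NULL OR c2 <> 'P_OLD');

  -- After the housekeeping job recreates the partition under the same
  -- name, pull the stats back in; the subpartition rows come with them.
  BEGIN
    DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE',
                                  stattab => 'STATTAB', statid  => 'ROLL');
  END;
  /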

*** Duplicate Post - Please Ignore ***

Similar Messages

  • Export import stats

    Hi
    Oracle 9i
    If I export entire schema stats, can I import only specific index stats?

    hi
    exec DBMS_STATS.export_schema_stats('&usrname','&tabname')
This syntax works on 10g. It may fail on 8i/9i databases with some objects; that's why I prefer the script below on those versions.
Generate a script that imports statistics on the clone database. The purpose of this script is to generate one import-statistics command per table; the source is the statistics table created in step 1.
&tabname = the table created in the previous step to hold the statistics
&usrname = the name of the owner of &tabname
---- script to generate import table stats starts here ----------
set linesize 130 pagesize 0
spool impstats.sql
select 'exec dbms_stats.import_table_stats('
       ||chr(39)||owner||chr(39)||','
       ||chr(39)||table_name||chr(39)||',null,'
       ||chr(39)||'&tabname'||chr(39)||',null,true,'
       ||chr(39)||'&usrname'||chr(39)||')'
from dba_tables
where owner = '&usrname';
spool off
Please also check the full document:
http://blogs.oracle.com/AlejandroVargas/gems/HowtoExportandImportStatisti.pdf
Hope this helps.
Zekeriya
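A note on the original question: a schema-level export writes one row per object into the stat table, so a single index's statistics can be pulled back on their own with DBMS_STATS.IMPORT_INDEX_STATS. A minimal sketch, with made-up owner/index/stat-table names:
BEGIN
  DBMS_STATS.IMPORT_INDEX_STATS(ownname => 'SCOTT',
                                indname => 'EMP_PK',
                                stattab => 'STATTAB',
                                statown => 'SCOTT');
END;
/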

  • Using set/get parameters or export/import in BSP.

    Hi All,
    Is it possible to use set/get or export/import in BSP?
We need to set/export some variables from a BADI and get/import them in the BSP application.
A code snippet would be of great help.
    Thanks,
    Anubhav

    Hi Anubhav,
You can use the EXPORT/IMPORT statements for your requirement: from the BADI, use EXPORT to send the variable data to a unique memory location with an ID, e.g.
* Data declaration required for background processing
DATA: WA_INDX TYPE INDX.
* Here CNAME is the variable you want to export
EXPORT PNAME = CNAME TO DATABASE INDX(XY) FROM WA_INDX
       CLIENT SY-MANDT ID 'ZVAR1'.
In the BSP application, use the IMPORT statement to fetch back the values stored under the ID above:
    IMPORT PNAME = LV_CNAME
      FROM DATABASE INDX(XY) TO WA_INDX CLIENT
      SY-MANDT ID 'ZVAR1'.
Delete the data afterwards to avoid wasting memory:
      DELETE FROM DATABASE INDX(XY)
        CLIENT SY-MANDT
        ID 'ZVAR1'.
    Regards,
    Samson Rodrigues

  • Problem with EXPORT IMPORT PROCESS in ApEx 3.1

    Hi all:
    I'm having a problem with the EXPORT IMPORT PROCESS in ApEx 3.1
When I export an application and then try to import it again, I get this error message:
    ORA-20001: GET_BLOCK Error. ORA-20001: Execution of the statement was unsuccessful. ORA-06550: line 16, column 28: PLS-00103: Encountered the symbol "牃慥整㈰㈯⼴〲㐰〠㨷㐵㈺′䵐" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp in
As a workaround, I checked the exported file and found this:
    wwv_flow_api.create_flow
    p_documentation_banner=> '牃慥整⠤㈰㈯⼴〲㠰〠㨷㠵㈺′äµ
When I replace it with this:
    p_documentation_banner=> ' ',
    I can import the application without the error.
Does somebody know why I have to do this?
    Thank you all.
    Nicolas.

    Hi,
    This issue seems to have been around for a while:
    Re: Error importing file
    I've had similar issues and made manual changes to the file to get it to install correctly. In my case, I got:
ORA-20001: GET_BLOCK Error. ORA-20001: Execution of the statement was unsuccessful.
ORA-02047: cannot join the distributed transaction in progress
begin execute immediate 'alter session set nls_numeric_characters='''||wwv_flow_api.g_nls_numeric_chars||'''';end;
There are several suggestions, if you follow that thread, about character sets or reviewing some of the line breaks within the pl/sql code in your processes, etc. Not sure what would work for you.

  • Regarding Distribution Monitor for export/import

    Hi,
We are planning to migrate a 1.2TB database from Oracle 10.2 to MaxDB 7.7, and are currently testing the migration on a test system.
First we tried a simple export/import, i.e. without Distribution Monitor: the export completed in 16 hrs, but the import had been running for more than 88 hrs, so we aborted it. We then found that Distribution Monitor can distribute the export/import load across multiple systems so that the import completes within a reasonable time. Using 2 application servers for the export/import, the export completed within 14 hrs, but again the import ran for more than 80 hrs and we aborted it. We also did table splitting for the big tables, but no luck. 8 parallel processes were running on each server, i.e. one CI and 2 app servers. We followed the DistributionMonitorUserGuide document from SAP.
I observed that on the central system, CPU and memory utilization was above 94%, but on the 2 application servers we added, CPU and memory utilization was very low, around 10%. Please find the system configuration below:
    Central Instance - 8CPU (550Mhz) 32GB RAM
    App Server1 - 8CPU (550Mhz) 16GB RAM
    App Server2 - 8CPU (550Mhz) 16GB RAM
Also, when I used the Unix top command on the app servers, I could see only one R3load process in the run state while the other 7 R3load processes were sleeping; on the central instance, all 8 R3load processes were in the run state. I think the app servers not running all 8 R3load processes at a time could be the reason for the very slow import.
Can someone please let me know how to improve the import time? Also, if someone has done a database migration from Oracle 10.2 to MaxDB, it would be helpful to hear how they did it, and whether any specific document is available for migrating from Oracle to MaxDB.
    Thanks,
    Narendra

> Also, when I used the Unix top command on the app servers, I could see only one R3load process in the run state while the other 7 R3load processes were sleeping; on the central instance, all 8 R3load processes were in the run state. I think the app servers not running all 8 R3load processes at a time could be the reason for the very slow import.
> Can someone please let me know how to improve the import time?
R3load connects directly to the database and loads the data. The question here is: how is your database configured (in terms of caches and memory)?
> Also, if someone has done a database migration from Oracle 10.2 to MaxDB, it would be helpful to hear how they did it, and whether any specific document is available for migrating from Oracle to MaxDB.
There are no such documents available, since the process of migrating to another database is called a "heterogeneous system copy". This process requires a certified migration consultant to be on-site to do/assist the migration. Those consultants are specially trained for certain databases and know tips and tricks for improving the migration time.
    See
    http://service.sap.com/osdbmigration
    --> FAQ
    For MaxDB there's a special service available, see
    Note 715701 - Migration to SAP DB/MaxDB
    Markus

  • Export/Import and Client Copy

    Hi All,
Could you please help me understand the major differences between export/import, client copy, and a system refresh? How do they differ from each other?
    Regards
    Rajesh

    Hello Rajesh ,
I have captured some information from SAP Help:
    Local Copy: Copying Clients Within a System:
You can improve the performance of the client copy by, for example, excluding tables or packages via Edit -> Expert Settings.
You can exclude tables from the client copy (for example, if they are not relevant for the target client) on the Tables tab.
    Copying Clients Between Systems (Remote Copy):
    The same product is installed, with the same release, in both systems
    The client copier can copy a client into another system. The systems can be on different platforms. You can change the client number.
    When you copy a client from one system to another, the data is transferred directly via the RFC interface - there is no intermediate storage on hard disk.
    Transporting Clients Between Systems ;Client export (SCC8):
    The client copier can copy a client into another system, which can be on a different platform. You can change the client number.
    You are no longer required to transport clients before you can copy them between systems. You can make a remote copy instead.
    Up to three transport requests are created, depending on the selected copy profile and the existing data.
    The transport request for texts is e.g. only created if the source client contains customer texts.
    <sid>KO<no>  cross-client data
    <sid>KT<no>  client-specific data
    <sid>KX<no>  texts and forms
The data export is performed automatically and asynchronously. The output of the export includes the names of the transport requests that are to be imported.
    Import transport requests into the target client (STMS)
    Choose one of the transport requests of the client transport in the Transport Management System (TMS). The other transport requests belonging to this client transport are then automatically added in the correct order.
    Import these transport requests into the target client.
Client import postprocessing (SCC7)
    You need to perform postprocessing activities to adapt the runtime environment to the current state of the data.
    Copy by Transport Request :
    This function transports customizing changes that have been recorded in a transport request between two clients in a system.
    You can choose whether you only copy the object list of the request or also the object lists of unreleased tasks in the request.
    Entries in the target client are overwritten or deleted according to the key entries in the transport request.
    Choose Administration -> System administration -> Administration -> Client admin. ->Special Functions -> Copy Transport Request.
For more info, see: http://help.sap.com/saphelp_nw70/helpdata/en/69/c24c0f4ba111d189750000e8322d00/frameset.htm
    Regards ,
    Santosh Karadkar

  • Migrate Database- Export/Import

    Hi,
    I need to migrate an Oracle database 9i from Sun Solaris to Linux. The final target database version would be 11g.
Since this is a 9i database, I see that our only option is export and import. We have around 15 schemas.
    I have some queries related to it.
1. If I perform an export with full=y rows=y, does it export SYS and SYSTEM schema objects also?
2. Can we perform an export in Oracle 9i and use Data Pump import on the target 11g?
3. What is the best approach - a) to perform a schema-by-schema export, or b) to perform a full database export with exp / file=xxx.dmp log=xxxx.log full=y?
Since there is a database version difference, I don't want to touch SYS and SYSTEM schema objects.
    Appreciate your thoughts.
    Regards
    Cherrish Vaidiyan

    Hi,
Let me try to answer some of the questions you asked:
1. If I perform an export with full=y rows=y, does it export SYS and SYSTEM schema objects also?
Export won't export SYS objects. For example, there are tables in SYS, like obj$, that contain information for other metadata objects, like scott.emp, etc. These are not exported, because when scott.emp is exported, the data from obj$ is essentially exported that way. When the dumpfile is imported and scott.emp is recreated, the data in sys.obj$ will be restored through the create table statement. As far as the SYSTEM schema is concerned, some objects are exported and some are not. There are tables in SYSTEM that contain information about queues, jobs, etc. These would probably not make any sense on the target system, so those types of tables are excluded from the export job. Other objects make sense to export/import, so those are done. This is all figured out in the internals of export/import. There are other schemas that are not exported; some that I can think of are DMSYS, ORDSYS, etc. This is for the same reason as SYS.
2. Can we perform an export in Oracle 9i and use Data Pump import on the target 11g?
No, the dumpfiles are formatted differently. If you use exp, then you must use imp. If you use expdp, then you must use impdp. You can do exp on 9i and imp on 11g with the dumpfile that was created on 9i.
3. What is the best approach - a) to perform a schema-by-schema export, or b) to perform a full database export with exp / file=xxx.dmp log=xxxx.log full=y?
This is a case-by-case decision. It depends on what you want. If you want the complete database moved, then I would personally think that full=y is what you would want to do. If you just did schema exports, then you would never export the tablespaces, which would mean that you would have to create the tablespaces on the target system before you ran imp. There are other objects that are not exported when a schema-level export is performed that are exported when a full export is performed. This information can be seen in the utilities guide; look at what is exported in user/schema mode vs. full/database mode.
Since there is a database version difference, I don't want to touch SYS and SYSTEM schema objects.
This is all done for you by the internal workings of exp/imp.
    Dean
    Edited by: Dean Gagne on Jul 29, 2009 8:38 AM
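For question 2 above, the compatible pairing in command-line terms looks like this (a sketch only; the passwords and file names are made up):
# Run the export with the 9i exp utility against the source database...
exp system/<password> FULL=y FILE=full9i.dmp LOG=exp_full.log
# ...then run the import with the 11g imp utility against the target.
imp system/<password> FULL=y FILE=full9i.dmp LOG=imp_full.log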

  • Using export/import to migrate data from 8i to 9i

We are trying to migrate all data from an 8i database to a 9i database. We plan to migrate the data using the export/import utility so that we can keep the current 8i database intact. The 8i and 9i databases will reside on the same machine. Our 8i database size is around 300GB.
We plan to follow the steps below:
    Export data from 8i
    Install 9i
    Create tablespaces
    Create schema and tables
    create user (user used for exporting data)
    Import data in 9i
Please let me know if the par file below is correct for the export:
    BUFFER=560000
    COMPRESS=y
    CONSISTENT=y
    CONSTRAINTS=y
    DIRECT=y
    FEEDBACK=1000
    FILE=dat1.dmp, dat2.dmp, dat3.dmp (more filenames here)
    FILESIZE=2048GB
    FULL=y
    GRANTS=y
    INDEXES=y
    LOG=export.log
    OBJECT_CONSISTENT=y
    PARFILE=exp.par
    ROWS=y
    STATISTICS=ESTIMATE
    TRIGGERS=y
    TTS_FULL_CHECK=TRUE
    Thanks,
    Vinod Bhansali

I recommend changing some parameters and removing others:
BUFFER=560000
COMPRESS=y          -- This gives a better storage structure (it is good)
CONSISTENT=y
CONSTRAINTS=y
DIRECT=n            -- If you set this parameter to yes, you can have problems with some objects
FEEDBACK=1000
FILE=dat1.dmp, dat2.dmp, dat3.dmp (more filenames here)
FILESIZE=2048GB
FULL=y
GRANTS=y            -- This value is the default (it is not necessary)
INDEXES=y
LOG=export.log
OBJECT_CONSISTENT=y -- (start the database in restricted mode and do not set this param)
PARFILE=exp.par
ROWS=y
STATISTICS=ESTIMATE -- This value is the default (it is not necessary)
TRIGGERS=y          -- This value is the default (it is not necessary)
TTS_FULL_CHECK=TRUE
You can see which parameters are not needed by running this command:
    [oracle@ozawa oracle]$ exp help=y
    Export: Release 9.2.0.1.0 - Production on Sun Dec 28 16:37:37 2003
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    You can let Export prompt you for parameters by entering the EXP
    command followed by your username/password:
    Example: EXP SCOTT/TIGER
    Or, you can control how Export runs by entering the EXP command followed
    by various arguments. To specify parameters, you use keywords:
    Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
    or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
    USERID must be the first parameter on the command line.
    Keyword Description (Default) Keyword Description (Default)
    USERID username/password FULL export entire file (N)
    BUFFER size of data buffer OWNER list of owner usernames
    FILE output files (EXPDAT.DMP) TABLES list of table names
    COMPRESS import into one extent (Y) RECORDLENGTH length of IO record
    GRANTS export grants (Y) INCTYPE incremental export type
    INDEXES export indexes (Y) RECORD track incr. export (Y)
    DIRECT direct path (N) TRIGGERS export triggers (Y)
    LOG log file of screen output STATISTICS analyze objects (ESTIMATE)
    ROWS export data rows (Y) PARFILE parameter filename
    CONSISTENT cross-table consistency(N) CONSTRAINTS export constraints (Y)
    OBJECT_CONSISTENT transaction set to read only during object export (N)
    FEEDBACK display progress every x rows (0)
    FILESIZE maximum size of each dump file
    FLASHBACK_SCN SCN used to set session snapshot back to
    FLASHBACK_TIME time used to get the SCN closest to the specified time
    QUERY select clause used to export a subset of a table
    RESUMABLE suspend when a space related error is encountered(N)
    RESUMABLE_NAME text string used to identify resumable statement
    RESUMABLE_TIMEOUT wait time for RESUMABLE
    TTS_FULL_CHECK perform full or partial dependency check for TTS
    VOLSIZE number of bytes to write to each tape volume
    TABLESPACES list of tablespaces to export
    TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
    TEMPLATE template name which invokes iAS mode export
    Export terminated successfully without warnings.
    [oracle@ozawa oracle]$
Joel Pérez

  • EXPORT / IMPORT  TO/FROM SHARED BUFFER

    Hello all,
    I am facing a problem with the EXPORT/IMPORT to SHARED BUFFER statements.
    In my report program , I export data to the shared memory.
I then call a transaction to park an accounting document.
    The BTE 2218 gets triggered in the process. Here the IMPORT works fine.
    Later, there is a standard function module which is called IN UPDATE TASK.
    Within this, the IMPORT statement fails.
    It works on one server but not on another.
    Notes :
The IMPORT works in debugging mode but fails if I simply run the program.
Another point is that the ID used for identifying the shared memory uses sy-uname.
Can the visibility of sy-uname in UPDATE TASK be controlled by settings?
    Any ideas on this ?
    Please don't copy paste the help on SHARED BUFFER etc.
    Thanks in advance.

    Hi Mariano,
the issue is due to multiple servers being present; SHARED MEMORY is specific to each application server.
So if we export data into shared memory in program A, we have to be sure that program B (or the FM) called in a background or update task by program A runs on the same application server.
The problem is that when program A calls program B or the FM in a background or update task, scheduling is dynamic across all application servers that have batch work processes, and not always the same application server as the calling program A; so program B may run on another application server, which has a different shared memory.
The solution will be:
To force program B to run on the same application server as the calling program A, by passing the sy-host of calling program A to the function module JOB_CLOSE, parameter TARGETSERVER; OR
Instead of using SHARED MEMORY, use the DATABASE:
        EXPORT itab FROM itab TO DATABASE indx(ar) CLIENT sy-mandt ID job_number in program A, where job_number is unique.
        Then IMPORT itab TO itab FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number in program B, where job_number is passed from program A to B.
        Then DELETE FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number.
    Regards,
    Vignesh Yeram

  • Export/import to/from database

    hi,
I do not know what this is for. I read the help but still have no idea.
I know how to use EXPORT/IMPORT TO/FROM MEMORY ID, and also SET/GET parameters, but not EXPORT/IMPORT TO/FROM DATABASE.
1) The help says it stores a data cluster. What is a data cluster?
2) Can I have an example of EXPORT/IMPORT TO/FROM DATABASE?
3) What is the difference between EXPORT/IMPORT TO/FROM MEMORY ID and TO/FROM DATABASE?
    thanks

    Hi,
1) A data cluster is a set of data objects grouped together for the purpose of storage in a storage medium (like the database, local memory, or shared memory), which can only be edited using ABAP statements like EXPORT, IMPORT and DELETE.
2) Suppose you want to export the data in an internal table to the database. Here TABLE_ID is the identifier of your data cluster: you are exporting your data to the SAP table INDX, giving it the ID TABLE_ID, within an area named XY where your data will be stored.
EXPORT tab = itab
  TO DATABASE indx(xy)
  CLIENT '000'
  ID 'TABLE_ID'.
    You can get this data as follows.
    IMPORT tab = itab
      FROM DATABASE indx(xy)
      CLIENT '000'
      ID 'TABLE_ID'.
3) The difference is simple: when you use MEMORY ID, the data you export is stored in the application server's memory, whereas in the case of DATABASE it is stored in the database.
    Regards,
    Sesh

  • EXPORT/IMPORT  to MEMORY

    Hi,
I want to know whether a parameter ID of the "export/import to memory" instruction is available in two different sessions with different users' logins.
    tks
    Carlos

    The use of the shared buffer may be of some interest to you.
From F1 help:
EXPORT obj1 ... objn TO SHARED BUFFER dbtab(ar) ID key.
    Additions:
    1. ... = f (for each field you want to export)
    2. ... FROM f (for each field you want to export)
    3. ... CLIENT g (before ID key)
    4. ... FROM wa (as last addition or after dbtab(ar))
In an ABAP Objects context, a more severe syntax check is performed than in other ABAP areas. See Implicit field names not allowed in clusters and Table work areas not allowed.
Effect
Stores a data cluster in the cross-transaction application buffer. The specified objects obj1 ... objn (fields, structures, or tables) are stored as a single cluster in the buffer.
    The specified table dbtab must have a standard structure.
    The buffer area for the table dbtab is divided into various logically-related areas (ar, two-character ID).
    You can export a collection of data objects (data cluster) to an area of the buffer under a key of your own choosing (key field).
    You can import individual data objects from this collection using the IMPORT statement (as long as the data has not been deleted from the buffer).
    Notes
    In classes, you must always specify explicit names for the data objects. Addition 1 or addition 2 is therefore obligatory.
    In classes, you must always specify the work area explicitly. Addition 4 is therefore obligatory.
    The table dbtab that you specify after SHARED BUFFER must be declared under TABLES (except in addition 4).
    You cannot export the header line of an internal table. If you specify the name of an internal table with a header line, the system always exports the actual table data.
    You cannot export data, object, and interface references.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Example
    Exporting two fields and an internal table to the buffer with structure INDX:
    TABLES INDX.
    TYPES: BEGIN OF ITAB3_TYPE,
              CONT(4),
           END OF ITAB3_TYPE.
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'KEYVALUE',
          F1(4), F2 TYPE P,
          ITAB3 TYPE STANDARD TABLE OF ITAB3_TYPE WITH
                     NON-UNIQUE DEFAULT KEY INITIAL SIZE 2,
          WA_INDX TYPE INDX.
* Fill data fields before CLUSTR
* before the actual export
INDX-AEDAT = SY-DATUM.
INDX-USERA = SY-UNAME.
* Export data.
    EXPORT F1    FROM F1
           F2    FROM F2
           ITAB3 FROM ITAB3
           TO SHARED BUFFER INDX(ST) FROM WA_INDX ID INDXKEY.
    Addition 1
    ... = f (for each object you want to export)
    Effect
    Exports the contents of the field f and stores them under the specified name.
    Addition 2
    ... FROM f (for each field you want to export)
    Effect
    Exports the contents of field f and stores them under the specified name.
    Addition 3
    ... CLIENT g (before ID key)
    Effect
    The data objects are stored in client g (as long as the import/export table dbtab is client-specific).
    Addition 4
    ... FROM wa (as last addition or after dbtab(ar))
    Effect
    Use this addition if you want to store user data fields in the application buffer. Instead of the table work area, the system uses the specified work area wa. The specified work area must have the same structure as the table dbtab.
    Example
    DATA WA LIKE INDX.
    DATA F1.
    WA-AEDAT = SY-DATUM.
    WA-USERA = SY-UNAME.
    WA-PGMID = SY-REPID.
    EXPORT F1 = F1 TO SHARED BUFFER INDX(AR)
                   CLIENT '001' ID 'TEST'
                   FROM WA.
    Note
    Catchable runtime error
EXPORT_BUFFER_NO_MEMORY: The EXPORT data cluster is too big for the application buffer. This error should not occur often, since the buffer uses a procedure similar to LRU (Least Recently Used) to monitor the buffer contents. However, if the error does occur, you can increase the profile parameter rsdb/obj/buffersize (see Profile Parameter Attributes), which may help.
    Regards,
    Rich Heilman

  • EXPORT - IMPORT in BACKGROUND JOB

    Hello ABAP Gurus,
    There are two programs I am using.
    In 1st Program I am Exporting the Data to the ABAP Memory and then after that scheduling the second program in background from 1st Program through
    1 ) JOB_OPEN
    2 ) SUBMIT
    3 ) JOB_CLOSE
Now I am trying to import the exported data in the second program, which is scheduled in the background,
but I am not able to import it.
My question is: do EXPORT/IMPORT statements work in such a scenario, when a background job is scheduled?
It works fine if I only SUBMIT and do not put it in a background job.
Looking forward to the answer.
Helpful answers will definitely be awarded.
    Thanks in Advance
    Sudhanshu Garg

    Hi Sudhanshu,
    Export/import to memory uses ABAP memory which will not be accessible by the background job.
There is no need to create the structures in the dictionary.
    You can use the Table INDX for storing your data in the database.
    See the link below for an example.
    http://help.sap.com/saphelp_45b/helpdata/en/34/8e73a36df74873e10000009b38f9b8/content.htm

  • Export-import in 9.7

    Hello DB6 experts!
    I've made an export-import in a 9.7 database.
    The problem is that after this clean process, I can't use the statement:
    db2 alter tablespace tbs_name reduce MAX
    it gives me an error:
    SQL1763N  Invalid ALTER TABLESPACE statement for table space
This error is expected if the tablespace was created in a DB version prior to 9.7.
However, since I have made an export-import, I think the tablespace was created in version 9.7.
    Do you know how can I get rid of this error?
    regards,
    Filipe Vasconcelos

    I was able to check the log sapinst_dev.log and there is the following statement:
    function createTablespaceCreateStatement <done>, return <create tablespace PR8#BTABD
      in nodegroup SAPNODEGRP_PR8 pagesize 16k  extentsize 2 prefetchsize automatic
      no file system caching
      dropped table recovery off;
    So these tablespaces were created in 9.7 during the import.
    thanks.
    regards,
    Filipe

  • OWB - issue in export  import

    hi,
When I export/import an OWB mapping that has a MERGE statement from one OWB repository to another, some of the columns are missing in the MERGE. I am using OWB 10g R2.
E.g. if the MERGE is based on columns c1, c2, c3, then after exporting/importing the mapping to a different environment, the imported mapping has the MERGE based on only c1.
    Thanks

There is Bug 5705198: LOADING PROPERTIES CHANGED AFTER MDL EXPORT/IMPORT (fixed in the 10.2.0.4 OWB patchset), a similar problem with the lost MATCH COLUMN WHEN UPDATING ROW property during MDL import.
Maybe this is your case.
There is no workaround for this problem, only patching to OWB 10.2.0.4.
    Regards,
    Oleg

  • Extents in Export/Import files

I work on Oracle 8.1.7 with HP-UX. I need to create a schema in QA with exactly the same tables as in Prod (data in QA is a subset of Prod).
    Tables in prod have very big initial extents.
    I created a dump file by exporting the prod env.
I imported the dump file (from prod) with the INDEXFILE option and got the SQL file. However, each 'create table' statement has the initial extent size included in it. I would have to manually change 600 tables so that the initial extent is smaller for the QA environment. I would like to use the default initial extent size for each table.
    I tried using the COMPRESS=Y and N options and could not spot any difference.
    Is there a way to completely avoid the extents or storage parameter in the export/import process ?
    Appreciate any help on this
    Thanks

Hi Tushar,
1. How do I pass this internal table to a function module?
I assume you are creating your own Y/Z FM. Pass it through the TABLES parameter.
2. When I am creating the function module in SE37, where do I define this internal table type?
Define it in the TABLES interface. What type? The same type that was defined when passing it in the user-exit function module; if you look at the FM of the user-exit, you will see it.
3. Where do I define the error structure type (which is returned by the function module to the main program)? Is it in the EXPORT or TABLES parameter during function module creation?
Define it in the TABLES interface (not in export/import), since what you are going to return is an internal table. You can take, for example, BDCMSGCOLL, or you can create your own Y/Z structure for the same purpose (or you can use the structure type T100).
I hope it helps.
Regards,
Amit M.
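One approach for the original extents question (a sketch, with made-up names): generate the DDL with imp's INDEXFILE option as you did, strip the STORAGE clauses from that script, pre-create the tables in QA with default storage, and then run the import with IGNORE=y so imp loads the data into the existing tables instead of creating them with the Prod extent sizes:
# Pre-create the tables from the edited indexfile script first, then:
imp system/<password> FILE=prod.dmp FROMUSER=PROD TOUSER=QA IGNORE=y ROWS=y LOG=imp_qa.log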
