Interface Problems: DBA => Data Pump => Export Jobs (Job Name)

Hello Folks,
I need your help in troubleshooting an SQL Developer interface problem.
DBA => Data Pump => Export Jobs (Job Name) => Data Pump Export => Job Scheduler (Step):
-a- The Job Name and Job Description fields are not visible. Well, the fields are there, but each is only about half a character wide, so I can't see or enter anything in them.
Import Wizard:
-b- The Job Name field under the wizard's first "Type" step looks exactly the same as in the Export case.
-c- I can't see any rows under the "Choose Input Files" section (only about 1 mm of the first row is visible; everything else is hidden).
My env:
-- Version 3.2.20.09, Build MAIN-09.87
-- Windows 7 (64 bit)
It could be related to the fact that I changed fonts in the Preferences. Since I don't know what the default font is, I can't change it back and test (let me know what the default is and I will test it).
PS
-- I have tried disabling all extensions except DBA Navigator (11.2.0.09.87). It didn't help.
-- There are no messages in the console if I run SQL Developer from cmd with "sqldeveloper\bin\sqldeveloper.exe".
Any help is appreciated,
Yury

Hi Yury,
a- I see those half-character-wide text boxes (in my case on Frequency) when the pop-up dialog is too small - do they go away when you make it bigger?
b- On import the name starts with IMPORT - if it is the half-character issue, have you tried making the dialog bigger?
c- I think it is size again, but my dialog at minimum size is already big enough.
Have you tried a smaller font, or making the dialogs bigger (resizing from the corners)?
I have a 3.2.1 version where I have not changed the fonts; the default under Tools -> Preferences -> Code Editor -> Fonts appears to be:
Font Name: DialogInput
Font size: 12
Turloch
-SQLDeveloper Team

Similar Messages

  • Exporting whole database (10GB) using Data Pump export utility

    Hi,
    I have a requirement to export the whole database (10 GB) using the Data Pump export utility, because it is not possible to send a 10 GB dump on a CD/DVD to the system vendor of our application (to analyze a few issues we have).
    When I checked online, a full export is available, but I am not able to understand how it works, as we have never used this Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full DB export to split the files and fit them on DVDs? Is that possible?
    Please correct me if I am wrong, and kindly help.
    Thanks for your help in advance.

    You need to create a directory object.
    sqlplus user/password
    create directory foo as '/path_here';
    grant all on directory foo to public;
    exit;
    then run your expdp command.
    Data Pump can compress the dump file if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dump file. If you have 10 GB, are not compressing, and the total dump files come to 10 GB, then by specifying 600 MB you will simply get 10 GB / 600 MB = 17 dump files of 600 MB each. You will have to send them 17 CDs (probably a few more if the dump files don't get filled 100% due to parallelism).
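    A minimal sketch of such a split export, assuming the directory object foo created above (the user, dump file name, and sizes are illustrative; COMPRESSION=ALL requires 11.1 or later plus the appropriate licensing, earlier releases only compress metadata):
    expdp system/password FULL=Y DIRECTORY=foo DUMPFILE=full_%U.dmp FILESIZE=600M LOGFILE=full_exp.log
    Where compression is available, appending COMPRESSION=ALL to that command line usually shrinks the dump files considerably.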
    Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
    Dean

  • DATA PUMP export error

    Hi all
    During a Data Pump export of a big table, the dump set had reached 289 GB (FILESIZE=10G) when I got this error:
    . . exported "TEST3"."INC_T_ZMLUVY_PARTNERI" 6.015 GB 61182910 rows
    . . exported "TEST3"."PAR_T_VZTAHY" 5.798 GB 73121325 rows
    ORA-31693: Table data object "TEST3"."INC_T_POISTENIE_PARAMETRE_H" failed to load/unload and is being skipped due to error:
    ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
    ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    . . exported "TEST3"."PAY_T_UCET_POLOZKA" 5.344 GB 97337823 rows
    and export continued until :
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" completed with 1 error
    (it took 8 hours)
    Can you help me determine whether the dump is OK now? Will impdp continue after reading exp_polska03.dmp? (The dump set has 28 files, exp_polska1 - exp_polska28.) At worst I will export this one table again - is that solution OK?
    Thanks, Brano

    What expdp parameters were used?
    What is the total export dump size? (Try ESTIMATE_ONLY=Y.)
    ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
    ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
    http://systemr.blogspot.com/2007/09/section-oracle-database-utilities-title.html
    Check the link above.
    I guess the file was removed from that location, or there is not enough permission to write there.
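    If it helps, a minimal sketch of the estimate-only run mentioned above (schema and directory names are illustrative; no dump file is written in this mode):
    expdp system/password SCHEMAS=TEST3 DIRECTORY=dpump_dir ESTIMATE_ONLY=Y ESTIMATE=BLOCKS LOGFILE=estimate.log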

  • Data Pump Export with remap tables

    Hello, we have two databases on different servers, both running the same platform, Windows 2008 and Oracle 11gR2. We have a star (schema) on each server. Both schemas have a partitioned table, and every partition has its own tablespace. Star one is empty; its name is, say, "Msoon". I want to populate Msoon with ONLY the DATA of the other star, say M2. I have tried different commands, but every time I get an error. Here is my command, in which I extract one table from the database:
    expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=quest_dg2.dmp tables=M1396_1447.M1396_1447_DG2 CONTENT= DATA_ONLY
    Now I want to import its data into the Msoon star table MSOON02_DG2, which is the same table; just the name is changed. But I get the following error:
    impdp MSOON/Aa123456 DIRECTORY=data_pumpdir DUMPFILE=QUEST_DG22.dmp remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT= DATA_ONLY
    Import: Release 11.2.0.1.0 - Production on Wed Jan 12 20:34:17 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MSOON"."SYS_IMPORT_FULL_12" successfully loaded/unloaded
    Starting "MSOON"."SYS_IMPORT_FULL_12":  MSOON/******** DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp logfile=MY_L.log remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT= DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
    TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
    ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 8171
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    00000003B2AD5BB0     18990  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8192  package body SYS.KUPW$WORKER
    00000003B2AD5BB0     18552  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      4105  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8875  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      1649  package body SYS.KUPW$WORKER
    00000003B29D51D0         2  anonymous block
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
    TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
    ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 8171
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    00000003B2AD5BB0     18990  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8192  package body SYS.KUPW$WORKER
    00000003B2AD5BB0     18552  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      4105  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8875  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      1649  package body SYS.KUPW$WORKER
    00000003B29D51D0         2  anonymous block
    Job "MSOON"."SYS_IMPORT_FULL_12" stopped due to fatal error at 20:34:19
    Edited by: Oracle Studnet on Jan 12, 2011 7:36 AM

    I have a problem with Data Pump parallel processing. Here is my export statement:
    expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile= jt.log tables=M1396_1447.M1396_1447_jt CONTENT= DATA_ONLY parallel=4
    On server two I want to import it:
    impdp MSOON02/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile= jtJT.log REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT= DATA_ONLY parallel=4
    I get the following error if I use parallel. If I omit parallel, it takes too long to import the data; it took 2.5 hours to insert 2.35 GB of data into one partition of the table.
    One more thing: if I use parallel, all partitions of the table containing no records import successfully, but all partitions that hold gigabytes of data fail with the following error. Please help me out of this problem. Here is the log file of the Data Pump run (a note on REMAP_TABLE syntax follows the log):
    Import: Release 11.2.0.1.0 - Production on Thu Jan 13 14:05:04 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MSOON02"."SYS_IMPORT_FULL_05" successfully loaded/unloaded
    Starting "MSOON02"."SYS_IMPORT_FULL_05":  msoon02/******** DIRECTORY=data_pump_dir DUMPFILE=SRV03 Msoon02_jt%U.dmp logfile= REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT= DATA_ONLY parallel=4
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091201" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100101" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091101" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    . . imported "MSOON02"."MSOON02_JT":"DEF_PART_M1396_1447_JT"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100201"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100301"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100401"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100501"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100601"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100701"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100801"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100901"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20101001"      0 KB       0 rows
    Job "MSOON02"."SYS_IMPORT_FULL_05" completed with 3 error(s) at 14:21:32Edited by: Oracle Studnet on Jan 13, 2011 1:20 AM

  • How-to list the contents of a Data Pump Export file?

    How can I list the contents of a 10gR2 Data Pump Export file? I'm looking at the Syntax Diagram for Data Pump Import and can't see a list-only option.
    Regards,
    Al Malin

    Use the SQLFILE parameter of impdp, which writes all of the SQL DDL to the specified file.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm
    SQLFILE
    Default: none
    Purpose
    Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
    Syntax and Description
    SQLFILE=[directory_object:]file_name
    The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.
    Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
    -- CONNECT hr
    Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
    CONNECT hr/hr
    For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.
    Example
    The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
    impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
    A SQL file named expfull.sql is written to dpump_dir2.
    Message was edited by:
    Ranga

  • Pre Checks before running Data Pump Export/Import

    Hi,
    Oracle :-11.2
    OS:- Windows
    Kindly share the pre-checks a DBA should perform before a Data Pump export or import.
    Thanks

    When you do a tablespace mode export, Data Pump is essentially doing a table mode export of all of the tables in the tablespaces mentioned.  So if you have this:
    tablespace a contains: table 1, table 2, index 3a (on table 3)
    tablespace b contains: index 1a (on table 1), index 2a (on table 2), table 3
    and if you expdp tablespaces=a ...
    you will get table 1, table 2, index 1a, and index 2a.
    My belief is that you will not get table 3 or index 3a.  The way I understand the code to work is that you get the tables in the tablespaces you mention and their dependent objects, but not the other way around.  You could easily verify this to make sure.
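    A quick, hedged way to check this yourself - list the tables whose segments live in a given tablespace (the tablespace name A is illustrative), since those are the tables a tablespaces=a export should pick up, together with their dependent indexes:
    SELECT owner, table_name
    FROM   dba_tables
    WHERE  tablespace_name = 'A';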
    Dean

  • How can I use the data pump export from external client?

    I am trying to export a bunch of tables from a DB, but I can't figure out how to do it.
    I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
    I am trying to use TOAD's Data Pump Export utility, but I keep getting this error:
    ORA-39070: Unable to open the log file.
    ORA-39087: directory name D:\TEMP\ is invalid
    I don't understand whether it is because I am setting up the parameter file wrong, or whether the utility is trying to find that directory on the server, whereas I am expecting it to dump to my local filesystem where that directory exists.
    I'd hate to have to use SQL*Loader to create ctl files for each and every table...
    Here is my parameter file:
    DUMPFILE="db_export.dmp"
    LOGFILE="exp_db_export.log"
    DIRECTORY="D:\temp\"
    TABLES=ACCOUNT
    CONTENT=ALL
    (just trying to test it on one table so far...)
    P.S. Oracle 11g
    Edited by: trant on Jan 13, 2012 7:58 AM

    ORA-39070: Unable to open the log file.
    ORA-39087: directory name D:\TEMP\ is invalid
    The DIRECTORY here should not be a physical location; it is a logical representation (an Oracle directory object defined on the server).
    For that you have to create a directory at the SQL level, e.g. create directory exp_dp ..., and then use the created directory as DIRECTORY=exp_dp, as sketched below.
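    A minimal sketch of that setup (the directory object name, server-side path, and grantee are illustrative; the path must exist on the database server, not on the TOAD client machine):
    CREATE DIRECTORY exp_dp AS '/u01/exports';
    GRANT READ, WRITE ON DIRECTORY exp_dp TO your_user;
    The parameter file would then say DIRECTORY=exp_dp, with no drive letter, quotes, or trailing slash.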
    HTH

  • Data Pump Export Parameters

    Hi,
    Can you please explain the CONSISTENT and COMPRESS parameters in the context of Data Pump Export?
    Thanks,
    Suresh Bommalata..

    Use FLASHBACK_SCN or FLASHBACK_TIME to get the equivalent of the CONSISTENT parameter.
    http://oracledatapump.blogspot.com/2009/04/mapping-between-parameters-of-datapump.html
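    A hedged sketch of the Data Pump equivalents (the schema, directory, and file names are illustrative; FLASHBACK_TIME gives a consistent-as-of export, and COMPRESSION=ALL needs 11g with the appropriate option, otherwise METADATA_ONLY is the only compression choice):
    expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp FLASHBACK_TIME=SYSTIMESTAMP COMPRESSION=ALL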
    Edited by: user00726 on Aug 18, 2010 10:56 PM

  • Check for directory for Data Pump Export

    Hi
    In order to perform a Data Pump export, it is required to create a directory object as a prerequisite,
    e.g.:
    SQL> create directory expdp_dir as '/u01/backup/exports';
    I need to know if there is any way to check whether the folder has already been created.
    Please advise.
    Regards,
    Dheeraj

    Dheeraj Kumar M wrote:
    I need to know if there is any way to know if the folder is already created.
    Do you mean check whether the OS folder exists and is accessible to the oracle user? Try creating a file in that directory using UTL_FILE. If it succeeds, you are OK. Another option is writing a Java stored procedure to check the OS folder's existence and permissions.
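    A minimal, hedged sketch of that UTL_FILE probe (the directory object name and file name are illustrative; success only proves the path is visible and writable to the database server process):
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      l_file := UTL_FILE.FOPEN('EXPDP_DIR', 'probe.txt', 'w');
      UTL_FILE.PUT_LINE(l_file, 'directory is writable');
      UTL_FILE.FCLOSE(l_file);
    END;
    /
    Whether the directory object itself exists can be checked with: select directory_name, directory_path from dba_directories;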
    SY.

  • Data pump for HR Jobs and HR positions for v11i

    I am trying to use Data Pump (v11i / 11.5.2)
    to load the data for HR Jobs and HR Positions. I am loading the HR_PUMP_BATCH_HEADER and HR_PUMP_BATCH_LINE tables with records having the required valid values, but I am getting an error. I have successfully loaded employee data using the same method before.
    Has anybody done this? Please help.

    I got the below from Metalink.
    Please see whether it is helpful for you, since I have little knowledge of HRMS.
    The HR_PUMP_BATCH_LINE_USER_KEYS table must be seeded with a value in order for the package that follows to work. In some cases, the user must provide this value.
    I ran into this problem when trying to run the create_job_requirement API through the Data Pump. Within my PL/SQL block, this is how I passed the value to my variable:
    pv_id_flex_num_user_key := sel.sun_job_id || sel.name || sel.job_requirement_id || sel.id_flex_num || ':ID FLEX NUM USER KEY';
    I then performed the following after calling the insert_batch_line procedure and passing all the parameter values:
    SELECT hpbl.batch_line_id,
           coj_hr_ci.devl_seq_s.nextval,
           coj_hr_ci.devl_seq_s.nextval
      INTO v_batch_line_id,
           v_sequence1,
           v_sequence2
      FROM hr_pump_batch_lines hpbl
     WHERE hpbl.pval058 = pv_id_flex_num_user_key;
    INSERT INTO hr_pump_batch_line_user_keys (batch_line_id,
                                              unique_key_id,
                                              user_key_id,
                                              user_key_value)
    VALUES (v_batch_line_id,
            50141,
            v_sequence2,
            pv_id_flex_num_user_key);

  • Oracle 10g Database export problem using Data Pump

    I am using Oracle 10g on Solaris 10. I installed Oracle 10g on Solaris successfully (without ASM). But when I tried to take the export using the OEM facility, i.e.:
    OEM > Maintenance > High Availability > Data Movement > Export to Export File
    at the last screen we got the following error:
    ORA-39006: internal error Exception : ORA-39006: internal error ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79 ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926 ORA-06512: at "SYS.DBMS_DATAPUMP", line 3162 ORA-06512: at line 2 .
    Please help me to sort it out.
    Regards
    R N Sharma

    This points to a known issue with the Data Pump utility.
    Check Note 402053.1 - Internal error during EXPDP ORA-39006

  • Data Pump Export error - network mounted path

    Hi,
    Please have a look at the Data Pump error I am getting while doing an export. I am running version 11g. Please help with your feedback.
    I am getting the error due to the network-mounted path for the directory OverLoad; it works fine with a local path. I have given full permissions on the network path, and UTL_FILE is able to create files there, but Data Pump fails with the error messages below.
    Oracle 11g
    Solaris 10
    Getting below error :
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
    ORA-06512: at line 64
    DECLARE
    p_part_name VARCHAR2(30);
    p_msg VARCHAR2(512);
    v_ret_period NUMBER;
    v_arch_location VARCHAR2(512);
    v_arch_directory VARCHAR2(20);
    v_rec_count NUMBER;
    v_partition_dumpfile VARCHAR2(35);
    v_partition_dumplog VARCHAR2(35);
    v_part_date VARCHAR2(30);
    p_partition_name VARCHAR2(30);
    v_partition_arch_location VARCHAR2(512);
    h1 NUMBER; -- Data Pump job handle
    job_state VARCHAR2(30); -- To keep track of job state
    le ku$_LogEntry; -- For WIP and error messages
    js ku$_JobStatus; -- The job status from get_status
    jd ku$_JobDesc; -- The job description from get_status
    sts ku$_Status; -- The status object returned by get_status
    ind NUMBER; -- Loop index
    percent_done NUMBER; -- Percentage of job complete
    --check dump file exist on directory
    l_file utl_file.file_type;   
    l_file_name varchar2(20);
    l_exists boolean;
    l_length number;
    l_blksize number;
    BEGIN
    p_part_name:='P2010110800';
    p_partition_name := upper(p_part_name);
    v_partition_dumpfile :=  chr(39)||p_partition_name||chr(39);
    v_partition_dumplog  :=  p_partition_name || '.LOG';
         SELECT COUNT(*) INTO v_rec_count FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             IF v_rec_count != 0 THEN
               SELECT
               PARTITION_ARCHIVAL_PERIOD
               ,PARTITION_ARCHIVAL_LOCATION
               ,PARTITION_ARCHIVAL_DIRECTORY
               INTO v_ret_period , v_arch_location , v_arch_directory
               FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             END IF;
         utl_file.fgetattr('ORALOAD', l_file_name, l_exists, l_length, l_blksize);      
            IF (l_exists) THEN        
             utl_file.FRENAME('ORALOAD', l_file_name, 'ORALOAD', p_partition_name ||'_'|| to_char(systimestamp,'YYYYMMDDHH24MISS') ||'.DMP', TRUE);
         END IF;
        v_part_date := replace(p_partition_name,'P');
            DBMS_OUTPUT.PUT_LINE('inside');
    h1 := dbms_datapump.open (operation => 'EXPORT',
                              job_mode  => 'TABLE');
              dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name ||'.DMP',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
              dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name||'.LOG',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'SCHEMA_EXPR',
                                             value  => 'IN (''HDB'')');
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'NAME_EXPR',
                                             value  => 'IN (''SUBSCRIBER_EVENT'')');
              dbms_datapump.data_filter (handle      => h1,
                                         name        => 'PARTITION_LIST',
                                        value       => v_partition_dumpfile,
                                        table_name  => 'SUBSCRIBER_EVENT',
                                        schema_name => 'HDB');
              dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
              dbms_datapump.start_job (handle => h1);
                  dbms_datapump.detach (handle => h1);              
    END;
    /

    Hi,
    I tried to generate the dump with expdp instead of the API and got more specific errors;
    note that on the same path the log file does get created.
    expdp hdb/hdb DUMPFILE=P2010110800.dmp DIRECTORY=ORALOAD TABLES=(SUBSCRIBER_EVENT:P2010110800) logfile=P2010110800.log
    Export: Release 11.2.0.1.0 - Production on Wed Nov 10 01:26:13 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/nfs_path/lims/backup/hdb/datapump/P2010110800.dmp"
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    Additional information: 3
    Edited by: Sachin B on Nov 9, 2010 10:33 PM

  • Data pump export with error

    Hi All,
    I am new to DBA work, and when I run expdp for the full database I get the error below:
    "ORA-39139: Data Pump does not support XMLSchema objects. TABLE_DATA:"APPS"."abc_TAB" will be skipped."
    I am facing an issue with the exp utility, so I have to use expdp.
    Kindly help me to avoid these errors.
    Thanks,

    Thanks Srini for the doc - a good one.
    We have scheduled a daily full export backup. Now this is having problems.
    -> exp expuser/exppaswd file=/ora/exp.dmp log=/ora/exp.log direct=Y statistics=none full=Y
    Error:
    Table window348_TAB will be exported in conventional path.
    . . exporting table window348_TAB 0 rows exported
    Table wtpParameters350_TAB will be exported in conventional path.
    . . exporting table wtpParameters350_TAB 0 rows exported
    . exporting synonyms
    EXP-00008: ORACLE error 1406 encountered
    ORA-01406: fetched column value was truncated
    EXP-00000: Export terminated unsuccessfully
    Oracle asked us to drop/disable the policy. But one policy is assigned to many tables, and one table can have more than one policy.
    We haven't set any policies; they are all default ones.
    1. If I disable/drop the policies, how will that affect the application?
    2. How can I disable or drop them for all the objects in the database?
    3. How can I identify the policies - whether they are OLS or VPD? (See the sketch after this post.)
    Please help me to get a daily full export backup, or suggest any alternate solution for the same.
    Thanks,
    Asim
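    For question 3 above, a hedged starting point: VPD (row-level security) policies show up in DBA_POLICIES, so a DBA query such as the sketch below lists them together with the function each policy calls; an empty result there would suggest the policies come from elsewhere (for example Label Security, which has its own dictionary views):
    SELECT object_owner, object_name, policy_name, function
    FROM   dba_policies
    ORDER  BY object_owner, object_name;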

  • Data Pump Export issue - no streams pool created and cannot automatically create one

    I am trying to use data pump on a 10.2.0.1 database that has vlm enabled and getting the following error :
    Export: Release 10.2.0.1.0 - Production on Tuesday, 20 April, 2010 10:52:08
    Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
    ORA-31626: job does not exist
    ORA-31637: cannot create job SYS_EXPORT_TABLE_01 for user E_AGENT_SITE
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT_INT", line 600
    ORA-39080: failed to create queues "KUPC$C_1_20100420105208" and "KUPC$S_1_20100420105208" for Data Pump job
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPC$QUE_INT", line 1555
    ORA-00832: no streams pool created and cannot automatically create one
    This is my script (that I currently use on other non vlm databases successfully):
    expdp e_agent_site/<password>@orcl parfile=d:\DailySitePump.par
    this is my parameter file :
    DUMPFILE=site_pump%U.dmp
    PARALLEL=1
    LOGFILE=site_pump.log
    STATUS=300
    DIRECTORY=DATA_DUMP
    QUERY=wwv_document$:"where last_updated > sysdate-18"
    EXCLUDE=CONSTRAINT
    EXCLUDE=INDEX
    EXCLUDE=GRANT
    TABLES=wwv_document$
    FILESIZE=2000M
    My oracle directory is created and the user has rights
    Googling the issue suggests that the shared pool is too small or that streams_pool_size needs setting. shared_pool_size = 1200M, and when I query v$parameter it shows streams_pool_size = 0.
    I've tried alter system set streams_pool_size=1M; but I just get :
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-04033: Insufficient memory to grow pool
    The server is a windows enterprise box with 16GB ram and VLM enabled, pfile memory parameters listed below:
    # resource
    processes = 1250
    job_queue_processes = 10
    open_cursors = 1000 # no overhead if set too high
    # sga
    shared_pool_size = 1200M
    large_pool_size = 150M
    java_pool_size = 50M
    # pga
    pga_aggregate_target = 850M # custom
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    # vlm support
    USE_INDIRECT_DATA_BUFFERS = TRUE
    DB_BLOCK_BUFFERS = 1500000
    Any ideas why I cannot run Data Pump? I am assuming that I just need to set streams_pool_size, but I don't understand why I cannot increase it on this DB. It is set to 0 on other databases that work fine, and there I can set it, which is why I am tentatively linking the issue to VLM.
    thanks
    Robert

    SGA_MAX_SIZE?
    SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH;
    ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-04033: Insufficient memory to grow pool
    SQL> show parameter sga_max
    NAME                         TYPE      VALUE
    sga_max_size                    big integer 480M
    SQL> show parameter cache
    NAME                         TYPE      VALUE
    db_16k_cache_size               big integer 0
    db_2k_cache_size               big integer 0
    db_32k_cache_size               big integer 0
    db_4k_cache_size               big integer 0
    db_8k_cache_size               big integer 0
    db_cache_advice                string      ON
    db_cache_size                    big integer 256M
    db_keep_cache_size               big integer 0
    db_recycle_cache_size               big integer 0
    object_cache_max_size_percent          integer      10
    object_cache_optimal_size          integer      102400
    session_cached_cursors               integer      20
    SQL> ALTER SYSTEM SET db_cache_size=224M SCOPE=both;
    System altered.
    SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=both;
    System altered.
    Lukasz

  • Data pump export single tables with specific criteria with the API

    Hello, I'm trying to export some table data from a schema using the dbms_datapump API, but it gives me problems and takes too much time to run, and I don't understand why. I want to export data only into a dmp file, which I will use later to import into another schema. I'm using Oracle 10g R2.
    I want to export data from TABLE1 and TABLE2, and ONLY the first 10 rows (this is just a test for now). I used a data_filter to write the subquery that filters the rows, and NAME_LIST to filter the table names. I set INCLUDE_METADATA to 0 so as not to export metadata. But it takes 10 minutes to run, and the output log says there was an error (after 10 minutes?!):
    content of : table_dump.log
    Starting "MYSCHEMANAME"."SYS_EXPORT_TABLE_02": 
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    ORA-39166: Object IN ('TABLE1' was not found.
    ORA-39166: Object 'TABLE2') was not found.
    ORA-31655: no data or metadata objects selected for job
    Job "MYSCHEMANAME"."SYS_EXPORT_TABLE_02" completed with 3 error(s) at 15:58:47
    This is the code I use:
    DECLARE
      handle NUMBER;
      status VARCHAR2(20);
    BEGIN
      handle := DBMS_DATAPUMP.OPEN ('EXPORT', 'TABLE');
      dbms_datapump.add_file(handle => handle,filename => 'table_dump.log',directory => 'DATAPUMP_DIR',filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      dbms_datapump.add_file(handle => handle,filename => 'table_dump.dmp',directory => 'DATAPUMP_DIR',filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
      dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''MYSCHEMANAME'')');
      dbms_datapump.metadata_filter (handle, 'NAME_LIST', 'IN (''TABLE1'',''TABLE2'')');
      dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE1', 'MYSCHEMANAME');
      dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE2', 'MYSCHEMANAME');
      dbms_datapump.set_parameter(HANDLE => handle,NAME => 'INCLUDE_METADATA', VALUE => 0) ;
      dbms_datapump.START_JOB(handle);
      dbms_datapump.WAIT_FOR_JOB(handle, status);
    END;
    /
    Edited by: user10396517 on 27-feb-2012 9.17 - added the code formatting

    Welcome to the forums. When pasting code, use the {code} tags for better readability. See the FAQ for other details. And always feel free to include all the DDL/DML for your test cases, so we don't have to do much more than run your code to reproduce it ourselves.
    I've had very little luck with NAME_LIST, though I know you can do the same thing with NAME_EXPR:
    SQL> create table table1 as select * from dba_tables where rownum <= 20;
    Table created.
    SQL> create table table2 as select * from dba_tables where rownum <= 20;
    Table created.
    -- note 20 rows each.
    SQL> DECLARE
           handle NUMBER;
           status VARCHAR2(20);
         BEGIN
           handle := DBMS_DATAPUMP.OPEN ('EXPORT', 'TABLE');
           dbms_datapump.add_file(handle => handle, filename => 'table_dump.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
           dbms_datapump.add_file(handle => handle, filename => 'table_dump.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
           dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''ANDY'')');
           dbms_datapump.metadata_filter(handle, name => 'NAME_EXPR', value => 'IN (''TABLE1'',''TABLE2'')');
           dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE1', 'ANDY');
           dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE2', 'ANDY');
           dbms_datapump.set_parameter(HANDLE => handle, NAME => 'INCLUDE_METADATA', VALUE => 0);
           dbms_datapump.START_JOB(handle);
           dbms_datapump.WAIT_FOR_JOB(handle, status);
         END;
         /
    PL/SQL procedure successfully completed.
    SQL> !cat /u01/app/oracle/admin/test1/dpdump/table_dump.log
    Starting "SYS"."SYS_EXPORT_TABLE_18":
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 128 KB
    . . exported "ANDY"."TABLE1"                             29.32 KB      10 rows
    . . exported "ANDY"."TABLE2"                             29.32 KB      10 rows
    Master table "SYS"."SYS_EXPORT_TABLE_18" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_18 is:
      /u01/app/oracle/admin/test1/dpdump/table_dump.dmp
    Job "SYS"."SYS_EXPORT_TABLE_18" successfully completed at 17:01:41 As for your time issues, nothing is preventing you from seeing what your datapump session is doing. If it is waiting on something, doing a full scan of something, etc.
    Good luck.
