Data Pump: export single tables with specific criteria using the API

Hello, I'm trying to export some table data from a schema using the DBMS_DATAPUMP API, but it gives me problems and takes far too long to run, and I don't understand why. I want to export data only (no metadata) into a dmp file, which I will later import into another schema. I'm using Oracle 10g R2.
I want to export data from TABLE1 and TABLE2, and ONLY the first 10 rows of each (just for testing right now). I used a SUBQUERY data filter to restrict the rows and NAME_LIST to filter the table names, and I set INCLUDE_METADATA to 0 so that no metadata is exported. But the job takes 10 minutes to run, and the output log reports errors (after 10 minutes??!):
Contents of table_dump.log:
Starting "MYSCHEMANAME"."SYS_EXPORT_TABLE_02": 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-39166: Object IN ('TABLE1' was not found.
ORA-39166: Object 'TABLE2') was not found.
ORA-31655: no data or metadata objects selected for job
Job "MYSCHEMANAME"."SYS_EXPORT_TABLE_02" completed with 3 error(s) at 15:58:47
This is the code I use:
DECLARE
  handle NUMBER;
  status VARCHAR2(20);
BEGIN
  handle := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE');
  dbms_datapump.add_file(handle => handle, filename => 'table_dump.log',
                         directory => 'DATAPUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.add_file(handle => handle, filename => 'table_dump.dmp',
                         directory => 'DATAPUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''MYSCHEMANAME'')');
  dbms_datapump.metadata_filter(handle, 'NAME_LIST', 'IN (''TABLE1'',''TABLE2'')');
  dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE1', 'MYSCHEMANAME');
  dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE2', 'MYSCHEMANAME');
  dbms_datapump.set_parameter(handle => handle, name => 'INCLUDE_METADATA', value => 0);
  dbms_datapump.start_job(handle);
  dbms_datapump.wait_for_job(handle, status);
END;
/

Welcome to the forums. When pasting code, use the {code} tags for better readability; see the FAQ for other details. And always feel free to include all the DDL/DML for your test cases, so we don't have to do much more than run your code to reproduce it ourselves.
I've had very little luck with NAME_LIST, though I know you can do something similar with NAME_EXPR:
SQL> create table table1 as select * from dba_tables where rownum <= 20;
Table created.
SQL> create table table2 as select * from dba_tables where rownum <= 20;
Table created.
-- note 20 rows each.
SQL> DECLARE
  handle NUMBER;
  status VARCHAR2(20);
BEGIN
  handle := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE');
  dbms_datapump.add_file(handle => handle, filename => 'table_dump.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.add_file(handle => handle, filename => 'table_dump.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''ANDY'')');
  dbms_datapump.metadata_filter(handle, name => 'NAME_EXPR', value => 'IN (''TABLE1'',''TABLE2'')');
  dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE1', 'ANDY');
  dbms_datapump.data_filter(handle, 'SUBQUERY', 'WHERE rownum <= 10', 'TABLE2', 'ANDY');
  dbms_datapump.set_parameter(handle => handle, name => 'INCLUDE_METADATA', value => 0);
  dbms_datapump.start_job(handle);
  dbms_datapump.wait_for_job(handle, status);
END;
/
PL/SQL procedure successfully completed.
SQL> !cat /u01/app/oracle/admin/test1/dpdump/table_dump.log
Starting "SYS"."SYS_EXPORT_TABLE_18":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
. . exported "ANDY"."TABLE1"                             29.32 KB      10 rows
. . exported "ANDY"."TABLE2"                             29.32 KB      10 rows
Master table "SYS"."SYS_EXPORT_TABLE_18" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_TABLE_18 is:
  /u01/app/oracle/admin/test1/dpdump/table_dump.dmp
Job "SYS"."SYS_EXPORT_TABLE_18" successfully completed at 17:01:41 As for your time issues, nothing is preventing you from seeing what your datapump session is doing. If it is waiting on something, doing a full scan of something, etc.
Good luck.

Similar Messages

  • DATA PUMP export error

    Hi all
    During a Data Pump export of a big table, the dump set reached 289 GB (FILESIZE=10G) and I got this error:
    . . exported "TEST3"."INC_T_ZMLUVY_PARTNERI" 6.015 GB 61182910 rows
    . . exported "TEST3"."PAR_T_VZTAHY" 5.798 GB 73121325 rows
    ORA-31693: Table data object "TEST3"."INC_T_POISTENIE_PARAMETRE_H" failed to load/unload and is being skipped due to error:
    ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
    ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    . . exported "TEST3"."PAY_T_UCET_POLOZKA" 5.344 GB 97337823 rows
    and the export continued until:
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" completed with 1 error
    (it took 8 hours)
    Can you tell me whether the dump is OK now? Will impdp continue after reading exp_polska03.dmp? (The dump set has 28 files, exp_polska1 - exp_polska28.) At most I would export this one table again.. is this solution OK?
    Thanks, Brano

    What are the expdp parameters used?
    What is the total export dump size? (Try ESTIMATE_ONLY=y.)
    > ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
    > ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
    http://systemr.blogspot.com/2007/09/section-oracle-database-utilities-title.html
    Check the above link.
    I guess the file was removed from that location, or there is not enough permission to write to it.
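    For reference, ESTIMATE_ONLY runs just the size estimate and writes no dump file. A minimal sketch, assuming the TEST3 schema from your log (credentials are placeholders):

    expdp system/<password> SCHEMAS=TEST3 ESTIMATE_ONLY=y ESTIMATE=BLOCKS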

  • How-to list the contents of a Data Pump Export file?

    How can I list the contents of a 10gR2 Data Pump Export file? I'm looking at the Syntax Diagram for Data Pump Import and can't see a list-only option.
    Regards,
    Al Malin

    Use the SQLFILE parameter of impdp, which writes all the SQL DDL to the specified file.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm
    SQLFILE
    Default: none
    Purpose
    Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
    Syntax and Description
    SQLFILE=[directory_object:]file_name
    The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.
    Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
    -- CONNECT hr
    Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
    CONNECT hr/hr
    For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.
    Example
    The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
    impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
    A SQL file named expfull.sql is written to dpump_dir2.

  • How Can I Export A Table With Its Entries ( values )

    Hi guys,
    How can I export a table together with the entries (values) it is populated with?
    Best Regards,
    Fateh

    To export a table and its data you can do it a number of ways. Do you have access to SQL Developer? If not, you can do it through the APEX SQL Workshop:
    Under SQL Workshop > Utilities > Data Unload > To Text (or XML, whichever you are more comfortable with): select your schema, click Next, select the table, click Next, then select the columns you want exported (select ALL ITEMS in the select list) and click Next. For "optionally enclosed by" I usually enter a " (sometimes I have long strings in columns); check the "include column names" checkbox to get the column names at the top of the document, then click the Unload Data button.
    To get the DDL (the code to build the table): SQL Workshop > Utilities > Generate DDL > Create Script: select the schema, click Next, check the Table checkbox, click Next, then check the checkbox of the table you want the DDL for.
    Thank you,
    Tony Miller
    Webster, TX
    There are two kinds of pedestrians -- the quick and the dead.
    If this question is answered, please mark the thread as closed and assign points where earned..

  • To upload a data into SAP Table with the help of RFC function in BODS

    Hi,
    Please provide a step-by-step solution for uploading data into any SAP table with the help of an RFC function in Data Services.
    I have created an RFC function that uploads data into an SAP table. The RFC function contains one table parameter that has the same structure as my database table.
    In Data Services, how can I fill the table parameter of the RFC function? I am using the function in a query transform of Data Services, but it gives me an error.
    I also followed the link http://wiki.sdn.sap.com/wiki/display/BOBJ/BusinessObjectsDataServicesTipsand+Tricks
    but it did not help me.
    Thanks,
    Abhishek

    Hi Abhishek,
    Did you import the function module in the SAP datastore first? When you open the SAP datastore, the function should be listed in the 'functions' section. If not, import it. Make sure your function is remote executable.
    Once the function is there, you can use it in a transformation. In 'Schema Out' right-click on 'Query' (top level) and choose 'New Function Call'. You can then select a datastore and a function in the datastore. The wizard will show you which output parameters are available. I believe you have to add at least one and can select as many as you like.
    After confirming your selection the function and the output parameters appear in Schema Out. You can then right-click on the function and choose 'Modify function call'. A popup will appear where you can specify the input parameters.
    I hope this helps.
    Jan.

  • Upload data to an internal table with Services of the Table Tool

    Hi,
    I'm trying to upload data to an internal table with the new ABAP debugger, but I can't see the option in Services of the Table Tool. I found the information in this SAP link, and I have also seen screenshots of it on other pages.
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/49/2db60934e414d0e10000000a42189b/content.htm
    Does this option depend on the SAP version or patch level, or do I need to configure the debugger?

    Hi,
    I used the statement
    SPLIT i_file AT '|' INTO TABLE it_file.
    but it overwrites each row instead of appending to the internal table inside the DO ... ENDDO loop.
    My actual requirement is to download the pipe-delimited text file and send an XLS attachment to the
    distribution mail IDs.
    I know how to send the mails using the FM, but I need to read all the data from the file line by line and pass it to the mail-sending function module.
    Can you help me achieve this?
    Regards
    Jay

  • Exporting form data to a single table

    I have created a form in LiveCycle Designer ES 8.2  and simply want to export the data into a single table that I can then import into Access. It appears that in LiveCycle the row elements of a table are children of the table element and the column elements are children of the row elements.  I am wondering if there is an easy way to design a form in LiveCycle so that each cell of a table is a root element like the rest of the fields in the form.  I basically want to export one row of data from each form and append it to a larger table. Exporting the data as csv or tab-delimited would be sufficient, but neither of these appears to be an option.

    The only way I've found to do this is to open the form in Acrobat, click Forms, mouse over Manage Form Data, then click Export Data. Your only options are to save as .xml or .xdp. I saved my data as .xml. I was then able to import the .xml file into Excel by clicking Data, mousing over XML, clicking Import... and browsing to the Acrobat-generated .xml file. I know this is convoluted, but it's the only way I've found to get the data into a spreadsheet. Acrobat exports the data as filename_data.xml, i.e. a form named contacts will export its data as contacts_data.xml. Be aware, I've only tested this on a form that had a table in it and nothing else. If your table is part of a form with other fields, I have no idea how the other data will look after exporting.

  • Export - only tables with data

    Hi everybody!
    I have a database in Oracle with hundreds of tables. When I export the database, all the tables are exported into the .dmp file.
    I want to export only the tables that have at least one row to the .dmp file.
    Can anyone help me by providing a solution?
    Thanks
    JD

    JayaDev(JD) wrote:
    > version 10g
    In 11g there is a feature called "deferred segment creation": if you create a new table and never insert any data into it, Oracle does not create a segment for that table, so it does not show up in dba_segments.
    That is why, if you try to export such a table with the exp utility, it is not exported. (On 10g, by contrast, every table gets a segment, so this behaviour does not help you skip empty tables.)
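    If you are on a release without that feature, a common workaround is to generate the table list yourself and feed it to the export. A minimal sketch, assuming optimizer statistics are reasonably current (NUM_ROWS is only as fresh as the last DBMS_STATS gather; MYSCHEMA is a placeholder):

    -- List tables with at least one row, per optimizer statistics
    SELECT table_name
    FROM   dba_tables
    WHERE  owner = 'MYSCHEMA'
    AND    num_rows > 0;

    The result can then be turned into a TABLES= list for exp/expdp, or into a NAME_EXPR metadata filter with DBMS_DATAPUMP.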

  • Oracle 10g - Data Pump: Export / Import of Sequences ?

    Hello,
    I'm new to this forum and also to Oracle (version 10g). Since I could not find an answer to my question, I am opening this post hoping to get some help from the experienced users.
    My question concerns the Data Pump Utility and what happens to sequences which were defined in the source database:
    I have exported a schema with the following command:
    "expdp <user>/<pass> DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp LOGFILE=logfile.log"
    This worked fine and also the import seemed to work fine with the command:
    "impdp <user>/<pass> DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp"
    It loaded the exported objects directly into the schema of the target database.
    BUT:
    Something has happened to my sequences. :-(
    When I want to use them, all sequences start again with the value "1". Since I have already loaded data with higher values into my tables, I get into trouble with the PKs of these tables, because I sometimes used sequences as primary keys.
    My questions are:
    1. Did I do something wrong with the Data Pump utility?
    2. What is the correct way to export and import sequences so that they keep their actual values?
    3. If the behaviour described here is correct, how can I fix the values so that they continue from the last value used in the source database?
    Thanks a lot in advance for any help concerning this topic!
    Best regards
    FireFighter
    P.S.
    My English may not sound perfect since it is not my native language. Sorry for that! ;-)
    But I hope someone can understand it nevertheless. ;-)

    > 1. Did I do something wrong with the Data Pump utility?
    I do not think so. But maybe with the existing schema :-(
    > 2. What is the correct way to export and import sequences so that they keep their actual values?
    If the sequences already exist in the target before the import, Oracle does not drop and recreate them. So you need to ensure that the sequences do not exist in the target, or that the existing ones are dropped before the import.
    > 3. How can I fix the values so that they continue from the last value used in the source database?
    You can either rerun the import after the above correction, or drop and manually recreate the sequences to START WITH the next value of the source sequences.
    The easier way is to generate a script from the source, if you know how to do it; see the sketch below.
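    A minimal sketch of such a generator, run against the source database (SRC_SCHEMA is a placeholder; LAST_NUMBER in DBA_SEQUENCES is at or beyond the next value the sequence would hand out, so it is a safe starting point):

    -- Generate CREATE SEQUENCE statements that resume from the source values
    SELECT 'CREATE SEQUENCE ' || sequence_owner || '.' || sequence_name ||
           ' START WITH '     || last_number    ||
           ' INCREMENT BY '   || increment_by   || ';' AS ddl
    FROM   dba_sequences
    WHERE  sequence_owner = 'SRC_SCHEMA';

    Spool the output to a file and run it in the target after dropping the old sequences.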

  • How to export a table with BLOBs

    Hi,
    when I have to export a table with its own data, I generally export it using the export utility on UNIX.
    I've never tried to export a table with a BLOB. Is it possible in the same manner, or does it require something else?
    Thanks!

    Hi Mark,
    Please see the notes below, which could be helpful for your issue:
    Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance [ID 281461.1]
    Master Note for Data Pump [ID 1264715.1]
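    For what it's worth, Data Pump exports LOB columns without any special options; a minimal sketch (credentials, directory, and table name are placeholders):

    expdp scott/<password> DIRECTORY=DATA_PUMP_DIR TABLES=BLOB_TABLE DUMPFILE=blob_table.dmp LOGFILE=blob_table.log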
    Regards,
    Helios

  • UDI-00018: Data Pump client is incompatible with database version 11.2.0.1

    Hi
    I am trying to import data into Oracle 11g Release 2 (11.2.0.1) using the impdp utility and I am getting the error below:
    UDI-00018: Data Pump client is incompatible with database version 11.2.0.1.0
    The export dump was taken in a database with Oracle 11g Release 1 (11.1.0.7.0), and I am trying to import into a higher version of the database. Is there any parameter I have to set to avoid this error?

    AUTHSTATE=compat
    A__z=! LOGNAME
    CLASSPATH=/app/oracle/11.2.0/jlib:.
    HOME=/home/oracle
    LANG=C
    LC__FASTMSG=true
    LD_LIBRARY_PATH=/app/oracle/11.2.0/lib:/app/oracle/11.2.0/network/lib:.
    LIBPATH=/app/oracle/11.2.0/JDK/JRE/BIN:/app/oracle/11.2.0/jdk/jre/bin/classic:/app/oracle/11.2.0/lib32
    LOCPATH=/usr/lib/nls/loc
    LOGIN=oracle
    LOGNAME=oracle
    MAIL=/usr/spool/mail/oracle
    MAILMSG=[YOU HAVE NEW MAIL]
    NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
    NLS_DATE_FORMAT=DD-MON-RRRR HH24:MI:SS
    ODMDIR=/etc/objrepos
    ORACLE_BASE=/app/oracle
    ORACLE_HOME=/app/oracle/11.2.0
    ORACLE_SID=AMT6
    ORACLE_TERM=xterm
    ORA_NLS33=/app/oracle/11.2.0/nls/data
    PATH=/app/oracle/11.2.0/bin:.:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/oracle/bin:/usr/bin/X11:/sbin:.:/usr/local/bin:/usr/ccs/bin
    PS1=nbsud01[$PWD]:($ORACLE_SID)>
    PWD=/nbsiar/nbimp
    SHELL=/usr/bin/ksh
    SHLIB_PATH=/app/oracle/11.2.0/lib:/usr/lib
    TERM=xterm
    TZ=Europe/London
    USER=oracle
    _=/usr/bin/env

  • How can I use the data pump export from external client?

    I am trying to export a bunch of tables from a DB but I can't figure out how to do it.
    I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
    I am trying to use TOAD's Data Pump Export utility but I keep getting this error:
    ORA-39070: Unable to open the log file.
    ORA-39087: directory name D:\TEMP\ is invalid
    I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server whereas I am thinking it's going to dump to my local filesystem where that directory exists.
    I'd hate to have to use SQL*Loader to create ctl files for each and every table...
    Here is my parameter file:
    DUMPFILE="db_export.dmp"
    LOGFILE="exp_db_export.log"
    DIRECTORY="D:\temp\"
    TABLES=ACCOUNT
    CONTENT=ALL
    (just trying to test it on one table so far...)
    P.S. Oracle 11g

    > ORA-39070: Unable to open the log file.
    > ORA-39087: directory name D:\TEMP\ is invalid
    The DIRECTORY here should not be a physical path; it is a logical directory object defined in the database, and it points to a location on the database server.
    You have to create the directory at the SQL level, e.g. CREATE DIRECTORY exp_dp ..., and then use it as DIRECTORY=exp_dp; see the sketch below.
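    A minimal sketch of that (path and grantee are placeholders; the path must exist on the database server's file system, not on your PC):

    -- As a privileged user on the database:
    CREATE OR REPLACE DIRECTORY exp_dp AS '/u01/app/oracle/exp_dp';
    GRANT READ, WRITE ON DIRECTORY exp_dp TO your_user;

    Your parameter file would then point at the directory object by name:
    DIRECTORY=exp_dp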
    HTH

  • How to encrypt column of some table with the single method ?

    How can I encrypt a column of a table with a single method?

    > How can I encrypt a column of a table with a single method?
    Use the dbms_crypto package.
    Assumption: TE is a user in Oracle 10g.
    We have a table that needs one column encrypted: the credit card number, which even SYSDBA must not be able to read.
    The table is:
    SQL> desc TE.temp_sales
     Name                Null?    Type
     ------------------- -------- ------------
     CUST_CREDIT_ID      NOT NULL NUMBER
     CARD_TYPE                    VARCHAR2(10)
     CARD_NUMBER                  NUMBER
     EXPIRY_DATE                  DATE
     CUST_ID                      NUMBER
    1. grant execute on dbms_crypto to te;
    2. Create a table with an encrypted column:
    SQL> CREATE TABLE te.customer_credit_info(
           cust_credit_id number
             CONSTRAINT pk_te_cust_cred PRIMARY KEY
             USING INDEX TABLESPACE indx
             enable validate,
           card_type varchar2(10)
             constraint te_cust_cred_type_chk check ( upper(card_type) in ('DINERS','AMEX','VISA','MC') ),
           card_number blob,
           expiry_date date,
           cust_id number
             constraint fk_te_cust_credit_to_cust references te.customer(cust_id) deferrable
         )
         storage (initial 50k next 50k pctincrease 0 minextents 1 maxextents 50)
         tablespace userdata_Lm;
    Table created.

    SQL> CREATE SEQUENCE te.customers_cred_info_id
           START WITH 1
           INCREMENT BY 1
           NOCACHE
           NOCYCLE;
    Sequence created.
    Note: the credit card number is of BLOB data type; it will be encrypted.
    3. Load the data, encrypting the credit card numbers:
    truncate table TE.customer_credit_info;
    truncate table TE.customer_credit_info;

    DECLARE
      input_string     VARCHAR2(16) := '';
      raw_input        RAW(128)     := UTL_RAW.CAST_TO_RAW(CONVERT(input_string,'AL32UTF8','US7ASCII'));
      key_string       VARCHAR2(8)  := 'AsDf!2#4';
      raw_key          RAW(128)     := UTL_RAW.CAST_TO_RAW(CONVERT(key_string,'AL32UTF8','US7ASCII'));
      encrypted_raw    RAW(2048);
      encrypted_string VARCHAR2(2048);
    BEGIN
      for cred_record in (select upper(CREDIT_CARD) as CREDIT_CARD,
                                 CREDIT_CARD_EXP_DATE,
                                 to_char(CREDIT_CARD_NUMBER) as CREDIT_CARD_NUMBER,
                                 CUST_ID
                            from TE.temp_sales) loop
        dbms_output.put_line('type:' || cred_record.credit_card || ' exp_date:' || cred_record.CREDIT_CARD_EXP_DATE);
        dbms_output.put_line('number:' || cred_record.CREDIT_CARD_NUMBER);
        input_string := cred_record.CREDIT_CARD_NUMBER;
        raw_input    := UTL_RAW.CAST_TO_RAW(CONVERT(input_string,'AL32UTF8','US7ASCII'));
        dbms_output.put_line('> Input String: ' || CONVERT(UTL_RAW.CAST_TO_VARCHAR2(raw_input),'US7ASCII','AL32UTF8'));
        -- encrypt the card number with DES in CBC mode, PKCS5 padding
        encrypted_raw := dbms_crypto.Encrypt(
                           src => raw_input,
                           typ => DBMS_CRYPTO.DES_CBC_PKCS5,
                           key => raw_key);
        encrypted_string := rawtohex(encrypted_raw);  -- encrypted_raw is already RAW; no CAST_TO_RAW needed
        dbms_output.put_line('> Encrypted hex value : ' || encrypted_string);
        insert into TE.customer_credit_info values
          (TE.customers_cred_info_id.nextval,
           cred_record.credit_card,
           encrypted_raw,
           cred_record.CREDIT_CARD_EXP_DATE,
           cred_record.CUST_ID);
      end loop;
      commit;
    end;
    /
    4. Script to check the credit card numbers:
    DECLARE
      key_string       VARCHAR2(8) := 'AsDf!2#4';
      raw_key          RAW(128)    := UTL_RAW.CAST_TO_RAW(CONVERT(key_string,'AL32UTF8','US7ASCII'));
      encrypted_raw    RAW(2048);
      decrypted_raw    RAW(2048);
      decrypted_string VARCHAR2(2048);
      cursor cursor_cust_cred is
        select CUST_CREDIT_ID, CARD_TYPE, CARD_NUMBER, EXPIRY_DATE, CUST_ID
          from TE.customer_credit_info
         order by CUST_CREDIT_ID;
      v_id          customer_credit_info.CUST_CREDIT_ID%type;
      v_type        customer_credit_info.CARD_TYPE%type;
      v_EXPIRY_DATE customer_credit_info.EXPIRY_DATE%type;
      v_CUST_ID     customer_credit_info.CUST_ID%type;
    BEGIN
      dbms_output.put_line('ID  Type  Number  Expiry_date  cust_id');
      dbms_output.put_line('-----------------------------------------------------');
      open cursor_cust_cred;
      loop
        fetch cursor_cust_cred into v_id, v_type, encrypted_raw, v_expiry_date, v_cust_id;
        exit when cursor_cust_cred%notfound;
        -- decrypt with the same algorithm and key used for encryption
        decrypted_raw := dbms_crypto.Decrypt(
                           src => encrypted_raw,
                           typ => DBMS_CRYPTO.DES_CBC_PKCS5,
                           key => raw_key);
        decrypted_string := CONVERT(UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw),'US7ASCII','AL32UTF8');
        dbms_output.put_line(v_id || ' ' || v_type || ' ' || decrypted_string || ' ' ||
                             v_EXPIRY_DATE || ' ' || v_CUST_ID);
      end loop;
      close cursor_cust_cred;
    end;
    /

  • Interface Problems: DBA = Data Pump = Export Jobs (Job Name)

    Hello Folks,
    I need your help in troubleshooting an SQL Developer interface problem.
    DBA => Data Pump => Export Jobs (Job Name) => Data Pump Export => Job Scheduler (Step):
    -a- The Job Name and Job Description fields are not visible. Well, the fields are there, but each is just half a character wide; I can't see or enter anything in them.
    Import Wizard:
    -b- The Job Name field under the wizard's first "Type" step looks exactly the same as in the Export case.
    -c- I can't see any rows under the "Choose Input Files" section (I see just ~1 mm of the first row; everything else is hidden).
    My env:
    -- Version 3.2.20.09, Build MAIN-09.87
    -- Windows 7 (64 bit)
    It could be related to the fact that I changed fonts in the Preferences. As I don't know what the default font is, I can't change it back to the default and test (let me know what the default is and I will test it).
    PS
    -- I have tried disabling all extensions except DBA Navigator (11.2.0.09.87). It didn't help.
    -- There are no messages in the console if I run SQL Dev under cmd: sqldeveloper\bin\sqldeveloper.exe
    Any help is appreciated,
    Yury

    Hi Yury,
    a - I see those half-character-wide text boxes (in my case on Frequency) when the pop-up dialog is too small. Do they go away when you make it bigger?
    b - On import, the name starts with IMPORT. If it is the half-character issue, have you tried making the dialog bigger?
    c - I think it is size again, but my dialog at its minimum size is already big enough.
    Have you tried a smaller font, or making the dialogs bigger (resizing from the corners)?
    I have a 3.2.1 version where I have not changed the fonts. From Tools > Preferences > Code Editor > Fonts, the defaults appear to be:
    Font Name: DialogInput
    Font size: 12
    Turloch
    -SQLDeveloper Team

  • Will there performance improvement over separate tables vs single table with multiple partitions?

    Will there be a performance improvement with separate tables vs a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually carry semantics (read: if data is stored in one table it means something different than the same data in a different table), partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has its own delta store and can be loaded into and displaced from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
