Imp/expdp (11g to 10g and 10g to 11g)

Hello guys,
Previously, I exported the full database from 11g and imported it into 10g as user U1.
Now I need to export the full database from 10g and import it back into 11g. Since then, many changes (error corrections and validation of procedures, views, functions, etc.) have been made on 10g.
1)
I already have user U1 on the 11g database, from which the original full export was done. If I import into 11g again, will the import replace the existing invalid procedures on 11g with the procedures that have been corrected and validated on 10g, or will it raise "ORA-31684: object already exists" during the import and leave the invalid objects in place?
2)
Or do I need to rename user U1 on 11g and then run the import after the full export is done on 10g as user U1, so that the import does not find the same objects and therefore imports everything?
Waiting for your suggestions.
Thank you very much.
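For reference, a minimal sketch of the round trip being described (the directory object dp_dir and the file names are placeholders; a dump taken with 10g expdp can be read by 11g impdp without any extra VERSION setting):

    -- on the 10g source
    expdp system/<password> FULL=Y DIRECTORY=dp_dir DUMPFILE=full10g.dmp LOGFILE=full10g_exp.log
    -- on the 11g target
    impdp system/<password> FULL=Y DIRECTORY=dp_dir DUMPFILE=full10g.dmp LOGFILE=full10g_imp.log

Whether pre-existing PL/SQL objects get replaced by that impdp is exactly what questions 1) and 2) ask.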

Duplicate thread
Re: imp/expdp (11g to 10g and 10g to 11g)

Similar Messages

  • Speeding up the expdp job with PARALLEL and FILESIZE parameters

    Daily we are going to back up 6 schemas with a total size of 80 GB.
From the Oracle documentation I gather that PARALLEL servers work well when we split the dump file, because each slave process can work with a separate file.
But I am not sure how many parallel processes should be spawned, and into how many files the dump should be split.
    The expdp command we are planning to use
expdp userid=\'/ as sysdba\' SCHEMAS=schema1,schema2,schema3,schema4,schema5,schema6 DUMPFILE=composite_schemas_expdp.dmp LOGFILE=composite_schemas_expdp.log DIRECTORY=dpump_dir2 PARALLEL=3
Related info:
    11.2.0.2
    Solaris 10 (x86_64) running on HP Proliant Machine
    8 CPU with 32gb RAM
    SQL > show parameter parallel
    NAME                                 TYPE        VALUE
    fast_start_parallel_rollback         string      LOW
    parallel_adaptive_multi_user         boolean     TRUE
    parallel_automatic_tuning            boolean     FALSE
    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      MANUAL
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     TRUE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     32
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     0
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     TRUE
    parallel_server_instances            integer     2
    parallel_servers_target              integer     32
    parallel_threads_per_cpu             integer     2
    recovery_parallelism                 integer     0

I just did a test export of a 16 GB schema (schema size confirmed from dba_segments).
I didn't specify the FILESIZE parameter. Instead I specified PARALLEL=5 with DUMPFILE=tstschema_expdp%U.dmp, hoping that expdp would create 5 different dump files for the 5 slave processes to work with. But expdp created only 4 files.
    Export: Release 11.2.0.1.0 - Production on Mon Aug 6 10:30:57 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "EXPIMP_USER"."SYS_EXPORT_SCHEMA_01":  expimp_user/******** SCHEMAS = TSTSCHEMA DUMPFILE=tstschema_expdp%U.dmp LOGFILE=test_expdp.log DIRECTORY=dpump_dir PARALLEL=5
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 9.089 GB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    . . exported "TSTSCHEMA"."FTB_RES_RCS"                      14.75 MB   13860 rows
    Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "TSTSCHEMA"."CASR_HAP_DTL"                      824.9 MB 1613143 rows
    . . exported "TSTSCHEMA"."JOHN"                                  0 KB       0 rows
    . . exported "TSTSCHEMA"."DCVERSO"                           7.210 GB 7502854 rows
    Master table "EXPIMP_USER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for EXPIMP_USER.SYS_EXPORT_SCHEMA_01 is:
      /u05/bkp_DUMP/tstschema_expdp01.dmp
      /u05/bkp_DUMP/tstschema_expdp02.dmp
      /u05/bkp_DUMP/tstschema_expdp03.dmp
      /u05/bkp_DUMP/tstschema_expdp04.dmp
    Job "EXPIMP_USER"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:07:36My conclusion:
If you want the number of dump files generated to match the number of parallel processes (N), then use FILESIZE.
i.e.
Say you want to export an 80 GB schema and you are setting PARALLEL=5; it is better to specify FILESIZE after doing the arithmetic.
Divide 80/5 = 16. So set FILESIZE=16g, and expdp will create 5 different files for the 5 slave processes to work with.
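Putting that conclusion together with the original command, a minimal sketch (the schema names and the dpump_dir2 directory are the poster's; the %U template lets expdp number the files):

    expdp userid=\'/ as sysdba\' SCHEMAS=schema1,schema2,schema3,schema4,schema5,schema6 \
      DIRECTORY=dpump_dir2 DUMPFILE=composite_schemas_expdp%U.dmp \
      LOGFILE=composite_schemas_expdp.log PARALLEL=5 FILESIZE=16g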

  • IMP: When will go HTTP and When we will go for SOAP????

    Hey Experts,
I have a requirement to sync data from SAP to external applications, e.g. .NET. Which adapter should I use here, HTTP or SOAP?
When should we go for HTTP and when for SOAP? Which suits .NET applications and which suits Java applications?
Can you explain the difference between these two adapters and which one is best?
Thanks in advance
Regards
JS

Sarathy,
Which adapter should I use here, HTTP or SOAP?
It completely depends on the receiving system. I prefer web services, so if they can receive web services, then go for SOAP.
When should we go for HTTP and when for SOAP? Which suits .NET applications and which suits Java applications?
It completely depends on the scenario. In system integration, you first identify the scenario, and then choose the adapter which best suits it.
For .NET applications, it's easier to create and maintain web services.
Can you explain the difference between these two adapters and which one is best?
Please do a search on SDN / SAP Help; you will find plenty of blogs and forum postings on this topic.
Regards,
Neetesh

  • How to give filter condition in expdp to filter parent and child records?

    Hi,
I have a requirement to export a schema. Assume there is an account-details table and an account-transactions table under the schema, with a parent-child relationship between them. If I filter accounts from the account-details table, the corresponding transactions in the account-transactions table should be filtered too. How can I achieve this?
    Regards,
    Venkat Vadlamudi
    Mobile:9850499800

Try with USER_DEPENDENCIES:
select * from user_dependencies
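A more direct approach (an assumption, not from the thread) is Data Pump's QUERY parameter, applying the parent filter to the child table through a subquery. The table, column, and filter names below are hypothetical, and a parameter file avoids shell-quoting problems:

    # acct.par, run as: expdp <user>/<password> PARFILE=acct.par
    DIRECTORY=dp_dir
    DUMPFILE=acct.dmp
    SCHEMAS=acct_owner
    QUERY=acct_owner.account_details:"WHERE region = 'WEST'"
    QUERY=acct_owner.account_transactions:"WHERE account_id IN (SELECT account_id FROM acct_owner.account_details WHERE region = 'WEST')"

This keeps the exported child rows consistent with the filtered parent rows.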

  • Differences in 9i and 10g imp

    Hi all,
I've just done an import from production into two dev databases, one 9i and the other 10g. Comparing the two logs, I got the following error on 10g but not on 9i, which seems strange to me:
ORA-02299: cannot validate (XXX.<table_name>) - duplicate keys found
Not sure why this happened on 10g but not on 9i, as the imports were done exactly the same way.
Would appreciate any comments on this.
    Thanks

    If you run:
    $ oerr ora 2299
    02299, 00000,"cannot validate (%s.%s) - duplicate keys found"
    // *Cause: an alter table validating constraint failed because the table has
    // duplicate key values.
    // *Action: Obvious
You'll see that imp attempted to validate and enable a constraint after importing the data and encountered duplicate data in a uniquely constrained column or group of columns.
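To locate the offending rows, something along these lines works (a sketch; the table and key column are placeholders):

    SELECT key_col, COUNT(*)
      FROM xxx.table_name
     GROUP BY key_col
    HAVING COUNT(*) > 1;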
I suspect that if it worked in one database and not the other, you specified ignore=y and there is already data in the 10g database. But as suggested, please provide the imp command used.
    Cheers,
    Stuart.

  • IMPDP and EXPDP

Is there an option in EXPDP (Data Pump Export) and IMPDP (Data Pump Import) to export/import the packages and procedures from one user/schema to another within the same database?

You can use the INCLUDE parameter and the REMAP_SCHEMA parameter:
expdp user/password include=PROCEDURE,PACKAGE ...
If you have a loopback dblink, then you can do it all in one command without creating a dump file; no expdp is required. Do this:
impdp user/password network_link=your_loopback_dblink include=PACKAGE,PROCEDURE remap_schema=old:new schemas=old ...
    Hope this helps
    Dean
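For the dump-file route, a fuller sketch (the directory object dp_dir and the schema names are placeholders; the trailing options from Dean's commands are left out):

    expdp system/<password> DIRECTORY=dp_dir DUMPFILE=plsql.dmp SCHEMAS=old_owner INCLUDE=PROCEDURE,PACKAGE
    impdp system/<password> DIRECTORY=dp_dir DUMPFILE=plsql.dmp REMAP_SCHEMA=old_owner:new_owner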

  • Some questions on RMAN and others

Hi,
I have some doubts and need some clarifications to clear them up... thanks in advance.
Can data be copied from one db, say A, to another db, say B, if A is running on a Windows 32-bit OS and db B is on Solaris 64-bit?
Can I have a primary db on 10.2.0.4 and a physical standby for this db on 11g?
I know RMAN can exclude a tablespace, but can we exclude tables, like %table_name% in Data Guard? I know we can't; I just wanted to confirm.
Can I restore one specific tablespace from PROD to TEST?
I have an out-of-date TEST db and have added additional datafiles to PROD. How can I update TEST without recreating the entire db?

Can data be copied from one db, say A, to another db, say B, if A is running on a Windows 32-bit OS and db B is on Solaris 64-bit?
Yes, you can do it either through transportable tablespaces or using exp/imp (expdp/impdp in 10g).
Can I have a primary db on 10.2.0.4 and a physical standby for this db on 11g?
No, this is not possible.
I know RMAN can exclude a tablespace, but can we exclude tables, like %table_name% in Data Guard? I know we can't; I just wanted to confirm.
I didn't understand the question; please elaborate.
Can I restore one specific tablespace from PROD to TEST?
You cannot restore it, but you can move it using the transportable tablespace feature.
I have an out-of-date TEST db and have added additional datafiles to PROD. How can I update TEST without recreating the entire db?
You can add those datafiles to TEST.
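A minimal transportable tablespace sketch (the tablespace and path names are hypothetical, and this assumes the two platforms share an endian format; otherwise the datafiles need an RMAN CONVERT step):

    -- on PROD: make the tablespace read-only, then export its metadata
    ALTER TABLESPACE app_data READ ONLY;
    expdp system/<password> DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=app_data
    -- copy the tablespace's datafiles to TEST, then plug them in
    impdp system/<password> DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/TEST/app_data01.dbf'
    -- finally, return the tablespace to read-write on both databases
    ALTER TABLESPACE app_data READ WRITE;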

  • How to export a user and their schema from one 10g database to another?

    Hi,
    I would like to export a user and their entire schema from one 10g database to another one. How do I do this?
    thx
    adam

If you want to export a user and the schema owned by that user, and import it into the same user in a different database, or into a different user in the same database, you can use the exp and imp commands as described in the Utilities manual.
These commands are very versatile and have a lot of options - well worth learning properly. To give you a simplistic shortcut, see below: I create a user 'test_move', create some objects in the schema, export, create a new user 'new_move' in the database, and import.
    oracle@fuzzy:~> sqlplus system/?????
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Mar 11 21:46:54 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> create user test_move identified by test_move;
    User created.
    SQL> grant create session, resource to test_move;
    Grant succeeded.
    SQL> connect test_move/test_move
    Connected.
    SQL> create table test (x number);
    Table created.
    SQL> insert into test values (1);
    1 row created.
    SQL> exit
    Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    oracle@fuzzy:~> exp system/????? file=exp.dmp owner=test_move
    Export: Release 10.2.0.1.0 - Production on Sat Mar 11 21:48:34 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user TEST_MOVE
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user TEST_MOVE
    About to export TEST_MOVE's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export TEST_MOVE's tables via Conventional Path ...
    . . exporting table                           TEST          1 rows exported
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    oracle@fuzzy:~> sqlplus system/?????
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Mar 11 21:49:23 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> create user new_move identified by new_move;
    User created.
    SQL> grant create session, resource to new_move;
    Grant succeeded.
    SQL> exit
    Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    oracle@fuzzy:~> imp system/????? file=exp.dmp fromuser=test_move touser=new_move
    Import: Release 10.2.0.1.0 - Production on Sat Mar 11 21:50:12 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    . importing TEST_MOVE's objects into NEW_MOVE
    . . importing table                         "TEST"          1 rows imported
    Import terminated successfully without warnings.
oracle@fuzzy:~>
If moving between databases, remember to set the SID properly before the import. If keeping the same userid, skip the fromuser/touser stuff in the import.
    There are many variations on the theme ...
    You can simplify this. You can select tables individually. You can use a parameter file. You can transport all the constraints and data. You can skip the data and only move the definitions. You can get some help (imp/exp help=yes).
    And, if it's all 10g, there is a new and improved facility called expdp/impdp (dp = data pump) which has a lot more capability as well, including direct transfer (no intermediate file) together with suspend/restart. Also documented in the Utilities manual.

  • Migrate exp/imp into data pump

    Hi Experts,
We use exp/imp to export about 150 GB of data to support Streams.
As I understand it, Data Pump will speed up the export.
How do I migrate my exp/imp syntax to Data Pump?
My exp/imp commands are:
exp USERID=SYSTEM/tiger@test OWNER=tiger FILE=D:\Oraclebackup\CLS\exports\test.dmp LOG=D:\Oraclebackup\test\exports\logs\exportTables.log OBJECT_CONSISTENT=Y STATISTICS=NONE
imp USERID=SYSTEM/tiger FROMUSER=tiger TOUSER=tiger CONSTRAINTS=Y FILE=test.dmp IGNORE=Y COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    Thanks
    Jim

You are right - expdp is faster and more useful than the classic exp utility.
There are several things to note about EXPDP:
- it can only write its dump files locally on the current server
- you must create a directory object in the database and grant read/write privileges on it to the user
For example:
create directory dump as 'd:\export\hr';
grant read,write on directory dump to hr;
Then we can do the export:
expdp hr/hr DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=exportTables.log
After the export we will see two files in the directory 'd:\export\hr'.
For other features, see expdp help=y and the Oracle documentation.
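For the import side, a rough Data Pump equivalent of Jim's imp command (a sketch: FROMUSER/TOUSER with the same user becomes a plain SCHEMAS import, and IGNORE=Y maps loosely to TABLE_EXISTS_ACTION=APPEND):

    impdp SYSTEM/tiger DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=importTables.log SCHEMAS=tiger TABLE_EXISTS_ACTION=APPEND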

  • Differences between bapi's and bdc

Hi Friends,
What are the most important differences between BAPIs and BDC?
Points for sure...
Thanks in advance
Vijaya

Hi Vijaya,
A few inputs from the net:
BAPI is used only when one is available for the particular transaction, like delivery or sales order, but BDC can be used for any transaction which has screens and fields.
BAPI updates the database directly, whereas BDC runs through the screen flow. So BAPI can't handle all the flow-logic checks and enhancements put in by the programmer to facilitate the user's requirements.
BAPI is the higher-end choice for transferring data from SAP to non-SAP systems and vice versa. For example, if we are using a VB application and want to connect to SAP, retrieve data, and then change and update that data in SAP, we can use a BAPI for that.
Apart from that, we can also use BAPIs for uploading/downloading data between SAP and non-SAP systems, like BDC, provided an existing BAPI is available for it.
BAPI function modules also do all the checks required for data integrity, as transactions do for BDC.
There is one more advantage to using BAPI instead of BDC.
When we upgrade, there is a possibility that the screen elements of transactions change, depending on the requirement. In that case our BDC program may or may not work (depending on the screen changes made), and unless and until we prepare a new BDC program we can't use the old one. But with BAPIs, SAP promises to keep the old BAPI and to provide an upgraded BAPI for new functionality. Until we write a program against the new BAPI, we can keep using the existing one.
Source: sap-img.com
This somehow summarizes what I wanted to convey, hence this extract. Hope it helps with your question.
Br,
Sri
Award points for helpful answers

  • Delete millions of rows and fragmentation

    Hi Gurus,
I have deleted 20 lakh (2 million) rows from a table which has 30 lakh (3 million) rows,
and I want to release the fragmented space. Please tell me the procedure, other than exp/imp or ALTER TABLE MOVE,
and also the recommended way to do this in a prod env (coalesce / alter move etc.).
    db version is 11.2
    Thanks for great help
    Raj

    870233 wrote:
Hi Gurus,
I have deleted 20 lakh (2 million) rows from a table which has 30 lakh (3 million) rows, and I want to release the fragmented space. Please tell me the procedure, other than exp/imp or ALTER TABLE MOVE, and also the recommended way to do this in a prod env (coalesce / alter move etc.).
db version is 11.2
Thanks for great help
Raj
Instead of deleting 2 million rows out of 3 million, I would suggest creating a temporary table with the data that should be retained and dropping the original table.
I believe this will amount to less work.
    Steps would be like below:
    1. Create table your_table_temp as select * from your_table where condition to retain records;
    2. Drop table your_table;
    3. Alter table your_table_temp rename to your_table;
You might also want to exploit the advantage provided by NOLOGGING while loading your temp table.
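As concrete SQL, a sketch of those steps (your_table is the poster's placeholder; the retention condition status = 'ACTIVE' is hypothetical):

    -- 1. keep only the rows to retain, minimizing redo with NOLOGGING
    CREATE TABLE your_table_temp NOLOGGING AS
      SELECT * FROM your_table WHERE status = 'ACTIVE';
    -- 2. drop the original and take over its name
    DROP TABLE your_table;
    ALTER TABLE your_table_temp RENAME TO your_table;
    -- 3. re-create indexes, constraints, grants, and triggers on the renamed table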

  • DDL Query for ANY EVALUATION CONTEXT and ...

Hi there,
This is a cross-posting from the SQL forum; maybe I will have more luck here.
Problem: there is a grant in my DB:
GRANT CREATE ANY EVALUATION CONTEXT to "MYNAME";
Which DBMS_METADATA call can I use to get this grant?
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', 'MYNAME') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'MYNAME') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'MYNAME') FROM DUAL;
are NOT working. BTW, the same goes for:
GRANT DROP ANY EVALUATION CONTEXT to "MYNAME";
GRANT CREATE ANY EVALUATION CONTEXT to "MYNAME";
GRANT EXECUTE ANY EVALUATION CONTEXT to "MYNAME";
I am also searching for the DDL for:
CREATE OR REPLACE CONTEXT CONTEXT_COE USING MYNAME.PKG_CONTEXT
Any ideas are very welcome!
Thanks,
H

Hi H,
It looks like a bug to me (or at least a missing feature). I got the same behaviour as you. I tried an expdp of the schema and then an impdp to a SQLFILE, and the text of the sqlfile does not contain any mention of evaluation contexts. expdp uses dbms_metadata to create all its DDL, so it looks like dbms_metadata does not handle this particular type of grant.
Looks like you might have to generate some DDL based on the contents of DBA_SYS_PRIVS. Or maybe it's fixed in 12c...
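Something like this would generate the missing grants (a sketch; the LIKE filter just narrows it to the evaluation-context privileges):

    SELECT 'GRANT ' || privilege || ' TO "' || grantee || '";' AS ddl
      FROM dba_sys_privs
     WHERE grantee = 'MYNAME'
       AND privilege LIKE '%EVALUATION CONTEXT%';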
    Regards,
    Harry
    http://dbaharrison.blogspot.com

  • Java ME SDK 3.4 problem running IMP-NG Midlets

    Hello!
I downloaded and installed the Java ME SDK 3.4 and I'm having trouble running an IMP-NG JAD. I don't know if it's an environment problem or an SDK bug...
If I make the JAD IMP-NG, I get an IllegalArgumentException when launching, even when running from the Device Selector window via right-click on IMPNGDevice1. If I make it IMP-1.0, it runs on the simulator, but doesn't show up on right-click on IMPNGDevice1... It seems to be checking the version (IMP-NG is 2.0, and the simulator is 1.0?), but I don't see what's wrong.
    Thanks for any help!
    Stacktrace:
    Installing suite from: file:///C:/devel/ecl_ng2/workspace/Hello1/.mtj.tmp/emulation/Hello1.jad
    TRACE: <at java.lang.IllegalArgumentException>,
    java.lang.IllegalArgumentException
    - com.sun.midp.installer.Version.initFromComponents(), bci=149
    - com.sun.midp.installer.Version.<init>(), bci=22
    - com.sun.midp.installer.Version.createFromString(), bci=33
    - com.sun.midp.installer.Version.compare(), bci=6
    - com.sun.midp.installer.Installer.matchVersion(), bci=81
    - com.sun.midp.installer.Installer.isSupportedProfile(), bci=23
    - com.sun.midp.installer.Installer.matchProfile(), bci=212
    - com.sun.midp.installer.Installer.installStep10(), bci=19
    - com.sun.midp.installer.Installer.performInstall(), bci=188
    - com.sun.midp.installer.Installer.resumeInstallation(), bci=7
    - com.sun.midp.installer.MidpInstaller$StartAction.run(), bci=10
    - com.sun.j2me.security.AccessController.doPrivileged(), bci=12
    - com.sun.midp.installer.MidpInstaller$InstallThread.run(), bci=9
    - java.lang.Thread.run(), bci=5
    JAD
    MIDlet-Version: 1.0.0
    MIDlet-Vendor: MIDlet Suite Vendor
    MIDlet-Jar-URL: Hello1.jar
    MicroEdition-Configuration: CLDC-1.1
    MIDlet-1: Hello1,,mobi.v2com.zion.test.Hello1
    MicroEdition-Profile: IMP-NG-2.0
    MIDlet-Name: Hello MIDlet Suite
    Midlet
    package mobi.v2com.zion.test;
    import javax.microedition.midlet.MIDlet;
    import javax.microedition.midlet.MIDletStateChangeException;
    import com.oracle.util.logging.Level;
    import com.oracle.util.logging.Logger;
public class Hello1 extends MIDlet {
  private Logger logger = Logger.getLogger(getClass().getName());
  public Hello1() {
    // TODO Auto-generated constructor stub
  }
  protected void destroyApp(boolean arg0) throws MIDletStateChangeException {
    // TODO Auto-generated method stub
  }
  protected void pauseApp() {
    // TODO Auto-generated method stub
  }
  protected void startApp() throws MIDletStateChangeException {
    logger.log(Level.INFO, "Hello1");
    System.out.println("Hello!");
  }
}

I had the same problem, and I got it to work by removing the "-2.0" from the MicroEdition-Profile line in the JAD file by hand:
MicroEdition-Profile: IMP-NG
But it's not possible to achieve this using the combo box for selecting the MicroEdition profile; it only works by changing the file in text mode.

  • Doubts in smartforms plz provide smartform notes imp topics

    hi experts
I am new to Smartforms, and in some interviews they ask about them. Could you please tell me the important topics in Smartforms, and send me good notes on creating a Smartform from scratch, plus some important FAQs?
My mail id is [email protected]

    hi Prasanna,
    SmartForms : some links
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVSCRSF/BCSRVSCRSF.pdf
    http://www.sapgenie.com/abap/smartforms.htm
    http://www.sap-img.com/smartforms/sap-smart-forms.htm
    http://help.sap.com/saphelp_46c/helpdata/en/a5/de6838abce021ae10000009b38f842/frameset.htm
    http://help.sap.com/printdocu/core/Print46c/en/Data/htm/english.htm
    http://www.sap-img.com/smartforms/smart-001.htm
    http://www.sap-img.com/smartforms/smartform-tutorial.htm
    http://www.sap-img.com/smartforms/smart-002.htm

  • Expdp over network fails.

I have test1 and test2 DBs on two different servers. I want to run expdp from the test2 db and get an export of a table from test1. I have created a db link and used NETWORK_LINK in the expdp command, and granted read/write on the directory to the user with which I am connecting to the db.
Using expdp over a NETWORK_LINK fails with the error below. For the same user, I have tried it locally on the test2 db and it works fine.
Is there any other step to be done?
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-39087: directory name DUMPS is invalid

expdp un/pwd@test1db job_name=EXP_test1 directory=dumps network_link=test1.dummy.COM dumpfile=dp.dmp logfile=dp.LOG tables=a.table
This is the problem. When running an export using NETWORK_LINK to test1, you need to connect to test2, not test1.
The idea is that you connect to test2 first; test2 then connects to test1 over the database link, brings the data across the network, and dumps it to the local directory.
Your command is actually running the export against test1, and in that case test1 would need to have the directory called DUMPS defined.
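The corrected invocation implied by that answer (the credentials and the TNS alias test2db are placeholders):

    expdp un/pwd@test2db JOB_NAME=EXP_test1 DIRECTORY=dumps NETWORK_LINK=test1.dummy.COM DUMPFILE=dp.dmp LOGFILE=dp.LOG TABLES=a.table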
