How to Restart a Datapump Export Job

Hi experts,
I have 10g on Windows.
C:\Documents and Settings\jbates>set oracle_sid = ultradev
C:\Documents and Settings\jbates>expdp system/ThePassword ATTACH=DP_EXPORT_ULTRADEV_SSU
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 04 August, 2009 10:04:41
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 438
ORA-31638: cannot attach to job DP_EXPORT_ULTRADEV_SSU for user SYSTEM
ORA-31632: master table "SYSTEM.DP_EXPORT_ULTRADEV_SSU" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
When I run select * from dba_datapump_jobs, the job does exist, it has no attached sessions, and its state is NOT RUNNING.
How can I attach and restart this job? I'm sure I'm missing something very simple.
Thanks, John

You probably started the job as a different user than SYSTEM, which is why you receive:
ORA-31638: cannot attach to job DP_EXPORT_ULTRADEV_SSU for user SYSTEM
ORA-31632: master table "SYSTEM.DP_EXPORT_ULTRADEV_SSU" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
Data Pump looks for the master table in the schema of the attaching user, so connect as the user that originally created the job (the OWNER_NAME column in DBA_DATAPUMP_JOBS tells you who that is).
With kind regards
Krystian Zieja
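To see who owns the job and then re-attach and resume it as that owner, a sketch (SCOTT and its password below are placeholders for whatever OWNER_NAME the query returns):
SQL> SELECT owner_name, job_name, state, attached_sessions FROM dba_datapump_jobs;
C:\> expdp scott/password ATTACH=DP_EXPORT_ULTRADEV_SSU
Export> START_JOB
Export> CONTINUE_CLIENT
START_JOB resumes a stopped job where it left off; CONTINUE_CLIENT switches back to logging mode and also starts the job if it is idle.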

Similar Messages

  • Attach datapump export job

    Hi Guys,
    I am using Oracle 10g Release 2 on Solaris.
    I have a database that is 1.5 TB and I am doing a Data Pump export of it; the Data Pump estimate is 500 GB.
    After about 300 GB had been exported, the server crashed.
    Will I be able to attach to the Data Pump export job and continue from the 300 GB point after database startup?
    NB: I am using the flashback_time parameter for data consistency.
    Please help!
    Thanks.

    Thanks for the reply...
    I tried to attach the job after the database startup and here is what I get:
    expdp \"/ as sysdba\" attach=SYS_EXPORT_FULL_01Export: Release 10.2.0.2.0 - 64bit Production on Saturday, 30 July, 2011 17:50:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39068: invalid master table data in row with PROCESS_ORDER=-59
    ORA-39150: bad flashback time
    ORA-00907: missing right parenthesis
    I guess I just have to restart the job from scratch, as I cannot attach to it...
    Thanks...
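    If the job genuinely cannot be attached to (as here, where the master table contains a bad flashback time), the usual cleanup before re-running the export is to drop the orphaned master table as the job owner. A sketch, assuming the master table has the same name as the job (it normally does):
    SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;
    SQL> DROP TABLE sys.SYS_EXPORT_FULL_01 PURGE;
    Once the row disappears from DBA_DATAPUMP_JOBS, a fresh expdp run can be started; the roughly 300 GB already written cannot be reused by the new job.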

  • Datapump - export job problem

    Just started playing with this new feature of 10g. I created a new export job through Enterprise Manager Database Control. Now when I try to delete it, it gives me the following error message:
    Error
    The specified job, job run or execution is still active. It must finish running, or be stopped before it can be deleted. Filter on status 'Active' to see active executions
    I stopped this process successfully many times (I don't even remember how many) through Database Control, but when I try to delete the run again, it gives me the same error message.
    I logged on to SQL*Plus and confirmed that the process is still active, as it has an entry in the DBA_DATAPUMP_JOBS view. I dropped the corresponding master table and the entry is gone from the view, but when I check in Database Control, the job execution is still there with a status of "Stop Pending".
    Can somebody help me delete that job from Database Control? If you need any other information, I am more than willing to provide it.
    The job is owned by system. My platform is Windows XP Professional.
    Any help is greatly appreciated, as I have been trying different things for the last two days with no success.
    Regards,

    Hi Bhargava,
    What do you get when you execute this block -
    set serveroutput on;
    declare
      myhandle number;
    begin
      -- attach to the job; this raises an error if the job or its master table is gone
      myhandle := dbms_datapump.attach('JOB_NAME','JOB_OWNER');
      dbms_output.put_line(myhandle);
      dbms_datapump.detach(myhandle);
    end;
    /
    If this block executes without error and prints out a number, then you can try to stop the job with this block:
    declare
      myhandle number;
    begin
      myhandle := dbms_datapump.attach('JOB_NAME','JOB_OWNER');
      dbms_output.put_line(myhandle);
      -- stop_job(handle, immediate => 1, keep_master => 0, delay => 0)
      dbms_datapump.stop_job(myhandle, 1, 0, 0);
    end;
    /
    Here is an article with more information on the pl/sql API to dbms_datapump:
    http://www.devx.com/dbzone/Article/30355
    Here is the dbms_datapump documentation:
    http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_datpmp.htm
    -Natalka
    http://toolkit.rdbms-insight.com
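    For the opposite situation, where a stopped job should be resumed rather than stopped, the same attach handle works with start_job; a minimal sketch using the same placeholder job name and owner:
    declare
      myhandle number;
    begin
      -- re-attach to the stopped job and resume it where it left off
      myhandle := dbms_datapump.attach('JOB_NAME','JOB_OWNER');
      dbms_datapump.start_job(myhandle);
      -- detaching here leaves the job running in the background
      dbms_datapump.detach(myhandle);
    end;
    /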

  • How to ZIP Oracle Datapump export backup file

    Hello All,
    My customer is asking me to deliver the production data dump to the following path: \\138.90.17.56\OMNISAFE.
    I don't really understand his requirement, and he also wants me to zip the export backup file. How do I do that? Do you know a Unix command to zip backup files?
    thanks and regards
    cherry

    1013498 wrote:
    Well, thanks for your reply. My Oracle version is 11.2.0.3.b, and if we have the compression option, can you please elaborate on how to do that?
    It's in the documentation.  See Data Pump Export
    Let us say my expdp file is abc.dmp. Should I give the command gzip abc.dmp, or something different?
    Let me google that for you
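    On 11.2 the dump file can also be compressed as it is written instead of gzipped afterwards; a sketch only, with placeholder directory and schema names, and note that COMPRESSION=ALL requires the Advanced Compression Option license:
    expdp system/password directory=DUMP_DIR schemas=PROD dumpfile=abc.dmp logfile=abc.log compression=all
    Without that license, running gzip abc.dmp after the export finishes (producing abc.dmp.gz) is the simplest route.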
    One more question: what does the customer mean by "production data dump to the following path \\138.90.17.56\OMNISAFE"? Please explain.
    How do we know what the customer means?  Why don't you ask him?
    That said, it looks like a UNC path to an IP address and a shared folder on that host. Again, if the customer wants you to send them a file, you need to work with said customer on the mechanics of accessing their system.
    All that said ....
    Learning how to look things up in the documentation is time well spent investing in your career.  To that end, you should drop everything else you are doing and do the following:
    Go to tahiti.oracle.com.
    Locate the link for your Oracle product and version, and click on it.
    You are now at the entire documentation set for your selected Oracle product and version.
    BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab (for 10.x) or the "Master Book List" link (for 11.x) you will find the complete documentation library.
    Spend a few minutes just getting familiar with what kind  of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
    You don't have to read the above in depth.  They are reference manuals.  Just get familiar with what is there to be referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - Read a chapter a day from the Concepts Manual.
    - Take a look in your alert log.  One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - When you have finished reading the Concepts Manual, do it again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.

  • Enterprise Manager Job for Scripting DataPump Export for Oracle Database Running On MS Windows Server 2008

    Greetings,
    I would like an example of an Enterprise Manager job that uses an OS script for MS Windows to run a Data Pump export of my Oracle 11g database (11.2.0.3) running on a Windows 2008 server. My OEM OMS is running on a Linux server with an Oracle 12c repository. I'd like to be able to set environment variables for the date and time, my export file name (which includes the SID, export date and time, job name, and other information pertinent to the export), and so on. Thus far, I have been unsuccessful with using the % delimiter around my variables. I have also put "cmd/c" as the "Interpreter", but I am not getting anywhere in a hurry :-(
    Thanks a million!
    Mike

    1. Try to reach the server by IP (bypassing name resolution).
    2. Disabling IPv6 is not a good idea.
    3. What are the server and workstation operating systems?
    4. Is this a new or a persistent problem?
    5. If the server and workstation have different SMB versions, set the higher one down to match the lower (see the Petri site for the procedure).
    6. Uninstall the antivirus with its removal tool and test without it.
    7. Use a network monitor to diagnose the network traffic.
    M.

  • How to restart/force run of APEX jobs?

    Our development server system date was temporarily set to a period in the future, which meant that our Oracle jobs were left with their 'next date' also set to a time in the future.
    For most of our jobs, we could simply log in as the schema owner and manually restart/run the affected jobs.
    With the APEX-based jobs, we had a problem in that they are owned by the 'flows' user, whose password is unknown/random from installation.
    We got around the problem by using the 'su' method (see http://asktom.oracle.com/tkyte/Misc/su.html) to temporarily change the password.
    Is there an official way of restarting/force-running APEX jobs? By default, there are the mail queue flushing and session clearing jobs but I assume that any other jobs created within APEX will have similar issues?

    Hello,
    This is what you can do in SM37.
    Execute the list of batch jobs, when the result appears on the screen edit the ALV grid via CTRL+F7.
    Now add the following columns "TargetServ" and "Executing server".
    You will now have two extra columns in your result list.
    TargetServ contains the application server on which the job should run, if you have explicitly filled it in.
    Often this is empty; in that case SAP determines at runtime on which application server the job will run (depending, of course, on where the BGD processes are defined).
    Executing server is always filled in for executed jobs; it is the actual application server where the job ran.
    You can also add these two fields in your initial selection screen of SM37 by using the "Extended job selection" button.
    I hope this is useful.
    Wim

  • How do I save my exported doc in order to send it as an email attachment?

    how do I save my exported doc in order to send it as an email attachment?

    Plug your iPod in and pull up iTunes. Then click on the device name on the left-hand side and click the restore option. When it goes all the way through and says "your device has been restored and is restarting", once it restarts you want to choose the option that says "set up as a new device".

  • Where are device export jobs managed?

    I created a scheduled device export job from the CS/Device Management/Device summary page to run daily and create a csv file. This ran fine for several months, but then seemed to stop. I think we had an issue with the jrm process, long since resolved. During that time I created another scheduled export job. I think they are now conflicting with each other (export to the same file name). I was hoping to delete one of them, but am unable to determine where they are stored. Just for a test I created a third job, noted the jobID, but can't find that one either. They don't seem to be listed in the RME job browser. Where are these stored and how do I delete the extraneous jobs?
    Perhaps a related issue: when I go to the System View of CW, there is a panel named Job Information Status. It always contains only the string 'Loading....' in red (as do the Log Space Usage and Critical Message Window panels). Thoughts?

    My guess is you have a lot of jobs on this system, and jrm is not returning fast enough. I find that Firefox is a bit more tolerant of delays than IE. If you can, try FF and see if the job browser loads. If so, purge some old jobs.

  • Datapump Export stops at "Estimate in progress...."

    Hi,
    I am facing an issue while doing a schema-level Data Pump export in Oracle 10g. The export for a particular schema stops at "Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA", and moreover it only spawns one worker (DW01) irrespective of the PARALLEL parameter value. For other schemas the export works fine, as does a table-level export of the problematic schema.
    I am clueless, because the alert log does not show anything. Can anyone please advise?
    Here is what my parfile looks like:
    userid=id/password
    directory=impdir
    parallel=2
    schemas=prod11sep12
    dumpfile=expC2P_20120925_%U.dmp
    logfile=expC2P_20120925.log
    job_name=expC2P_20120925
    tail -f expC2P_20120925.log
    bash-3.00$ expdp parfile=expC2P.par ESTIMATE=STATISTICS
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 26 September, 2012 16:44:30
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."EXPC2P_20120925": parfile=expC2P.par ESTIMATE=STATISTICS
    Estimate in progress using STATISTICS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Alert log:
    kupprdp: master process DM00 started with pid=38, OS id=15156
    to execute - SYS.KUPM$MCP.MAIN('EXPC2P_20120925', 'SYSTEM', 'KUPC$C_1_20120926164430', 'KUPC$S_1_20120926164430', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=46, OS id=15201
    to execute - SYS.KUPW$WORKER.MAIN('EXPC2P_20120925', 'SYSTEM');
    Thanks in Advance...

    Please enable trace as per this MOS document to see if additional debug information can be gathered:
    Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump [ID 286496.1]
    HTH
    Srini
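    For reference, the trace is normally switched on by just adding the TRACE parameter to the expdp call. A sketch; 480300 is the value commonly quoted for tracing the master and worker processes, but check the note for the exact level you need:
    expdp parfile=expC2P.par ESTIMATE=STATISTICS TRACE=480300
    The resulting Data Pump trace files are written on the database server, typically under BACKGROUND_DUMP_DEST for the DM/DW processes.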

  • How do I cancel a rman job from command line?

    Hello to all,
    I am having some problems with an RMAN job that is running via OEM-GC. I get an error message regarding the SPFILE and control file being locked because another job is accessing them. I found this in the RMAN reference guide:
    To determine which job is holding the conflicting enqueue:
    1. After you see the first RMAN-08512: waiting for snapshot controlfile enqueue message, start a new SQL*Plus session on the target database:
    % sqlplus sys/sys_pwd@prod1
    2. Execute the following query to determine which job is causing the wait:
    SELECT s.sid, username AS "User", program, module, action, logon_time "Logon", l.*
    FROM v$session s, v$enqueue_lock l
    WHERE l.sid = s.sid and l.type = 'CF' AND l.id1 = 0 and l.id2 = 2;
    You should see output similar to the following (the output in this example has been truncated):
    SID User Program Module Action Logon
    9 SYS rman@h13 (TNS V1-V3) backup full datafile: c1 0000210 STARTED 21-JUN-99
    Solution
    After you have determined which job is creating the enqueue, you can do one of the following:
    * Wait until the job creating the enqueue completes
    * Cancel the current job and restart it once the job creating the enqueue completes
    * Cancel the job creating the enqueue
    So with this in mind: how do I cancel the job from the RMAN command line?
    This is my output by the way:
    SID User PROGRAM MODULE ACTION Logon ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
    475 SYS [email protected] (TNS V1-V3) backup full datafile 0000018 STARTED16 23-APR-09 00000023EB8D488 000000023EB8D4A8
    475 CF 0 2 4 0 113123 0
    Any help will be gratefully received
    Thanks to all that reply

    I used the following to get the spid and killed the process on the OS.
    set linesize 120
    col sid for 999
    col username for a14 trunc
    col osuser for a18 trunc
    col spid for 99990
    col logon_time for a12
    col status for a9 trunc
    col machine for a26 trunc
    col running for a10 trunc
    select s.sid
    , s.username
    , s.osuser
    , s.machine
    , s.status
    , p.spid spid
    , to_char( logon_time, 'Mon dd@hh24:mi') logon_time
    , rtrim (s.module)||decode( nvl(length( rtrim(s.module)),0),0,'',' ')|| upper(s.program) running
    from v$session s
    , v$process p
    where ( p.addr = s.paddr ) and s.type!='BACKGROUND'
    and upper(s.program) not like '%CJQ0%' and s.program is not null and s.username is not null
    order by s.sid;
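    An alternative that stays inside the database, using the SID from the enqueue query above (a sketch; replace 12345 with the serial# the first statement returns):
    -- find the serial# of the RMAN session identified above (SID 475 in this output)
    SELECT sid, serial#, program FROM v$session WHERE sid = 475;
    -- then kill that session
    ALTER SYSTEM KILL SESSION '475,12345' IMMEDIATE;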

  • Export Job credentials

    I created an Export job in Oracle Enterprise Manager (listed in the Jobs Activity page off the main OEM database instance home). The job is apparently "owned" by the system user and I recently changed the password for that user. Now when the job runs, it gives me an ORA-01017: invalid username/password error.
    In the past, the only way I've found to get around this is to delete all instances of the export job and recreate it with the new password for the SYSTEM user. Is there any way to alter the password associated with the job so I don't have to rebuild it?
    Thanks in advance.

    I've tried changing it on the credentials tab for the Job but OEM says "This job does not require any credentials."
    Also, we do use RMAN, but we also use Data Pump export to help rebuild our development environment and as a fallback for recovery if the RMAN backups don't work.

  • How to restart an application

    My fellow WD explorers,
    Can anybody please tell me how to restart a WD application?
    What I want is the following: I am on a certain view within a WD application and I want a button that starts the application again from scratch.
    So, everything should be initialized as if the application was started for the 1st time.
    Thanks in advance for your recommendations.
    Wouter Heuvelmans

    Hi
    The best thing you can do is create an exit outbound plug on your window,
    give it a parameter Url of type string, and pass the URL of your application when
    firing the outbound plug.
    This method is used to get the URL of your application:
    CALL METHOD cl_wd_utilities=>construct_wd_url
      EXPORTING
        application_name = '<your application name>'
      IMPORTING
        out_absolute_url = lv_url.
    /people/thomas.szcs/blog/2006/07/18/basic-concepts-150-url-semantics
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7015b1f9-535c-2910-c8b7-e681fe75aaf8
    Regards
    Abhimanyu L

  • Restarting an aborted background job in CC

    One of our CC Static Text Upload daily jobs is showing a current state of 'abort'. I'm not sure how it got that way, as I did not abort it. I can only assume it ran into a problem when it last tried to run, maybe while server maintenance was being done and the SAP adaptor was turned off.
    Anyway, I need to know how to restart this job and get it running again on schedule. I tried doing a 'disable' and then 'enable', but it still shows 'abort' and is not running.
    We are using GRC 5.2 SP14.
    Thanks.

    Hi Bob,
    Normally you get the "Abort" status when the J2EE engine stops.
    A server restart will stop all currently running jobs but will not affect any jobs that were scheduled to run in the future.
    To get the aborted job to run again, just reschedule it with the "Immediate" option.
    Best Regards,
    Sirish Gullapalli.

  • Scheduled Export Job

    Hi,
    We need to create an export dump file on a daily basis. However, the dump file should only be updated and not created multiple times.
    How can we create a scheduled (Data Pump) export job? We are running 10gR2 on Windows Server 2003.
    Thanks!

    Hello,
    When you say the file should be "updated", do you mean it should be overwritten?
    If yes, one solution could be to write the export command as it should run every night, put it in a .cmd file (as you write, you are on a Windows server), and run that .cmd file as a Scheduled Task in Windows.
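    A minimal sketch of such a .cmd file (the SID, paths, password, and the DUMP_DIR directory object are placeholders; DUMP_DIR is assumed to point at D:\dumps, and the del line is what makes the file get overwritten, since 10gR2 expdp refuses to write over an existing dump file):
    rem nightly_expdp.cmd - run via Windows Scheduled Tasks
    set ORACLE_SID=ORCL
    del D:\dumps\nightly.dmp
    expdp system/password directory=DUMP_DIR dumpfile=nightly.dmp logfile=nightly.log full=y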
    Kind regards

  • Error while Datapump EXPORT

    Hi,
    I scheduled a Data Pump (expdp) job at the OS level. It is showing the error below.
    Can anybody help me with it?
    My OS : SunOS usa0300uz1078 5.10 Generic_144488-17 sun4u sparc SUNW,Sun-Fire-15000
    DB Version : 10.2.0.4.0
    ****** Beginning expdp of fndb1 on Mon Dec 12 05:45:00 EST 2011 ******
    ld.so.1: expdp: fatal: libclntsh.so.10.1: open failed: No such file or directory
    ****** Ending export of fndb1 on Mon Dec 12 05:45:00 EST 2011 ******
    COMPRESSING dump files
    ****** Ending of Compression Mon Dec 12 05:45:00 EST 2011 ******
    Thanks
    Raj

    Hi raj;
    Please see:
    OS Command Job Fails Calling Sqlplus or expdp [ID 1259434.1]
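    That error usually means the OS job runs without the Oracle environment set, so expdp cannot find libclntsh. A sketch of setting the environment at the top of the calling script (the ORACLE_HOME path is a placeholder; the SID comes from the log above):
    # set the Oracle environment before calling expdp
    ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    ORACLE_SID=fndb1
    LD_LIBRARY_PATH=$ORACLE_HOME/lib
    PATH=$ORACLE_HOME/bin:$PATH
    export ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
    # ... then call expdp as before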
    Hope it helps,
    Regards,
    Helios
