DBMS_DATAPUMP.STOP_JOB procedure is not halting the job

Dear All,
We have created an application using .NET through which a user can back up his schema (it fires a Data Pump procedure in the backend). But we have noticed that DBMS_DATAPUMP.STOP_JOB does not actually stop the job once it has been running for more than a few minutes; it stops the job only if it is called within a few minutes of the start time.
This is how I called the Data Pump procedure:
DBMS_DATAPUMP.STOP_JOB(0,1,0)

Post the code you used to start the job. It should be similar to the following, which assigns the job handle to a variable.
declare
  h1 NUMBER;
begin
  h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE',
                           job_name => 'TESTING_DP_JOB', version => 'COMPATIBLE');
  -- ... define filters, add files, start the job ...
end;
In this case the h1 variable holds the handle. If you no longer have the handle, you can find the job in the DBA_DATAPUMP_JOBS view and reacquire a handle with DBMS_DATAPUMP.ATTACH after the job has been started.
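If the original handle is gone (for example, because STOP_JOB is called from a different session than the one that started the export), a handle can be reacquired with DBMS_DATAPUMP.ATTACH. A minimal sketch, assuming the job name from the example above and that the job runs in the caller's schema; note that the first STOP_JOB parameter must be a valid handle, not 0:

```sql
DECLARE
  h1 NUMBER;
BEGIN
  -- Reacquire a handle for the running job (job name and owner are
  -- assumptions taken from the OPEN example above).
  h1 := DBMS_DATAPUMP.ATTACH(job_name  => 'TESTING_DP_JOB',
                             job_owner => USER);
  -- immediate => 1 aborts the worker processes instead of waiting for
  -- an orderly shutdown; keep_master => 0 drops the master table.
  DBMS_DATAPUMP.STOP_JOB(handle => h1, immediate => 1, keep_master => 0);
END;
/
```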

Similar Messages

  • Job Cancelled with an error "Data does not match the job def: Job terminat"

    Dear Friends,
    The following job is with respect to an inbound interface that transfers data into SAP.
    The file mist.txt is picked from the /FI/in directory of the application server and is moved to the /FI/work directory of application server for processing. Once the program ends up without any error, the file is moved to /FI/archive directory.
Below are the steps listed in the job log; no spool is generated for this job and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
1. Job started
2. Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)
3. File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
4. File mist.txt deleted from /data/sap/ARD/interface/FI/in/.
5. File mist.txt read from /data/sap/ARD/interface/FI/work/.
6. PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number: 076)
7. Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class: BD, Message No. 078)
8. Job cancelled after system exception
    ERROR_MESSAGE                                                
Could you please analyse and explain under what circumstances the above error is reported?
I have also heard that this error can be raised because of customization issues in transaction BMV0.
Also note that we can both define and schedule jobs from that transaction, and the corresponding data is stored in table TBICU.
    My Trials
1. Tested uploading an empty file
2. Tested uploading wrong data
3. Tested uploading improper data with a false file structure
But I failed to simulate the above scenario.
    Clarification Required
Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM36/SM37 for scheduling?
    Is the above question valid?
    Edited by: dharmendra gali on Jan 28, 2008 6:06 AM


  • Background Job cancelling with error Data does not match the job definition

    Dear Team,
A background job is getting cancelled when I run it on a periodic schedule, but the same job executes perfectly when I run it manually (repeat scheduling).
    Let me describe the problem clearly.
We have a program which picks up files from an FTP server and posts the documents into SAP. We schedule this program as a daily background job. This job runs perfectly if the files contain no data, but if a file contains data the job is cancelled with the following messages.
Also, the same job executes perfectly when repeat scheduling is done (even for files with data).
    Time     Message text                                                                       Message class Message no. Message type
    03:46:08 Job PREPAID_OCT_APPS2_11: Data does not match the job definition; job terminated        BD           078          E
    03:46:08 Job cancelled after system exception ERROR_MESSAGE                                      00           564          A
    Please help me in resolving this issue.
    Thanks in advance,
    Sai.

Hi,
If the job's program uses any GUI function modules, it cannot run in background mode.

  • Regarding "Data does not match the job definition; job terminated"


    Hi dharmendra
    Good Day
How are you?
I am facing the same problem which you have posted.
By any chance have you found the solution for this?
If so, please let me know the solution.
    Thanks in advance.
    Cheers
    Vallabhaneni

  • Azure Compute Nodes not running the job

I have an on-premise head node. I have joined 10 Azure compute nodes via the cloud service and storage account. I have uploaded the dlls directory to the storage account and synced all the compute nodes using hpcsync. The Azure compute nodes are still not running the job. I see in HPC Job Manager that Cores In Use = 0. How should I resolve this issue?

    Hello,
We are researching the query and will get back to you soon on this.
    I apologize for the inconvenience and appreciate your time and patience in this matter.
    Regards,
    Azam khan

  • Could not execute the job

    Hi,
When I execute the job, a window appears with the message "ERROR: could not execute the job. Error returned was 1.
MESSAGE is: Could not open command file..."
I can't find where it comes from; any suggestions?

When I executed the job today I got the following list of errors:
    13860    15384    REP-100109        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100109        27/05/2014 08:22:10       Cannot save <History info> into the repository. Additional database information: <SQL submitted to ODBC data source
    13860    15384    REP-100109        27/05/2014 08:22:10       <SIGSIRDDB01\SQLSIRDBD> resulted in error <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object
    13860    15384    REP-100109        27/05/2014 08:22:10       'dbo.AL_HISTORY_INFO' in database 'DS_REP' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded
    13860    15384    REP-100109        27/05/2014 08:22:10       files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files
    13860    15384    REP-100109        27/05/2014 08:22:10       in the filegroup.>. The SQL submitted is <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME",
    13860    15384    REP-100109        27/05/2014 08:22:10       "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100109        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100109        27/05/2014 08:22:10       TXT') >.>.
    13860    15384    REP-100112        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100112        27/05/2014 08:22:10       Cannot save <History info> for repository object <>. Additional database information: <Cannot save <History info> into the
    13860    15384    REP-100112        27/05/2014 08:22:10       repository. Additional database information: <SQL submitted to ODBC data source <SIGSIRDDB01\SQLSIRDBD> resulted in error
    13860    15384    REP-100112        27/05/2014 08:22:10       <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object 'dbo.AL_HISTORY_INFO' in database 'DS_REP'
    13860    15384    REP-100112        27/05/2014 08:22:10       because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup,
    13860    15384    REP-100112        27/05/2014 08:22:10       adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.>. The SQL submitted is
    13860    15384    REP-100112        27/05/2014 08:22:10       <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME", "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100112        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100112        27/05/2014 08:22:10       TXT') >.>.>.
    13860    15384    REP-100112        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100112        27/05/2014 08:22:10       Cannot save <History info> for repository object <>. Additional database information: <Cannot save <History info> into the
    13860    15384    REP-100112        27/05/2014 08:22:10       repository. Additional database information: <SQL submitted to ODBC data source <SIGSIRDDB01\SQLSIRDBD> resulted in error
    13860    15384    REP-100112        27/05/2014 08:22:10       <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object 'dbo.AL_HISTORY_INFO' in database 'DS_REP'
    13860    15384    REP-100112        27/05/2014 08:22:10       because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup,
    13860    15384    REP-100112        27/05/2014 08:22:10       adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.>. The SQL submitted is
    13860    15384    REP-100112        27/05/2014 08:22:10       <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME", "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100112        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100112        27/05/2014 08:22:10       TXT') >.>.>.
Thank you.
Sincerely,
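The root cause in this log is that the PRIMARY filegroup of the DS_REP repository database is full. A sketch of the usual server-side fixes the error message itself suggests; the logical file name, path, and sizes below are assumptions to be adjusted (check the actual names with sp_helpfile in DS_REP):

```sql
-- Option 1: enable autogrowth on the existing primary data file
-- (the logical name 'DS_REP' is an assumption).
ALTER DATABASE DS_REP
  MODIFY FILE (NAME = DS_REP, FILEGROWTH = 256MB);

-- Option 2: add another data file to the PRIMARY filegroup
-- (path and size are placeholders).
ALTER DATABASE DS_REP
  ADD FILE (NAME = DS_REP_2, FILENAME = 'F:\Data\DS_REP_2.ndf', SIZE = 1GB)
  TO FILEGROUP [PRIMARY];
```

Freeing disk space or dropping unneeded objects in the filegroup, as the error text says, are the other alternatives.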

  • HT4236 I successfully synched photos from my mac book iphoto to my new iphone 5 (icloud did not do the job).  Now these photos appear on my phone in three places--iphoto, last import and photo library.  Does that mean that they take up three times the mem

I successfully synced photos from my MacBook iPhoto to my new iPhone 5 (iCloud did not do the job). Now these photos appear on my phone in three places: the iPhoto app, Last Import, and Photo Library. Does that mean that they take up three times the memory? Will that affect iCloud? Sorry for the end-user questions!!

Hi, thank you for the reply. I have checked my iPad and iPhone and neither has iCloud Photo Library (Beta) enabled; it is turned off on both. Photo Stream is turned on.
I tried to sort it out by dragging all the photos to Events on the Mac and then deleting them from iCloud (left-hand side of iPhoto under the section 'Shared'). The photos now show up in Events. I did force quit but the issue remains. The message reads 'Photos are being imported to the library. Please wait for import to complete.'
I can't empty the iPhoto trash either. The message reads 'Delete error. Please wait for import to complete.'
When I was moving the photos to Events I always got a message about duplicates, to the effect that the photos already existed; did I want to import them? I clicked Yes, import all duplicates. But when it showed the images, duplicates side by side, one showed the photo and the other was blank.
    I really don't know what to do! And I don't know how to handle my iOS devices. Is it to do with the large number of photos? Any help, advice appreciated.

  • Not clearing the job queue

    Hello everyone,
    We've been experiencing some oddities in our database and we hope to find some help around here. The scenario is as follows:
    Our job queue shows 13 pages of jobs, updated in the last 5 days, in which we find lots of jobs - actually most of them from 3 days ago - with the progress showing "transcoding clip" and the status as "RUN".
    When we try to cancel any of those jobs we get an error message that says: "Error cancelling - this job is currently not running".
    We have tried flushing the event queue from Terminal using the following commands:
    fcsvr_client flushresponsequeue
    fcsvr_run psql px pxdb -c "vacuum analyze verbose;"
but neither of them solved our problem. We cannot clear the event tables in the db (pxevent and pxeventresponse). We even tried a more desperate solution by accessing the db via Navicat and manually deleting each and every row (!!!), visually clearing the tables.
BUT on the next day they reappeared, and on the day after that in double the quantity (i.e. empty table --next day--> 4K rows --next day--> +4K rows, and so on...).
So now we have 18,826 entries in our table, which seems to grow indefinitely (and for no apparent reason).
    We fear that our problem may get worse and cause us bigger problems. Any help would be very appreciated.
    Details of the system: FCS V1.5.2 / Mac OS X (10.5.8)
    Thank you in advance,

    Hey guys, thanks for the replies.
The issues with the "pxevent" and "pxeventresponse" tables seem to have been solved, BUT the issue with the job queue remains the same.
Old (1 day old or more) jobs show status RUN, although when you try to delete one you get an error message saying "error cancelling - this job is currently not running", despite the fact it is shown as running. When you "get info" on any of those, you can see in the logs that it failed and retried, then failed again (on the retry). This condition should change the status from RUN to FAIL, but that is not what's happening.
All this ends up putting some recently added jobs in a WAIT status, which effectively locks my queue (preventing jobs from running, timing them out and, consequently, failing them). If I go to the "pxjobs" table and delete all entries, it resolves the symptom (as one of you said), but the problem doesn't stop happening.
I can force the waiting jobs to run now, but they end up failing too.
Any thoughts on that? I have some screenshots but don't know if I can post them here (newbie on this forum, sorry).
    Thanks for your attention

  • Not executing the job automatically scheduled in dba_scheduler_jobs

    Hi Friends,
I have scheduled a job in dba_scheduler_jobs; it should run at a 15-minute interval, but that is not happening. If I trigger the job manually, e.g. exec dbms_scheduler.run_job('SNAP15MIN_JOB'), it executes successfully.
Please find the info below.
Oracle s/w: Oracle Standard Edition 10.2.0.4
Script used to create the job:
begin
  dbms_scheduler.create_job(
    job_name        => 'snap15min_JOB',
    start_date      => SYSDATE,
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin statspack.snap; end;',
    repeat_interval => 'SYSDATE + 10/1440',
    enabled         => true,
    comments        => 'statspack 15mins job');
end;
    SQL> select JOB_NAME,REPEAT_INTERVAL,STATE from dba_scheduler_jobs;
JOB_NAME        REPEAT_INTERVAL      STATE
SNAP15MIN_JOB   SYSDATE + 10/1440    SCHEDULED
    please help on this .
    Thanks
    Mahesh

    Hi,
Please select start_date, last_start_date and next_run_date for the job.
Also see this post for more tips:
Answers to "Why are my jobs not running?"
    Hope this helps,
    Ravi.
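A sketch of the checks suggested above, using the DBA_SCHEDULER_JOBS and DBA_SCHEDULER_JOB_RUN_DETAILS data dictionary views. Note, incidentally, that 'SYSDATE + 10/1440' repeats every 10 minutes, not 15:

```sql
-- When did the job last start, and when is it next due?
SELECT job_name, state, start_date, last_start_date, next_run_date,
       failure_count
FROM   dba_scheduler_jobs
WHERE  job_name = 'SNAP15MIN_JOB';

-- If runs were attempted, the run log shows why they failed.
SELECT log_date, status, additional_info
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'SNAP15MIN_JOB'
ORDER  BY log_date DESC;
```

If NEXT_RUN_DATE keeps advancing but no run is ever logged, the scheduler itself may be disabled, which the post linked above covers.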

  • EPMA 9.3.1 - Redeploying an application did not finish the job in EPMA

I was redeploying an application in the EPMA library. The refresh completed in Essbase, but the job did not complete in the EPMA console, and we are not able to access the application. The message is "Error - Service was unable to process request--> there is currently no application deployment pending for application "IncStmt". As a result the application export status for the last export cannot be updated. (org.apache.axis.axisfault)". I cannot do anything to the application; this error comes up immediately when you click on it. Any ideas on how I can clear it?

I did try changing the deployment timeout, but in the application library I still cannot redeploy my application. I even tried creating a new application and deploying it, but it didn't deploy either and shows the status as deployment pending. Is there any other way out?
    It shows:
    Detail : Initiating Product Action...
    Inspecting Deployment History...
    Generating Headers and Callback Information...
    Generating Application Data...
    Status : Running
    Progress : 4%
but it has not deployed for a really long time. Please help.
    Edited by: user8667776 on Jul 28, 2009 10:15 PM

  • Compressor does not start the job after submit button clicked.

    Hello All,
    FCP 6.0/ Compressor 3.0.5
I am trying to export a section of an FCP timeline sequence (set with in and out points) through Compressor into a couple of video formats (DVD and a custom H.264). After I set the destination and make sure that the source file is in place, I press the "Submit" button, and again "Submit" in the pop-up window, and nothing happens. The usual window showing job processing does not appear like it usually does. I restarted the programs and the computer with no change. Compressor quit unexpectedly a few times within the same process of initiating the export from FCP. I tried a test export from a different FCP project and the compression does not start there either after pressing "Submit".
I did not have this problem before. I would greatly appreciate any suggestions.
    Best to all,
    Izabela

    Dear ToddNashville,
Thank you so much! This was it! It was processing, but I did not see the process because the "ghost" icons of the newly exported files in the destinations did not appear immediately, as I thought they usually do. Once they are completed or halfway through, they appear.
I had not tried the batch list before. It is so clear as to what's running and how much time is left! I hope all will process fine.
    Thank you!

  • 3G DOES NOT DO THE JOB

I keep my phone on EDGE; 3G is not working for me. The signal shows full strength and 3G, but I actually cannot place or receive any calls. The voice is distorted; I can hardly understand what the other person is saying. The provider says the phone is the problem, not the network, and that if I want the phone replaced I have to go to Apple. Apple says I have to go to Rogers, and the rumba goes on and on.

    The provider is saying that the phone is the problem not the network.
Rogers is lying to you. Several other Rogers customers have posted here that Rogers told them it was indeed the network, and that they were working on it.
You might search these forums and send one of those posters an email to see if they will share the name/location of the Rogers store that told them this, so you could at least go back to your outlet and tell them to call their own company people and get their story straight.

I have a Mac OS X 10.6.7 and Firefox 4. Other browsers easily connect to the net, but not Firefox. It keeps trying to connect to my wireless network but is not doing the job. Please give some advice.

    This happens when I first start the computer. Sometimes it connects to the wireless network, and during the day is fine.

    user8744713 wrote:
    I have downloaded Instant Client libraries (basic, SDK and SQL files, named instantclient-basic-10.2.0.4.0-macosx-x64.zip , instantclient-sqlplus-10.2.0.4.0-macosx-x64.zip and instantclient-sqlplus-10.2.0.4.0-macosx-x64.zip ) to Mac OS X 10.6.2 server and unzipped them as instructed in the manual here: http://download.oracle.com/docs/cd/B19306_01/install.102/e12121/inst_task.htm#BABJGGJH
The manual then instructs you to run the runInstaller command. This command, however, is not recognized by the system (a 'No such file or directory' error is thrown), which in my opinion means one of two things: either the command is misspelled in the manual, or the command file that runs it is missing. I tried it with and without a dot and slash, and in lower-case letters; nothing works.
When I installed similar libraries on Linux I had no problems, but all that was required there was to place the libraries in the proper directory; no command was needed.
Maybe the same can be done here as well, but I am not sure where to place some of the files. I see .h files that should go to /usr/include but see no .so files, which on Linux go to /usr/lib. There are some files with .dylib and other *.lib extensions, but I'm not sure whether they should be copied to /usr/lib or not. And then there are some other files I have no idea what to do with.
Please help!!!
No, you can't just drop a few files into a directory and have a working installation (well, I guess technically you could. But technically you could bail all the water out of Lake Superior with a teacup...)
Sounds like you've covered PATH and current-directory issues. What about permissions? What OS account are you using for this installation? (You should have created an account called "oracle" and made its primary group "dba".) Does that account have execute permissions?
Thanks in advance!
Edited by: user8744713 on Mar 26, 2010 10:17 AM

  • My Adobe ExportPDF Annual Subscription Does Not Do the Job

In between starting the subscription and now, I needed cardiac care, and I am now trying to use the subscription to no avail. Either the text is garbled, or it is in text boxes, or it is not usable. I no longer want the hassle; I will use other means to get the PDF into a usable format. How do I get out of this annual subscription? Deduct a month's service charge if that works, but refund the other portion. Thanks for any help in this process of cancelling.
    MattGZatkalik

    Hi Matt,
    I'm so sorry to hear about your heart troubles. I hope you're on the mend. I've canceled your subscription, and because you canceled within the first 30 days, I was able to process a full refund for you. The transaction ID is AD014270071 for your reference.
    Be well.
    Sara

  • "can not start a job" issue in AWM

Hi all,
I am maintaining my cube from PL/SQL with the following options:
1. buildtype = "BACKGROUND"
2. trackstatus = "true"
3. maxjobqueues = 3
I get the following error when I look at the "olapsys.xml_load_log" table:
***Error Occured: Failed to Build(Refresh) DB_OWNER.MY_ANALYTICAL_WORKSPACE Analytic Workspace. Can not start a job.
Can anybody explain when and why this error occurs? I have wasted a lot of time searching for this issue, but have found no one else facing it.
Hi Keith, it would be great if you could answer this one.
My database version is 10.2.0.4.0 and my AWM version is 10.2.0.3.0.
Kind Regards,
QQ
Message was edited by:
dwh_10g

    Applies to:
    Oracle OLAP - Version: 10.1 to 11.1
    This problem can occur on any platform.
    Symptoms
- We have an AW maintenance/refresh script or procedure that contains BuildType="BACKGROUND", so that the AW maintenance task is sent to the Oracle job queue.
- When we execute the AW maintenance/refresh script or procedure, we do not get any errors in the foreground; the script/procedure executes successfully.
- However, when we look into the build/refresh log (see <Note 351688.1> for details) we see that the maintenance/refresh task failed with:
    13:29:39 Failed to Submit a Job to Build(Refresh) Analytic Workspace <schema>.<AW Name>.
    13:29:39 ***Error Occured in BUILD_DRIVER
- In the generated SQL trace for the session of the user who launches the AW build/refresh script or procedure, we see that an ORA-27486 insufficient privileges error occurred at creation of the job.
We see from the relevant bit of the SQL trace that err=27486 occurred while executing the #20 statement, which is 'begin DBMS_SCHEDULER.CREATE_JOB ...', and that the statement is parsed and executed as the user having uid=91:
    PARSING IN CURSOR #20 len=118 dep=2 uid=91 oct=47 lid=91 tim=1176987702199571
    hv=1976722458 ad='76dd8bcc'
    begin
    DBMS_SCHEDULER.CREATE_JOB(:1, 'plsql_block', :2, 0, null, null, null,
    'DEFAULT_JOB_CLASS', true, true, :3); end;
    END OF STMT
    PARSE
    #20:c=1000,e=1100,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1176987702199561
    EXEC #20:c=65990,e=125465,p=10,cr=1090,cu=3,mis=1,r=0,dep=2,og=1,tim=
    1176987702325167
    ERROR #20:err=27486 tim=465202984
    Cause
    User who tries to create a job (executes DBMS_SCHEDULER.CREATE_JOB() procedure) does not have the sufficient privileges.
    Solution
    1. Identify the user under which the job is supposed to be created. This user is not necessarily the same as the user who launched AW build/refresh script or procedure. Get the corresponding username from one of the %_USERS views e.g. from ALL_USERS.
    e.g.
    SELECT user_id,username FROM all_users WHERE user_id=91;
    2. Identify the system privileges currently assigned to the user by connecting as the user whom the privileges need to be determined, and execute:
    SELECT * FROM SESSION_PRIVS ORDER BY PRIVILEGE;
    3. Ensure that the CREATE JOB privilege is listed.
    The CREATE JOB privilege can be granted in various ways to a user:
    - via role OLAP_USER or via role OLAP_DBA (see view ROLE_SYS_PRIVS for what privs are assigned to a role):
    GRANT OLAP_USER TO <username>;
    - explicitly
    GRANT CREATE JOB TO <username>;
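As a sketch, the role route mentioned above can be checked directly against the ROLE_SYS_PRIVS view before deciding which grant to use:

```sql
-- Which of the OLAP roles carry the CREATE JOB system privilege?
SELECT role, privilege
FROM   role_sys_privs
WHERE  role IN ('OLAP_USER', 'OLAP_DBA')
AND    privilege = 'CREATE JOB';

-- Then either grant a role that carries it, or grant it directly:
--   GRANT OLAP_USER TO <username>;
--   GRANT CREATE JOB TO <username>;
```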
