Scheduled Job: Backup encryption does not finish completely.

Dear Sir/Madam,
           SQL Server 2008 has a scheduled backup-encryption job for two databases, but for several runs it has only backed up one of them.
Access is denied due to a password failure [SQLSTATE 42000] (Error 3279). BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
There is enough space for the backup file.
Please help me solve this problem.
Thanks and best regards,
  Phong Eximbank.

Have you tried using PASSWORD = 'Password123' instead of MEDIAPASSWORD?
Passwords can be used for either media sets or backup sets:
Media set passwords protect all the data saved to that media. The media set password is set when the media header is written; it cannot be altered. If a password is defined for the media set, the password must be supplied to perform any append or restore operation. You will only be able to use the media for SQL Server backup and restore operations; specifying a media set password prevents a Microsoft Windows NT® 4.0 or Windows® 2000 backup from being able to share the media.
Backup set passwords protect only a particular backup set. Different backup set passwords can be used for each backup set on the media. A backup set password is set when the backup set is written to the media. If a password is defined for the backup set, the password must be supplied to perform any restore of that backup set.
https://technet.microsoft.com/en-us/library/aa173504%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
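For example, a minimal T-SQL sketch for SQL Server 2008 (database name, path, and passwords are placeholders; adjust for your job). If the backup file (the media) was originally written with a MEDIAPASSWORD, every later append or restore must also supply that same media password, which is one common cause of error 3279 above:

-- Backup protected by a backup set password only (placeholder names and path)
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backup\YourDatabase.bak'
WITH PASSWORD = 'Password123', INIT;

-- Restore must supply the same backup set password
RESTORE DATABASE [YourDatabase]
FROM DISK = N'D:\Backup\YourDatabase.bak'
WITH PASSWORD = 'Password123', REPLACE;

-- If the media itself was created with MEDIAPASSWORD, appends and restores
-- must supply that media password in addition to any backup set password
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backup\YourDatabase.bak'
WITH MEDIAPASSWORD = 'MediaPassword123', PASSWORD = 'Password123', NOINIT;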

Similar Messages

  • NT Backup fails when run as an automatically scheduled job

    Hi,
    I am experiencing one strange thing: I scheduled the Windows NT Backup tool to back up onto my IBM Ultrium tape drive (800 GB).
    Sometimes the scheduled backup completes automatically, and sometimes when I check it shows nothing; I then start the job manually and it completes.
    My current configuration is Windows Server 2003 Standard Edition with the IBM tape drive.
    I have even set the unmanaged-media option (/um) in the script so that, if an existing backup is found, the new day's backup is appended to it.
    Regards,
    Lalit Kumar Soni

    Hi,
    Is there any error message in the Event Log? You could try installing the hotfix in the KB article below to resolve the issue.
    A scheduled backup does not run after you reschedule the backup by using NTBackup.exe in Windows Server 2003
    https://support.microsoft.com/en-us/kb/902389?wa=wsignin1.0
    Best Regards,
    Mandy 

  • Automated Monitoring Scheduled Job not completing - PC

    Hello Experts,
    When we schedule an automated monitoring job, it does not execute completely and the status is shown as "In Progress". Also, when we open the "Job Step Log" tab of the scheduled job in Automated Monitoring, it is blank.
    What could be the reason for this?
    Regards,
    Ramakrishna Chaitanya

    Hi,
    Try with the SOX export role. Are you running in asynchronous mode?

  • Is there any way to back up Azure SQL regularly with a scheduled job?

    Is there any way to back up Azure SQL regularly with a scheduled job?

    There's really no equivalent of a SQL Server-style backup that you schedule using SQL Agent; however, you can have scheduled exports, see this link.
    Note that Azure SQL Database also has built-in backups for self-service restores, see this link.
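    Not from the reply above, but as a hedged T-SQL sketch of one option you can drive from your own scheduler: create a transactionally consistent copy of the database, which can then be exported or kept as a snapshot copy (the database names are hypothetical placeholders):

    -- Run against the master database of the Azure SQL logical server
    CREATE DATABASE MyDb_copy_20150101 AS COPY OF MyDb;

    -- Check progress of the copy
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = 'MyDb_copy_20150101';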

  • Waiting for scheduling job to complete. Job ID...and it never completes

    I have created my first Publication. It's a pretty basic single source document, single Dynamic Recipients list publication.
    When I run it in Test mode (I haven't tried it in non-Test mode yet), the document(s) are never delivered.
    When I view the Log File, everything looks fine. The second and last entry in the log says
    "- The global delivery rule for this publication was met; publication processing will now begin."
    The Status Message I see in the Publication History is:
    "Waiting for scheduling job to complete. Job ID:3,879, name:Unviewed Invoices Report (Copy), kind:CrystalReport in Pending state (FBE 60509) [0 recipients processed.] "  and it never changes. I've tried this several times, creating new publications with the same source documents, and the same thing always happens.
    Any help would be appreciated.

    All of the servers were configured correctly.
    It turned out to be my source document, a report, had some field in it that BOE didn't like.
    I removed a bunch of fields and added them back, one by one, and couldn't make the error occur again.
    Go figure.

  • Scheduled jobs are not running in DPM 2012 R2

    Hi,
    I recently upgraded my DPM 2012 SP1 to 2012 R2. The upgrade went well, but I got 'Connection to the DPM service has been lost' (event ID 917, plus other event IDs such as 999 and 997 in the event log). A few DPM backups succeed, but most of the DPM backup consistency checks fail.
    After investigating the log files I found two SQL Server services running on the DPM 2012 R2 server, the 'SQL Server 2010' and 'SQL Server 2012' services. I stopped the SQL 2010 service and started only the SQL Server 2012 service (using .\MICROSOFT$DPM$Acct).
    Now the DPM console issue (event ID 917) has gone, but a new issue has occurred: none of the scheduled jobs run, although I can run all backups manually without any issue. I am getting the event log errors below.
    Log Name:      Application
    Source:        SQLAgent$MSDPM2012
    Date:          7/20/2014 4:00:01 AM
    Event ID:      208
    Task Category: Job Engine
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      
    Description:
    SQL Server Scheduled Job '7531f5a5-96a9-4f75-97fe-4008ad3c70a8' (0xD873C2CCAF984A4BB6C18484169007A6) - Status: Failed - Invoked on: 2014-07-20 04:00:00 - Message: The job failed.  The Job was invoked by Schedule 443 (Schedule 1).  The last step to
    run was step 1 (Default JobStep).
     Description:
    Fault bucket , type 0
    Event Name: DPMException
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: TriggerJob
    P2: 4.2.1205.0
    P3: TriggerJob.exe
    P4: 4.2.1205.0
    P5: System.UnauthorizedAccessException
    P6: System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal
    P7: 33431035
    P8: 
    P9: 
    P10: 
    Log Name:      Application
    Source:        MSDPM
    Date:          7/20/2014 4:00:01 AM
    Event ID:      976
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      
    Description:
    The description for Event ID 976 from source MSDPM cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event: 
    The DPM job failed because it could not contact the DPM engine.
    Problem Details:
    <JobTriggerFailed><__System><ID>9</ID><Seq>0</Seq><TimeCreated>7/20/2014 8:00:01 AM</TimeCreated><Source>TriggerJob.cs</Source><Line>76</Line><HasError>True</HasError></__System><Tags><JobSchedule
    /></Tags></JobTriggerFailed>
    the message resource is present but the message is not found in the string/message table
    Please help me resolve this error.
    jacob

    Hi,
    I would try to reinstall DPM:
    Back up the DPM database
    Uninstall DPM
    Install the same DPM version as before
    Restore the DPM database
    Run dpmsync.exe -sync
    Finished
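    Before reinstalling, it may also be worth looking at the failing SQL Agent job's history in msdb on the DPM SQL instance (a hedged diagnostic sketch; the job name GUID is the one from the event log above):

    -- Run on the SQL Server instance that hosts the DPM database
    SELECT TOP (20)
           j.name        AS job_name,
           h.run_date,
           h.run_time,
           h.step_id,
           h.step_name,
           h.run_status,          -- 0 = failed, 1 = succeeded
           h.message
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h
      ON h.job_id = j.job_id
    WHERE j.name = '7531f5a5-96a9-4f75-97fe-4008ad3c70a8'
    ORDER BY h.instance_id DESC;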
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • How can I see all scheduled jobs which have steps of a given user?

    Hello,
    I would like to see all scheduled jobs which have steps for a given user. It is not important which user planned the job; I just want to check the usernames that are used for executing the respective steps (field "AUTHCKNAM").
    In table TBTCP I can see the AUTHCKNAM for the job steps, but this table only contains the jobs that are in the state "finished/completed".
    In table TBTCS I can see the scheduled jobs, but there I can't see the AUTHCKNAM.
    Do you know a table where I can see ALL jobs, or at least the scheduled ones, together with the AUTHCKNAM?
    Thanks for your help!
    Kind Regards
    Lisa

    Hi,
    thanks for your answers.
    I also tried the table TBTCO. There I can see all jobs, but there aren't any entries in the column "AUTHCKNAM"; it is empty.
    Maybe there is another table?
    Kind Regards,
    Lisa
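    Not from the thread, but a hedged sketch of one usual approach, assuming the standard SAP background-job tables named above: join the job headers in TBTCO (which carry the status for all jobs) to the step table TBTCP (which carries AUTHCKNAM) on job name and job count:

    -- Hedged sketch; table and field names are assumed from the thread
    -- (TBTCO = job headers with status, TBTCP = job steps with AUTHCKNAM)
    SELECT o.JOBNAME,
           o.JOBCOUNT,
           o.STATUS,          -- e.g. P = scheduled, S = released, F = finished
           p.STEPCOUNT,
           p.AUTHCKNAM        -- user under whose authorizations the step runs
    FROM   TBTCO AS o
    JOIN   TBTCP AS p
      ON   p.JOBNAME  = o.JOBNAME
     AND   p.JOBCOUNT = o.JOBCOUNT
    WHERE  p.AUTHCKNAM = 'SOMEUSER';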

  • DPM 2010 Scheduled Jobs Disappear rather than Run

    I have a situation where I have a DPM server that appears to be functioning fine, but none of the scheduled jobs run.  No errors are given, there are no Alerts, and there is nothing in the Event log (Application and System) which indicates a failure. 
    All my Protection Groups show a green tick to indicate that they are fine, but the last successful backup for all of them is Friday the 8th of February.
    If I go to Monitoring and Jobs I see the jobs scheduled, but when the time comes for a job to run, it does not go into "All jobs in progress"; it simply disappears from the queue a few minutes later, and the total number of jobs decreases accordingly. These jobs do not go into any of the other three statuses (Completed, Failed or In Progress); they just disappear without a trace.
    There is some unallocated space, albeit not much (used space: 21 155,05 GB, unallocated space: 469,16 GB). If space were an issue I would expect to see errors indicating this.
    DPM 2010 is running version 3.0.8193.0 (hotfix rollup package 6) using a remote instance of SQL 2008 which is functioning fine. I have tried stopping/starting the services, and even rebooted the server twice. The remote instance of SQL Server uses a domain account as its service account. There are no pending Windows updates, i.e. it is fully up to date.
    The System Center Data Protection Manager 2010 Troubleshooting Guide (July 2010) does not show how to troubleshoot this particular problem.
    Does anybody know how to resolve this issue or which logs might help me troubleshoot it?

    OK,
    Did you change the SQL Agent user account ?
    If so, DPM enters the SQL Agent account name into the registry and later we check that account each time the DPM engine launches.  The internal interfaces to DPM are secured using this account so the account name needs to match the account the SQL Agent
    is using. 
    Step 1
    In the registry HKLM\Software\Microsoft\Microsoft Data Protection Manager\Setup alter  both
    SqlAgentAccountName and SchedulerJobOwnerName keys to reflect the SQL Agent user account being used.
    Step 2
    Update DCOM launch and access permissions to match what was granted to the Microsoft$DPM$Acct account.
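    Not part of the steps above, but if you also want to double-check the account side from SQL itself, a hedged sketch that lists the owner of each SQL Agent job on the remote SQL instance (for DPM these should line up with the account recorded in SchedulerJobOwnerName):

    -- Shows which login owns each SQL Agent job
    SELECT j.name                    AS job_name,
           SUSER_SNAME(j.owner_sid)  AS job_owner,
           j.enabled
    FROM msdb.dbo.sysjobs AS j
    ORDER BY j.name;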
    Regards, Mike J. [MSFT]

  • ODI 11g Scheduled job stays in running state

    Hello All ,
    In ODI Operator, in the integration process, there are 3 scenarios scheduled.
    When the first completes it switches to the second, but the next step/process keeps running and never comes to an end, even though the corresponding back-end activity (which the scenario is supposed to perform) has completed.
    This ultimately results in a timeout.
    Please suggest a solution for this.
    Thanks in advance ,
    ABHIJEET

    Hi,
    Actually your syntax is exactly right for running a single run-immediate job.
    It looks like the slave running your job may have crashed or terminated unexpectedly. You should do a stop_job (force => true) on your job, and you should look through your job slave trace files for reasons why your job slave did not finish the job. You can also try grepping your job slave traces for the job name 'TEST_JOB' to find the right trace.
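    For reference, a minimal sketch of the forced stop mentioned above ('TEST_JOB' is the example job name used in this reply; substitute your own job name):

    -- Force-stop the apparently hung scheduler job so it can be investigated and rerun
    BEGIN
      DBMS_SCHEDULER.STOP_JOB(job_name => 'TEST_JOB', force => TRUE);
    END;
    /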
    Hope this helps,
    Ravi.

  • Scheduling jobs in SM36

    Hi All,
    I have a requirement to schedule job C in such a way that it runs after both job A and job B have completed successfully.
    Jobs A and B are independent of each other and have to be scheduled at the same time.
    In SM36 there is no option to set a start condition for job C such that it runs after both A and B.
    One can only specify one job as the preceding job, not two.
    Please help.

    Hello,
    I'm having a similar problem. I've managed to plan several jobs in sequence.
    12:00:00 - Job1 Starts - its configured to run everyday at the same time
    Job2 starts after Job1 finish
    Job3 starts after Job2 finish
    Job4 starts after Job3 finish
    The first time Job1 runs, all the other jobs run OK.
    However, the second time only Job1 runs. After noticing this malfunction I could not find the scheduling of Job2, Job3 and Job4 in the system; they have simply disappeared.
    Is there any logical explanation for this?
    I've been searching and testing, but I haven't found any explanation for this behaviour.
    Can anyone help me?
    Maybe I'm doing something wrong, but I can't see what.
    Regards,
    Rui Romba

  • DS runs longer as a scheduled job compared to a manual run

    I have scheduled a job through the Management Console (MC) to run once every day at a certain time. After some time, maybe after 15 days of running, the execution time increased from 17 minutes to 67 minutes in a single jump. After that, the job kept taking 67 minutes to complete.
    The job generates around 400 output flat files from a source DB2 table. At the efficient running time, one file took around 2 seconds to generate; now it takes 8 seconds per file. The data volume and nature of the source table didn't change, so that is not the root cause of the increased time.
    I have done several investigations, with these results:
    1) I scheduled this job again in MC to run as a test; it took 67 minutes to complete. However, if I run the job manually through MC, it takes the efficient 17 minutes.
    2) I replicated this job to a second, copied job. When I scheduled the copied job in MC it took 67 minutes to run, but running it manually through MC took 17 minutes.
    3) I created another test repository and loaded this job into it. Scheduled in the new repository, the job took 67 minutes to run; run manually through MC, it took only 17 minutes.
    4) Finally, I executed the job manually through the Unix job script command that is the scheduled job entry in the cron file, such as ./DI__4c553b0d_6fe5_4083_8655_11cb0fe230f4_2_r_3_w_n_6_40.sh; the job also took 17 minutes to finish.
    5) I recreated the repository to make it clean, reloaded the jobs and recreated the schedule. It still took 67 minutes to run the scheduled job.
    Therefore the question is: why does the job take longer to run when scheduled than when run manually?
    Please suggest a way to troubleshoot this problem. Thank you.
    OS: HP-UX 11.31
    DS: BusinessObjects Data Services 12.1.1.0
    Database: DB2 9.1

    Yesterday we did another test and indirectly made the problem go away. We changed the generated output flat file directory from the current /fdminst/cmbc/fdm_d/bds/gl to the /fdminst/cmbc/fdm_d/bds/config directory, to see whether it would make any difference. We changed the directory in the Substitution Parameter Configurations window. Surprisingly, the job started to run fast and completed in 15 minutes instead of 67 minutes.
    Then we pointed the output directory back to the original /fdminst/cmbc/fdm_d/bds/gl, and the job has run fast ever since, completing in 15 minutes. Even an ad hoc schedule we created runs fast.
    We are not sure why shifting the directory away and back solved the problem, or whether it had to do with BODS or with the HP-UX environment. Nonetheless, the job now runs normally and fast in our tests.

  • All scheduled jobs started suddenly to fail with ORA-01031: insufficient privileges

    I have a setup of Grid Control 11g on Windows 2008. It had been running successfully for weeks until the other day, when all my scheduled jobs started to fail with:
    ORA-01031: insufficient privileges (RMAN jobs)
    ERROR: Invalid username and/or password (SQL scripts).
    I checked for the accounts being locked, passwords being correct, expired/in grace period, etc.
    Everything is fine when I try from the command line. Running an RMAN backup from the command line works great, but the same one through the job scheduler fails with ORA-01031.
    I've dropped the jobs completely and recreated them. Same thing.
    I'm using preferred credentials, and I dropped and recreated them too. Same thing.
    I don't know where else to look. Only Grid Control scheduled jobs fail, but all of them do.
    I'm using SQLNET.AUTHENTICATION_SERVICES = (NTS) in my sqlnet.ora (always have).
    We use a Windows domain server / domain authentication for logging into the boxes. I haven't changed any of my passwords.
    I am probably looking in the wrong places. Is anybody able to help?

    Hi,
    "ERROR: Invalid username and/or password" (SQL scripts): this error message clearly says that a password is incorrect, and "insufficient privileges" suggests that perhaps you are using the SYS user to run the jobs and the password for SYS is wrong.
    If you are specifying passwords in your scripts, I don't think you need to set preferred credentials.
    Try the following:
    1) Remove the preferred credentials.
    2) Don't check the SYS password by logging in locally (because if OS authentication is on, even a wrong password will log you into the database and you will think the password is correct). Try connecting to the database as SYS from a remote machine, check whether it accepts your password, and make sure you use the same password for your jobs and in the SQL scripts.
    3) If the problem persists, just as a test, remove SQLNET.AUTHENTICATION_SERVICES = (NTS) and try again.
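    As a hedged companion to step 2 above (not part of the original reply), you can also confirm that SYS is registered in the password file for remote SYSDBA logins; the connect string below is a hypothetical placeholder:

    -- From a remote client (not the database host), e.g.:
    --   sqlplus sys@GRIDDB as sysdba
    -- Once connected, list the users registered in the password file:
    SELECT username, sysdba, sysoper FROM v$pwfile_users;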
    Salman

  • Trouble getting a scheduler job to run?

    Hi all friends:
    I'm having trouble getting any scheduler jobs (here, the job in question is CUSTMASTER_CHANGES_01) to actually run.
    When I run:
    sql>select job_name,state,enabled,retry_count,failure_count,run_count,restartable,start_date,repeat_interval,job_class
    from all_scheduler_jobs;
    JOB_NAME:        CUSTMASTER_CHANGES_01
    STATE:           SCHEDULED
    ENABLED:         TRUE
    RETRY_COUNT:     0
    FAILURE_COUNT:   0
    RUN_COUNT:       0
    RESTARTABLE:     FALSE
    START_DATE:      14-JAN-08 09.46.14.672965 AM AMERICA/NEW_YORK
    REPEAT_INTERVAL: FREQ=SECONDLY;INTERVAL=5
    JOB_CLASS:       SCANNER_JOB_CLASS
    for job 'CUSTMASTER_CHANGES_01'
    we can see RUN_COUNT 0 and restartable false.
    I upped slave processes to 5. dbms_scheduler.run_job('CUSTMASTER_CHANGES_01') works, but it's still not executing on the schedule
    when run as sysdba
    SQL> exec dbms_scheduler.run_job('CUSTMASTER_CHANGES_01');
    ERROR at line 1:
    ORA-27475: "SYS.CUSTMASTER_CHANGES_01" must be a job
    ORA-06512: at "SYS.DBMS_ISCHED", line 150
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
    ORA-06512: at line 1
    To resolve that, we found that when you create your job using dbms_scheduler, it has a parameter called 'auto_drop' which defaults to TRUE (auto_drop => TRUE); see below:
    dbms_scheduler.create_job(
    job_name IN VARCHAR2,
    job_type IN VARCHAR2,
    job_action IN VARCHAR2,
    number_of_arguments IN PLS_INTEGER DEFAULT 0,
    start_date IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
    repeat_interval IN VARCHAR2 DEFAULT NULL,
    end_date IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
    job_class IN VARCHAR2 DEFAULT 'DEFAULT_JOB_CLASS',
    enabled IN BOOLEAN DEFAULT FALSE,
    auto_drop IN BOOLEAN DEFAULT TRUE,
    comments IN VARCHAR2 DEFAULT NULL);
    So a job that you manually run once drops itself when you test it.
    Therefore I am specifying auto_drop as FALSE, but the attribute still shows as TRUE:
    DBMS_SCHEDULER.CREATE_JOB(
      job_name        => scanner.scanner_name,
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN IF EEG_SCAN.GET_RUNNING_JOBS_COUNT('
                         ||''''||UPPER(scanner.scanner_name)||''''
                         ||') < 2 THEN '||scanner.scanner_proc_name||'; END IF; END;',
      repeat_interval => 'FREQ=SECONDLY;INTERVAL=5',
      job_class       => c_job_class_name,
      auto_drop       => FALSE,
      enabled         => TRUE
    );
    The scheduler job is still not working as expected. Can you help me with this?
    Thanks a lot in advance.

    Hi,
    There are a few other limits you could check .
    Make sure that you have not exceeded the maximum number of sessions or the maximum number of processes or the maximum number of scheduler jobs
    select * from dba_scheduler_global_attribute;
    and
    select name,value from v$parameter where name like '%process%';
    select name,value from v$parameter where name like '%session%';
    Also check how many jobs are currently running
    select count(*) from dba_scheduler_running_jobs;
    select count(*) from dba_jobs_running ;
    select count(*) from v$session ;
    One of these limits may need to be increased.
    The run_job call succeeds because it runs in the current session by default; if you use use_current_session => FALSE, does it still work?
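    To test that point, a minimal sketch using the job name from this thread:

    -- Run the job through a scheduler slave process instead of the current session,
    -- which exercises the same path as the scheduled runs
    BEGIN
      DBMS_SCHEDULER.RUN_JOB(job_name            => 'CUSTMASTER_CHANGES_01',
                             use_current_session => FALSE);
    END;
    /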
    Also auto_drop only drops the job when it has completed e.g. past its end_date or exceeded its max_runs.
    Finally note that there is a dedicated forum for dbms_scheduler located here
    Scheduler
    Hope this helps,
    Ravi.

  • Checking status of scheduled jobs in Hyperion System 9.3.1 ....

    Hi All,
    We have Hyperion System 9 9.3.3. We have a Daily job scheduled every morning to check the status of the scheduled jobs that run daily. One of the jobs has a status message as below.
    Failure - Internal error.ContainerCache.populate: The object "0000011386a6823b-0000-a99f-0a103a4a" is not found. It either does not exists, or it may be inaccessible.
    But when I check the job status in View Job Status > Job Scheduler module, I see the report is still running. It's just that this report is taking longer to run and completes some time after the Daily Job Status check report runs.
    In this case, shouldn't the status say something like "the job is still running" instead of "Failure", which is misleading?
    Thanks in advance.
    Z

    Hi,
    I have the same problem with Oracle EPM 11.1.2:
    com.sqribe.transformer.ObjectNotFoundException: ContainerCache.populate: The object "0000012f0710ca12-0000-a4ad-c0a80003" is not found. It either does not exists, or it may be inaccessible.
    Does anyone know the right answer?
    Thanks in advance, Ron

  • OIM 11g - Error while creating a new scheduler job

    Hi Experts,
    I am trying to create a new scheduled job. I imported the job by using the 'weblogicImportMetadata.sh'.
    But when I click on the task lookup while creating a job using UI, I am getting the following error in the logs. Please let me know if anyone faced this error before and how can it be resolved.
    <Oct 22, 2012 1:59:22 PM EDT> <Error> <oracle.adfinternal.view.faces.config.rich.RegistrationConfigurator> <BEA-000000> <ADF_FACES-60096:Server Exception during PPR, #6
    javax.servlet.ServletException: java.lang.AssertionError
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:341)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.help.web.rich.OHWFilter.doFilter(Unknown Source)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:205)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:106)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:447)
    at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:447)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:271)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:177)
    at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.iam.platform.auth.web.PwdMgmtNavigationFilter.doFilter(PwdMgmtNavigationFilter.java:122)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.iam.platform.auth.web.OIMAuthContextFilter.doFilter(OIMAuthContextFilter.java:109)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:176)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused By: java.lang.AssertionError
    at org.apache.myfaces.trinidad.component.ChildArrayList.__removeFromParent(ChildArrayList.java:191)
    at org.apache.myfaces.trinidad.component.ChildArrayList.add(ChildArrayList.java:53)
    at org.apache.myfaces.trinidad.component.ChildArrayList.add(ChildArrayList.java:69)
    at org.apache.myfaces.trinidad.component.ChildArrayList.add(ChildArrayList.java:33)
    at oracle.iam.consoles.faces.render.canonic.UIValue$UIEntitySelector.search(UIValue.java:1670)
    at oracle.iam.consoles.faces.render.canonic.UIValue$UIEntitySelector.access$2400(UIValue.java:1467)
    at oracle.iam.consoles.faces.render.canonic.UIValue$EntitySelectorQueryListener.processQuery(UIValue.java:1787)
    at oracle.adf.view.rich.event.QueryEvent.processListener(QueryEvent.java:67)
    at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcast(UIXComponentBase.java:675)
    at oracle.adf.view.rich.component.UIXQuery.broadcast(UIXQuery.java:108)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:92)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
    at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:102)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:93)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
    at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:96)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:902)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:313)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:186)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.help.web.rich.OHWFilter.doFilter(Unknown Source)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:205)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:106)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:447)
    at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:447)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:271)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:177)
    at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.iam.platform.auth.web.PwdMgmtNavigationFilter.doFilter(PwdMgmtNavigationFilter.java:122)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.iam.platform.auth.web.OIMAuthContextFilter.doFilter(OIMAuthContextFilter.java:109)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:176)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)

    I am afraid you might have made some undesirable changes in /db/tasks.xml, and don't go via the plug-in route. Instead, please restore the backup of /db/tasks.xml.
    The better alternative would be:
    (1) Create a separate /db/ABCDMyCustomScheduler.xml (remember the /db part; the folder should be, for example,
    /home/oracle/Oracle/Middleware/MYPROJECTMDSFILES/import/db/ABCDMyCustomScheduler.xml).
    Then in weblogicImportMetadata.sh, the import path should go only up to:
    /home/oracle/Oracle/Middleware/MYPROJECTMDSFILES/import
    If the metadata path does not begin with /db, the job will not appear when you try to create a new job for it via the web GUI.
    (2) Upload the scheduler task JAR (via the upload-jars utility).
    (3) Restart, and so on.
