Compression terminated since no scheduled job/worklist RSCOMP1 was found
We are facing a strange problem when compressing the profitability analysis cube. The job fails with the message "Compression terminated since no scheduled job/worklist RSCOMP1 was found". The same error appears even after repeated attempts.
We are currently on BI 7.0. This is the first time we are trying to compress this cube after the upgrade from 3.1C. Compression works fine for cubes in areas like logistics, finance, etc. We have tried various remedies, but in vain.
If someone has faced and solved this problem, could you share the solution with me?
Hi,
See the following OSS Notes
Note 962144 - Termination occurs during condensing of an aggregate
Note 985774 - Compressing non-cumulative cubes creates incorr ref points
Note 998450 - Adjustments for condenser in Support Package 11
Note 995967 - Problem with delta change run - condensing
Note: Delete the indexes, recreate the indexes, and then compress the cube.
Thanks
Reddy
Similar Messages
-
Scheduled job throws Class Not Found error when executing Java class
Hi,
I have written a java class to carry out a file upload to an external site and put the class on our server.
I also wrote a script to call this Java class, passing in the relevant parameters.
If I call the script using the ./ syntax from SSH, it runs fine logged in as root and as oracle.
I then set up a scheduled job to call this script, but the job fails with the error...
STANDARD_ERROR="Exception in thread "main" java.lang.NoClassDefFoundError: HttpsFileUpload Caused by: java.lang.ClassNotFoundException: HttpsFileUpload at java.net.URLClassLoader$1.run(URLClassLoader.java:"
I cannot understand why it raises this error when it runs fine from SSH.
O/S = Red Hat Enterprise Linux ES, oracle version = 10.2.0.1.0
Any help or guidance would be appreciated
Thank you in advance
Graham.
Edited by: gpc on Feb 4, 2009 12:46 PM
Hi,
See this link for some tips if you haven't yet :
Guide to External Jobs on 10g with dbms_scheduler e.g. scripts,batch files
I can think of two things. Your script may not be able to run as the nobody user (by default external jobs run as the nobody user). Or your script might require that some environment variable be set (by default environment variables are not set in external jobs).
Maybe you need to set the CLASSPATH variable in your script before calling java ?
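To illustrate the environment point (nothing Oracle-specific here, and the paths are hypothetical): an external job starts with a near-empty environment, which is easy to demonstrate, and the remedy is to export what the JVM needs inside the script itself rather than relying on the login profile:

#!/bin/sh
# Show why a script that works from SSH can fail as an external job:
# the job runs with an almost empty environment, so CLASSPATH is unset.
job_classpath=$(env -i sh -c 'printf %s "$CLASSPATH"')
echo "external-job CLASSPATH: '$job_classpath'"

# Remedy (illustrative paths): set the environment in the script itself
# before invoking java, instead of relying on the login profile.
JAVA_HOME=${JAVA_HOME:-/usr/java/default}
CLASSPATH=/home/oracle/upload:/home/oracle/upload/lib
export JAVA_HOME CLASSPATH
echo "script-set CLASSPATH: $CLASSPATH"

The same reasoning applies to the nobody-user point above: test the script as the user the job actually runs under, not as root or oracle.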
Hope this helps,
Ravi. -
Enabling Recovery and scheduled jobs
We have a job that is scheduled to run every three hours and we have it set up to enable recovery. The job has about 10 dataflows in it. If the job fails and is not corrected should the next run of it in three hours pick up from the point it failed or would it start at the beginning? It appears to be starting again at the beginning.
In some cases the job is not actually failing (i.e. no red X), but we get a warning even though the trace log message is "Process terminated because its parent had died.". There is no error log, but the job does stop. I am wondering whether, because the system does not see it as an error, the next run just starts fresh instead of recovering.
Arun - We tried the following test.
A test job has 3 data flows, of which the 2nd data flow does not work because it has an authorization issue.
We scheduled the test job with normal workflows (no recovery unit set) and the execution parameter 'Enable Recovery' set, repeating every 7 minutes.
The first run, as per the schedule, failed due to the 2nd data flow issue.
The second run, as per the schedule (the 2nd data flow having been corrected), recovered and started from the failed step, i.e. the 2nd data flow.
The third and fourth runs, as per the schedule (7 and 14 minutes later), executed no data flows at all.
It looks like it uses the recovery information from the 2nd run (which completed the first run's work successfully) and then runs nothing. I would have expected the job to run from the first data flow, since the prior run had completed successfully.
Any thoughts on how we can set this up in an automated fashion so that if a run is recovered and completes successfully the next run will start from the beginning? -
Schedule Jobs with multiple steps via ABAP Program
Hi,
I need to schedule multiple programs via background jobs on a daily basis. Since all these jobs are to be run as a single job, the various programs have to be run as steps in a major job.
I am however not very clear on how this can be done via an ABAP program ( the idea of a program is that various parameters to be passed to each program in the step can be entered dynamically rather than via variants).
I am using the JOB_OPEN and JOB_CLOSE functions and submitting the programs with selection screen parameters dynamically passed to create a job. I need to do this for various programs as a job step (WITHOUT Variants being defined).
Can anyone suggest any ideas for this? I have tried the JOB_SUBMIT function, but I am not confident I know exactly what it is for, as it asks for a variant.
Thanks very much,
Preet
Hi Preet,
just to be sure: you know that variants can be dynamic, too?
It's quite common to assign the current date dynamically, but it's also possible to add/subtract values and even define your own functionality.
Maybe it's easier to implement a dynamic selection and keep the jobs static.
If you try to plan a job (online or with JOB_SUBMIT), you have to use variants - you can create (or change) them dynamically beforehand. Only SE38, F8, F9 creates a temporary variant, so that no saved variant is necessary.
But if you end up creating variants dynamically, you can just as well change one existing variant. Then you can use a static job definition (with a periodic start rule).
So: check whether dynamic variants are enough; otherwise, change the variant per job.
Regards,
Christian -
Error when scheduling job (JOB_SUBMIT) when execute PC in WAD
Dear BIers,
I execute a process chain from a command button in WAD; however, I get the error message:
Job BI_PROCESS_DTP_LOAD could not be scheduled. Termination with returncode 8
Return code '8' means 'Error when scheduling job (JOB_SUBMIT)'.
Any suggestions are appreciated.
B.R
Gerald
Dear Raju,
Have you checked SM21? Maybe you have a problem with your TemSe object. If this is the case a basis guy should know how to handle it.
Greetings,
Stefan -
Scheduled jobs do not run as expected after upgrading to 10.2.0.3 or 4
FYI, we had a ticket open for several days because our scheduled jobs (dbms_scheduler) would no longer run as scheduled after an upgrade to 10.2.0.4 on HP-UX. We couldn't find the solution by searching Metalink, nor did I find it here or via Google - obviously I wasn't searching correctly. There is a note that references a set of steps that appears to have resolved our problem. I am posting it here so that, if you encounter the same difficulty, you may come across this note earlier in your troubleshooting rather than later. The full title of the note is 'Scheduled jobs do not run as expected after upgrading to either 10.2.0.3 or 10.2.0.4'. The Doc ID is 731678.1.
Thanks - our ticket should be getting closed out (our DBA will be updating it); the scheduler has been running reliably since we took the steps in the doc mentioned.
-
Request vs Actual Start Date in Scheduler Jobs
Hi, I want to run a PL/SQL block, all days, at 1:00 AM
I've defined the following scheduler job, in a new (and empty) Job Class:
<font face="Courier New">
SQL> select job_name, repeat_interval, job_class, job_priority from dba_scheduler_jobs where job_class = 'MY_NOCHE_JOB_CLASS';
JOB_NAME...........: UNIFICA_CLIENTES
REPEAT_INTERVAL....: FREQ=DAILY;BYHOUR=1;BYMINUTE=0;BYSECOND=0
JOB_CLASS..........: MY_NOCHE_JOB_CLASS
JOB_PRIORITY.......: 1
</font>
However, the process is starting later, for example:
<font face="Courier New">
Request Start Date....: Nov 10, 2010 1:00:00 AM -02:00
Actual Start Date.....: Nov 10, 2010 4:06:33 AM -02:00
</font>
Why is there a difference between the Request Start Date and the Actual Start Date?
Is there any way to enforce that the job starts on time?
OS: Solaris 10 8/07 s10s_u4wos_12b SPARC
DB: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
Thanks in advance
The database has not been down for at least 2 months:
<font face="Courier New">
SQL> select LAST_START_DATE from DBA_SCHEDULER_JOBS where job_class = 'MY_NOCHE_JOB_CLASS';
LAST_START_DATE
────────────────────────────────────
17-NOV-10 04.21.00.447083 AM -02:00
</font>
Any advice ? -
Scheduled Job: Backup Encryption has not finished completely.
Dear Mr/Ms,
SQL Server 2008 has a scheduled backup-encryption job for 2 databases, but several times it has backed up only 1 of them.
Access is denied due to a password failure [SQLSTATE 42000] (Error 3279) BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
I have enough space for backup file.
Please help me solving this problem completely.
Thanks and best regards,
Phong Eximbank.
Have you tried with PASSWORD='Password123'
instead of MEDIAPASSWORD?
Passwords can be used for either media sets or backup sets:
Media set passwords protect all the data saved to that media. The media set password is set when the media header is written; it cannot be altered. If a password is defined for the media set, the password must be supplied to perform any append or restore operation. You will only be able to use the media for SQL Server backup and restore operations. Specifying a media set password prevents a Microsoft Windows NT® 4.0 or Windows® 2000 backup from being able to share the media.
Backup set passwords protect only a particular backup set. Different backup set passwords can be used for each backup set on the media. A backup set password is set when the backup set is written to the media. If a password is defined for the backup set, the password must be supplied to perform any restore of that backup set.
https://technet.microsoft.com/en-us/library/aa173504%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396 -
Log Info REASON="Job slave process was terminated"
Hi,
I'm using Oracle 10g. I scheduled a job in the Scheduler; the job status is "STOPPED" and the log shows the details REASON="Job slave process was terminated". What can be the reason?
How do I resolve it?
Edited by: user10745179 on Jun 8, 2009 3:28 AM
Was it an external job?
What is the platform and exact version?
Ronald
http://ronr.blogspot.com -
DS run longer with scheduled job as compare to manual run
I have scheduled a job through the Management Console (MC) to run once a day at a certain time. After some time, maybe after 15 days of running, the execution time doubled from 17 mins to 67 mins in a single jump. Since then, the job has consistently taken 67 mins to complete.
The nature of the job is to generate around 400 output flat files from a source DB2 table. At the efficient running time, 1 file took around 2 seconds to generate; now it takes 8 seconds per file. The data volume and nature of the source table didn't change, so that is not the root cause of the increased time.
I have done several investigations and the results as such:
1) I scheduled this job again at MC to run for testing: it took 67 mins to complete. However, if I run the same job manually through MC, it takes an efficient 17 mins.
2) I replicated this job as a copy. When I scheduled the copied job at MC, it took 67 mins; but when I ran it manually through MC, it took 17 mins.
3) I created another test repo and loaded this job into it. Scheduled in the new repo, the job took 67 mins; run manually through MC, it took only 17 mins.
4) Finally, I manually executed the job through the Unix job script command, which is one of the scheduled job entries in the cron file, such as ./DI__4c553b0d_6fe5_4083_8655_11cb0fe230f4_2_r_3_w_n_6_40.sh; the job also took 17 mins to finish.
5) I recreated the repo to make it clean, reloaded the jobs, and recreated the schedule. It still took 67 mins to run the scheduled job.
Therefore the question remains: why does it take longer to run via the scheduler than to run manually?
Please suggest a way to troubleshoot this problem. Thank you.
OS : HPUX 11.31
DS : BusinessObjects Data Services 12.1.1.0
database : DB2 9.1
Yesterday we did another test and indirectly made the problem go away. We changed the generated output flat-file directory from the current directory /fdminst/cmbc/fdm_d/bds/gl to the /fdminst/cmbc/fdm_d/bds/config directory, to see whether it would make any difference. We changed the directory in the Substitution Parameter Configurations window. Surprisingly, the job started to run fast and completed in 15 minutes, not 67 minutes anymore.
Then we pointed the output directory back to the original /fdminst/cmbc/fdm_d/bds/gl, and the job has run fast ever since, completing in 15 minutes. Even when we created an ad hoc schedule to run it, it was still fast.
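For anyone chasing a similar scheduled-vs-manual gap, a generic first diagnostic (just a sketch; the snapshot directory is an arbitrary example) is to capture the runtime environment from both launch contexts and diff them, since cron/scheduler launches typically get a much smaller environment than an interactive shell:

#!/bin/sh
# Snapshot the environment and ulimits of the current launch context.
# Run once from the scheduler and once by hand, then diff the two files.
SNAP_DIR=${SNAP_DIR:-/tmp/job_env_snapshots}   # example location
mkdir -p "$SNAP_DIR"
snap="$SNAP_DIR/env.$(date +%Y%m%d%H%M%S).$$"
{
  echo "== environment =="
  env | sort
  echo "== ulimits =="
  ulimit -a
} > "$snap"
echo "wrote $snap"

Call it from the top of the job script and again from an interactive session; a diff of the two snapshots often points straight at a missing PATH entry, locale setting, or resource limit.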
We are not sure why shifting the directory away and back solved it, or whether this was a BODS problem or an HP-UX environment problem. Nonetheless, the job now runs normally and fast in our tests. -
Where is the scheduled job?
This scheduled job starts at 22:00 every day on the Linux server. it creates snapshot on a remote storage via ssh, and here is what I found in the log about the job on the remote node:
Sun Nov 10 22:00:08 EST [remote_node: rshd_0:debug]: [email protected](linux server ip)_37397:IN:ssh2 shell:SSH INPUT COMMAND is snap create hourly_11-10-2013_22-00
It seems the root crontab is the only cron job under /var/spool/cron on this Linux server. However, I cannot find the script that starts at 22:00 at all, nor any script with such a function. The following is the only line at 22:00 in /var/cron/log:
Nov 10 22:01:01 phoenix crond[24352]: (root) CMD (run-parts /etc/cron.hourly)
However, there are no scripts with such a function in /etc/cron.hourly either.
Please help me figure out where this scheduled job could possibly be located on the Linux server.
Thank you in advance!
I understand all you are saying here.
I have already talked to NetApp support. They are right that the action on the storage is definitely triggered by this Linux server "phoenix". The following message clearly states that:
Sun Nov 10 22:00:25 EST [netappname: rshd_4:debug]: [email protected]_37412:IN:ssh2 shell:SSH INPUT COMMAND is snap create vol1 vol1_nightly_11-10-2013_22-00
159.3.99.146 is the IP of the Linux server "phoenix".
However, they cannot tell me more, since the cause has to be found on the Linux side.
The problem is that I could not locate the script either by searching crontab jobs or any anacron, as you explained.
root is the only user who has a cron job. I checked every single line: not only are there no such scripts, there are also no jobs kicked off at 22:00, the time when the action starts.
I also searched /etc/cron.hourly; there is only one script, and it has nothing to do with that action.
So I don't have a clue what the script could be or where it could possibly live...
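A brute-force sweep over the standard scheduling locations is worth a try here; this is only a sketch (the search string comes from the storage-side log above, and the directory list assumes a typical RHEL layout):

#!/bin/sh
# Search every usual scheduling location for whatever issues the
# "snap create" command seen in the storage-side log.
PATTERN="snap create"
for loc in /var/spool/cron /etc/crontab /etc/cron.d \
           /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly \
           /var/spool/at /var/spool/anacron; do
  [ -e "$loc" ] && grep -rl -- "$PATTERN" "$loc" 2>/dev/null
done
# The cron entry may only call a wrapper, so also search likely script
# homes for the ssh invocation itself:
for loc in /root /usr/local/bin /opt; do
  [ -d "$loc" ] && grep -rl "ssh .*snap" "$loc" 2>/dev/null
done
echo "sweep complete"

Also run crontab -l as every user, and keep in mind that at jobs, or anything launched by a long-running daemon rather than cron, won't show up under /var/spool/cron at all.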
BI Publisher Bursting sending Email even after I delete schedule job
I am facing a weird issue in BI Publisher. We have a BI Publisher report, and we are using a bursting query to burst the report to a destination email. It works fine when I run it manually, and it also worked fine the first time I scheduled it for a particular time. A few days ago the client wanted to change the scheduled time. Since there is no way to modify the scheduled time, I deleted the old scheduled jobs from BI Publisher and also truncated the following tables in the DB where scheduling information is stored:
XMLP_SCHED_JOB
XMLP_SCHED_OUTPUT
XMLP_SCHED_SUBSCRIPTION
I also created a new scheduled job based on the new time. The issue is that clients are getting the emails at the old time as well as the new one. I can see the new scheduling information in the tables above, with no trace of the old schedule, so how is BI Publisher still running the report at the old time? I have researched a lot but still cannot find where the old emails are coming from. Does anyone have an idea how to resolve this?
Did you delete the job schedule using the GUI first? Manually deleting the records from the XMLP_ tables may somehow have corrupted the scheduling setup.
There are other components of the scheduler that are not saved only in the XMLP_ tables; these are the QUARTZ tables.
As of release 11g there is no mechanism to manage the BIP scheduler tables.
regards
Jorge -
Custom Schedule Job Issue - OIM 11g R2
Hi All,
I deployed a custom scheduled task to assign a role to a user and created a scheduled job for it.
The problem is that my job executes twice even though I ran it once, and I get an exception saying the role is already assigned.
Please tell me how to make the job execute only once per run.
Thanks in advance
Praveen...
Sorry for the late reply...
I tried restarting the servers... no luck...
1) Java code (reconstructed below with the braces and imports the forum formatting stripped; verify the import paths against your OIM 11g class path):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import oracle.core.ojdl.logging.ODLLogger;
import oracle.iam.identity.exception.NoSuchUserException;
import oracle.iam.identity.exception.UserDisableException;
import oracle.iam.identity.exception.UserModifyException;
import oracle.iam.identity.exception.UserSearchException;
import oracle.iam.identity.usermgmt.api.UserManager;
import oracle.iam.identity.usermgmt.vo.User;
import oracle.iam.identity.usermgmt.vo.UserManagerResult;
import oracle.iam.platform.Platform;
import oracle.iam.platform.entitymgr.vo.SearchCriteria;
import oracle.iam.platform.kernel.ValidationFailedException;
import oracle.iam.scheduler.vo.TaskSupport;

public class TerminateAbscondUsers extends TaskSupport {

    ODLLogger logger = ODLLogger.getODLLogger("OIMCP.SAPH");
    UserManager usrMgr = Platform.getService(UserManager.class);

    public HashMap getAttributes() {
        return null;
    }

    public void setAttributes() {
    }

    public void execute(HashMap hashMap) {
        logger.info("Entered TerminateAbscondUsers:execute() method");
        Long abscondDays = (Long) hashMap.get("AbscondDays");
        logger.info("Abscond Days = " + abscondDays);
        List<User> resultUserList = getUserList();
        disableUsers(resultUserList, abscondDays);
        logger.info("Left TerminateAbscondUsers:execute() method");
    }

    public List<User> getUserList() {
        logger.info("Entered TerminateAbscondUsers:getUserList() method");
        HashSet<String> reqAttr = new HashSet<String>();
        reqAttr.add("User Login");
        reqAttr.add("Status");
        //reqAttr.add("EventDate");
        reqAttr.add("DateOfAbsconding");
        List<User> usrList = new ArrayList<User>();
        SearchCriteria usrIDSearchCriteria =
            new SearchCriteria("Action", "AB", SearchCriteria.Operator.EQUAL);
        try {
            usrList = usrMgr.search(usrIDSearchCriteria, reqAttr, null);
            logger.info("Absconded userList size = " + usrList.size());
        } catch (UserSearchException use) {
            logger.info("UserSearchException = " + use.getMessage());
        } catch (Exception e) {
            logger.info("Exception TerminateAbscondUsers:getUserList() = "
                + Arrays.toString(e.getStackTrace()));
        }
        logger.info("Absconded usrList = " + usrList.toString());
        logger.info("Left TerminateAbscondUsers:getUserList() method");
        return usrList;
    }

    public void disableUsers(List<User> resultUserList, Long abscondDays) {
        logger.info("Entered TerminateAbscondUsers:disableUsers() method");
        UserManagerResult localUserManagerResult1;
        UserManagerResult localUserManagerResult2;
        try {
            for (int i = 0; i < resultUserList.size(); i++) {
                String userLogin = resultUserList.get(i).getLogin();
                logger.info("User Login = " + userLogin);
                Long userKey = (Long) resultUserList.get(i).getAttribute("usr_key");
                String strUsrKey = userKey.toString();
                logger.info("User key = " + strUsrKey);
                //Date abscondDate = (Date) resultUserList.get(i).getAttribute("EventDate");
                Date abscondDate = (Date) resultUserList.get(i).getAttribute("DateOfAbsconding");
                Date currentDate = new Date();
                Long diffDate = (currentDate.getTime() - abscondDate.getTime()) / (1000 * 60 * 60 * 24);
                if (diffDate > abscondDays) {
                    logger.info("diff date = " + diffDate);
                    User localUser = new User(strUsrKey);
                    localUser.setAttribute("End Date", (Object) currentDate);
                    localUser.setAttribute("Action", "TE");
                    localUserManagerResult1 = usrMgr.modify(localUser);
                    logger.info("Set End Date operation status = " + localUserManagerResult1.getStatus());
                    localUserManagerResult2 = usrMgr.disable(userLogin, true);
                    logger.info("Terminate operation status = " + localUserManagerResult2.getStatus());
                }
            }
        } catch (ValidationFailedException vfe) {
            logger.info("ValidationFailedException = " + vfe.getMessage());
        } catch (UserDisableException ude) {
            logger.info("UserDisableException = " + ude.getMessage());
        } catch (NoSuchUserException nsue) {
            logger.info("NoSuchUserException = " + nsue.getMessage());
        } catch (UserModifyException ume) {
            logger.info("UserModifyException = " + ume.getMessage());
        } catch (Exception e) {
            logger.info("Exception TerminateAbscondUsers:disableUsers() = "
                + Arrays.toString(e.getStackTrace()));
        }
        logger.info("Left TerminateAbscondUsers:disableUsers() method");
    }
}
2) METADATA XML
<scheduledTasks xmlns="http://xmlns.oracle.com/oim/scheduler">
<task>
<name>Terminate Abscond Users</name>
<class>com.hdfclife.oracle.iam.customScheduler.user.TerminateAbscondUsers</class>
<description>Terminate Abscond Users</description>
<retry>5</retry>
<parameters>
<number-param required="true" helpText="No. of days since Absconded">AbscondDays</number-param>
</parameters>
</task>
</scheduledTasks>
Thanks in Advance... -
How to Schedule Jobs to only run during a time window
I have a long running task that needs to schedule jobs to process data.
I only want these scheduled jobs to start during a specific window of time each day, probably 10:00 PM to 6:00 AM.
If the scheduled jobs do not begin during the specified time frame, they must wait until the next day to start running.
Each scheduled job will only be executed once and then auto dropped.
How should I go about creating these scheduled jobs?
Hi Jeff,
I agree that the documentation isn't clear enough about the purpose of windows.
You can indeed use windows for changing the resource plan, but you can also use them for scheduling your jobs.
I did a simple test in real-time to illustrate the latter.
At around 10.30 am today I created a table that will be populated by a job:
CREATE TABLE TEST_WINDOW_TABLE(EVENT_DATE DATE);
Then, I created a window whose start_date is today at 10.40 am :
BEGIN
  dbms_scheduler.create_window(
    window_name     => 'TEST_WINDOW',
    resource_plan   => NULL,
    start_date      => to_date('10/04/2014 10:40:00', 'dd/mm/yyyy hh24:mi:ss'),
    repeat_interval => NULL,
    duration        => interval '5' minute);
END;
/
You can see that this window doesn't have a resource plan, and its repeat interval is NULL (so it will be opened only once).
The window will stay open for 5 minutes.
Finally, I created a one-off job whose schedule is the previously created window:
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name      => 'TEST_WINDOW_JOB',
    job_type      => 'PLSQL_BLOCK',
    job_action    => 'BEGIN insert into test_window_table values (sysdate); COMMIT; END;',
    schedule_name => 'SYS.TEST_WINDOW',
    enabled       => true,
    auto_drop     => true);
END;
/
Checking user_scheduler_job_log before 10.40 would return no rows, which means the job hadn't started yet, since the window was not open.
Now, from 10.40, it shows one entry:
SQL> select log_date, status from user_scheduler_job_log where job_name = 'TEST_WINDOW_JOB';
LOG_DATE STATUS
10/04/14 10:40:02,106000 +02:00 SUCCEEDED
The TEST_WINDOW_TABLE has also got the row:
SQL> select * from TEST_WINDOW_TABLE;
EVENT_DATE
10/04/2014 10:40:02
Voilà.
In your case, since you want to run the jobs daily between 10 pm and 6 am (duration of 8 hours), the window would look like this:
BEGIN
  dbms_scheduler.create_window(
    window_name     => 'YOUR_WINDOW',
    resource_plan   => NULL,
    repeat_interval => 'freq=daily;byhour=22;byminute=0;bysecond=0',
    duration        => interval '8' hour);
END;
/
For your jobs, you may need to specify an end_date if you want to make sure the job gets dropped if it couldn't run in its window. -
Sqlplus script to disable alerts and scheduled jobs
Hi
We have a painful task after every clone: manually disabling Oracle alerts (all 1000 of them, from the Alert Manager responsibility) and also disabling scheduled concurrent jobs (again, all 1000 of them) by manually checking all pending jobs.
It's OK to have them in prod, but not in test.
Is there a SQL script that can disable ALL alerts, and another that stops all scheduled request jobs on the concurrent requests screen?
Thanks
aj
A similar query is used as part of the document listed above.
There is a need to change two sets of concurrent requests to prevent execution on a cloned instance.
1) Terminate 'Running' requests
2) Set Pending jobs to 'On Hold'
1) Set Terminating or Running requests to Completed/Terminated:
SQL> UPDATE fnd_concurrent_requests
     SET phase_code = 'C', status_code = 'X'
     WHERE status_code = 'T'
     OR phase_code = 'R';
2) Place Pending (Normal/Standby) requests On Hold:
SQL> UPDATE fnd_concurrent_requests
     SET hold_flag = 'Y'
     WHERE phase_code = 'P'
     AND status_code IN ('Q','I');