Frequent failures of BI_REQD* jobs

Hi BI Experts,
In our BW production system, BI_REQD* jobs have recently been failing frequently for one or two cubes; 20-30 jobs fail every day (those two cubes are loaded via many InfoPackages).
These jobs complete their work of deleting overlapping requests, but then fail without raising the subsequent events.
The job log looks like this:
Job started
Step 001 started (program RSDELREQ1, variant &00, user ID ALEREMOTE)
Log: Program RSDELREQ1; Request REQU_; Status ; Action Start*
Delete is running: Data target Z, from 0 to 0*
FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSDELPART1; row 0*
Request '1; DTA 'Z; action 'D'; with dialog 'X'
Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
SQL: 03/17/2008 03:39:06 ALEREMOTE
ALTER TABLE "/BIC/FZ_PS_C04" DROP PARTITION
"/BIC/FZ_PS_C040000333725"
SQL-END: 03/17/2008 03:39:12 00:00:06
Delete was successful: Data target Z, from 1,* to 1,**
InfoCube request ID 00 is already scheduled: no aggregation possible*
ABAP/4 processor: MESSAGE_TYPE_X
Job cancelled
Please advise.

Hi,
I suspect that you have defined the ODS also as an export DataSource and have deleted the related PSA (i.e. the change log of the ODS), so it is not possible to delete a single request.
If the change log is empty, you cannot delete a single request.
If you need to delete some data, you must use other approaches, such as selective deletion or uploading data with a negative image, but these methods do not guarantee that you delete exactly the same data as the request.
If that is not the case, I think the problem is in the programs; check for OSS Notes.
Which BW version are you on? (3.x?)
Thanks
CK

Similar Messages

  • Frequent failures and errors: please help

    Can anyone share some frequent failures and errors, with solutions if possible?

    Prasanth
    Check these weblogs. Hope they help.
    Thanks
    Sat
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Re: dataloading problems from r/3 ?

  • IDoc failure in background job

    Hi,
    I have developed a custom program that sends outbound IDocs to another SAP system using function module MASTER_IDOC_DISTRIBUTE. When I run this program in background mode, the IDocs fail with the reason 'Entry in outbound table not found'.
    But when I run the same program in the foreground, the IDocs are posted properly. The partner profile and port are set up properly.
    Could anyone give me the reason behind the IDoc failures in the background job?
    Many Thanks.

    Hi,
    The message number for 'Entry in outbound table not found' is E0 400. This message is used in the programs below.
    The programs are saying that there is no entry in table EDP13 (Partner Profile: Outbound (Technical Parameters)). Debug the background job in SM37 by entering JDBG in the command field; it will go into debugging mode. Then set a breakpoint on "MESSAGE".
    Thanks,
    Naresh Mochi

  • How is the BI_REQD* job triggered?

    Hi all,
    Can anyone help with this?
    In our BW system, a job whose name begins with BIREQU* is triggered after each successful master data load. After this job, another job beginning with BI_REQD* (which runs program RSDELREQ1) is triggered and the whole data set is deleted. How can I stop this job from being triggered?
    I am an ABAP developer and this is an urgent requirement; can anyone help me quickly?
    Regards
    Durga K

    Hi,
    BI_REQ* is normally triggered when a load is triggered. It handles source-system extraction (for flat file, data mart, and BAPI sources, the job runs in SAP BW). In your case it may be because you have set deletion of data from the targets at the InfoPackage level.
    Check RSA1 > InfoPackage > Data Targets tab > 7th column (Delete entire contents of data targets).
    This may not be the exact issue; just try it out.
    Also, if this is part of a chain, check whether there is a process that deletes data after this load.
    Thanks,
    JituK

  • Failure of background Job

    Hi Friends,
    I have created a background job in R/3 to load delta records from the outbound queue to the delta queue.
    The frequency is once every 24 hours, running under my user ID.
    This load has been fine for a few months; I never faced any problem with this background job.
    Suddenly I observed that the job failed yesterday, with no further information in the job log,
    and today the job is OK.
    The point is that yesterday my user ID was locked for some other reason, and today my ID is OK.
    So I suspect the background job failed because my user ID was locked, but I am not sure.
    Could anyone please confirm whether this is the reason?
    Thanks in Advance.
    Tony

    Hi,
    That is correct. If the user ID gets locked or invalidated, the job fails. In cases like these, the best solution is to create a generic (system) ID and use it for job scheduling.
    Cheers,
    Kedar

  • Frequent failures to burn DVDs

    I have a new MacBook Pro 13", June 2009 (unibody). Almost every other DVD+R DL I burn from the Finder (burning files to a DVD) results in a failure (a "coaster"). This is my first Mac; I've only ever used Windows PCs before this, and I can't remember the last time I actually had a CD or DVD fail on any PC. It hasn't happened to me in many years, but I get failures a lot on the Mac.
    The question is, is this normal? Or is there something wrong with my drive?
    I could certainly go in to visit a Genius and have them check it out, but before I do I wanted to check in here on expectations.
    It's a brand-new laptop, so I know the drive isn't dirty.
    Thanks

    I've had these discs for a very long time, so they aren't going back. I'll just bring them to work and/or use them with my external USB burner. They're still fine for my other computers.
    Odd, I thought Memorex was supposed to be decent. Now I know: you think you're avoiding the garbage by going with a good name brand, only to get garbage anyway. The whole point is to avoid media problems in the first place.
    Well, now I know it's likely the media. For whatever reason, the MacBook burner just doesn't like it.
    Awesome, thanks for all of the help. I'll definitely go get some Verbatim (and stick with them). Good advice on getting something I can return if it doesn't work.
    Thanks again for all of the responses.

  • Reasons for frequent failure of Ips

    Hi ,
    Can you help me find out why InfoPackages fail frequently?
    How can the failures be minimized, and what are the possible workarounds?
    Regards
    Snigdha

    Hi,
    The InfoPackage may fail frequently for the following reasons:
    1. Improper data format in the source.
    2. Improper specification on the Extraction tab (CSV vs. ASCII).
    3. Improper use of the update method.
    4. Improper use of a conversion routine.
    Solutions:
    1. Use the Data Selection tab to check the field format to be entered, using a lookup.
    2. Use the proper file format specification on the Extraction tab (comma-separated for CSV).
    3. Use ST22 or SM37 for background monitoring.
    4. Load only up to the PSA if you feel the format is not correct, and then edit it manually.
    Please assign points if helpful.

  • Notify failure for a job crawling

    Hi,
    I would like to know if there's a way to easily be notified when the web crawling (data synchronization) fails in Ultra Search. I know it launches a dbms_job, but I don't seem to be able to see it in dba_jobs when it fails. OEM doesn't seem to detect the failure either.
    Otherwise, I know I can email the log file, but I'm sure there is an easier way to do it.
    Thank you

    You would be better off posting this question on the UltraSearch forum.
    Ultra Search

  • R/3 Update Modes

    Hi All,
    We know that there are 3 update modes Direct Delta, Queued Delta and V3 update.
    Can somebody please tell me the differences between these three, and in which scenarios we would prefer each of them?
    Also, do we use these methods only in Logistics? or in other modules also?
    Thanks in Advance,
    Regards,
    BIJESH

    Hi,
    Direct Delta: LUWs are posted directly to the delta queue (RSA7), and we extract them from the delta queue into SAP BW by running delta loads. Direct delta degrades OLTP system performance, because when LUWs are posted directly to the delta queue, the application has to wait until all the enhancement code has executed.
    Queued Delta: LUWs are posted to the extraction queue (LBWQ); by scheduling the V3 job we move the documents from the extraction queue (LBWQ) to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. Queued delta is recommended by SAP: it maintains an extractor log, which helps us handle any LUWs that are missed.
    Serialized V3 Update: LUWs are posted to the update queue (SM13); the scheduled V3 job moves the documents from the update queue to the delta queue (RSA7) in a serialized fashion, and we extract the LUWs from the delta queue into SAP BW by running delta loads. Since the LUWs are moved serially from the update queue to the delta queue, an error document blocks the subsequent documents, and because the documents are sorted by creation time there is every possibility of frequent failures in the V3 job and of missing delta records. It also degrades OLTP system performance, because it forms multiple segments whenever the logon language changes.
    Unserialized V3 Update: LUWs are posted to the update queue (SM13); the scheduled V3 job moves the documents from the update queue to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. Since the LUWs are not moved serially, an error document does not block the subsequent documents, and there is no sorting by creation time. It improves OLTP system performance, because it forms one segment per language.
    Use LO job control (transaction LBWE) to schedule the V3 jobs.
    Regards
    TG

  • How to find out batch job failure and taking action:

    Normally we monitor batch jobs through transaction SM37. In SM37 we enter the batch job name, date, and time as input. As a first step we check the job log for the reason for the failure, or check the job's spool request; both help in analyzing the error.
    From my experience, a batch job may fail for the reasons below.
    1. Data issues: e.g. an invalid character in the quantity (MEINS) field. We correct the corresponding document with the right value, or we rerun (or ask the team to rerun) the batch job after excluding the problematic documents from the job variant, so that it can process the other documents.
    2. Configuration issues: e.g. material XXXX is not extended to a plant. We contact the material master team or the business to correct the data, or raise a call with the support team to correct it. Once the data has been corrected, we ask the team to rerun the batch job.
    3. Performance issues: the volume of data being processed by the batch job, or network problems. We normally encounter these during month-end processing, when a large number of accounting documents are posted by the business; the job can fail because there is not enough memory to complete the program, or because SELECT queries in the program time out due to the volume of records.
    4. Network issues: temporary connectivity problems with partner systems. An outage in a partner system such as APO or GTS causes the batch job to fail, because the job cannot connect to the other system to get the information it needs. We normally check the RFC destination status with a custom program to see whether connectivity is up, inform the partner system's team, and, once the partner system is back online, ask the team to restart or manually submit the batch job.
    Sometimes we create a manual job via transaction SM36.

    I'm not sure what the question is among all that, but if you want to check on jobs that are viewable via SM37 and started via SM36, the tables are TBTCP (Background Job Step Overview) and TBTCO (Job Status Overview).
    You can use the following FM to get job details:
    GET_JOB_RUNTIME_INFO - Reading Background Job Runtime Data
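    As a rough illustration of the table-based approach, a minimal ABAP sketch that lists recently cancelled jobs from TBTCO might look like the report below. The field names (JOBNAME, JOBCOUNT, STATUS, STRTDATE, STRTTIME) come from the standard table, where status 'A' denotes a cancelled/aborted job; the report itself is a hypothetical example, not a standard SAP program.

    ```abap
    REPORT zlist_cancelled_jobs.

    " List background jobs cancelled (status 'A') since yesterday.
    " TBTCO is the standard job status overview table read by SM37.
    DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
          ls_job  TYPE tbtco,
          lv_from TYPE sy-datum.

    lv_from = sy-datum - 1.

    SELECT * FROM tbtco INTO TABLE lt_jobs
      WHERE status   = 'A'        " 'A' = cancelled/aborted
        AND strtdate >= lv_from.  " started yesterday or later

    LOOP AT lt_jobs INTO ls_job.
      WRITE: / ls_job-jobname, ls_job-jobcount,
               ls_job-strtdate, ls_job-strttime.
    ENDLOOP.
    ```

    Such a report could be run ad hoc, or itself be scheduled periodically so that its output (or an alert built on it) replaces manual SM37 checks.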

  • How to check job failure reasons in Prime Infrastructure

    We have PI version 1.3. I applied a CLI template and saw that the job's last run status is Failure in the Jobs Dashboard, but I cannot see any detailed information about why it failed. Is there any way to see the job failure reasons? Thanks.

    Thanks for the tip. Actually, I couldn't see that small circle in the Jobs Dashboard. I finally found out that I need to click on the job, then click on History; the small circle is there under the History.

  • Notification of job failure from GRC 5.2

    Hi everybody,
    Is there any way to have the system notify me when a batch job fails in GRC 5.2? I've got alerts configured, and we have a memory leak causing our Java instance to reboot randomly and occasionally kill background jobs; SAP is working on it with our Basis team. I would prefer not to have to check the jobs manually, but I didn't see anything in the job setup that would send a notification if a job fails.
    Thanks!
    Krysta

    Krysta,
    No such functionality is available in GRC AC.
    I would suggest scheduling a larger number of small jobs, so that the impact of any one job's failure is minimal:
    say, a separate job for user sync and another one for role sync,
    and similarly, where possible, a separate job per system (always select the system from the search help; never enter it manually in RAR).
    Check with SAP GRC which VIRSA_CC_??? table contains the status of all RAR jobs (and similarly for the other products),
    so that your developers can generate alerts based on it.
    Hope it helps.
    regards,
    Surpreet

  • Background job monitoring - alert from second failure

    Hi All,
    There is a need to configure monitoring for background jobs.
    I have enabled background job monitoring via SE16, and the background job is visible in RZ20.
    The problem is that we want a critical alert from the second failure of a certain job onwards, not from the first failure.
    It seems that I need to create a new method and assign it to the background job in RZ20?
    Is there documentation on how to create your own method, or is there another way to implement this monitor?
    Best Regards,
    Jani Mäki

    Hi,
    Do you want to monitor the second failure within a day?
    Do you know the schedule of the second job run?
    If yes to both questions, you can try monitoring it using BPM in Solution Manager.
    Feel free to revert back.
    -=-Ragu

  • Frequent load failures

    What are the frequent failures of InfoPackages, and what are the reasons for those failures?
    Please help me out; I need suggestions from everyone.

    Madhusudhan
    There are many reasons for InfoPackage failures, depending on the individual issue. Some of them are:
    User ALEREMOTE locked (for master data loads)
    DataSource has to be replicated
    Activation failures
    Errors in data selection, etc.
    Thanks
    Sat

  • Job cancelled after system exception ERROR_MESSAGE in DB13

    Hello All,
    When I opened transaction DB13, I saw that the job "Mark tables requiring statistics update" had been cancelled.
    JOB LOG:
    12.02.2011  22:00:16  Job started
    12.02.2011  22:00:16  Step 001 started (program RSDBAJOB, variant &0000000000085, user ID 80000415)
    12.02.2011  22:00:18  Job finished
    12.02.2011  22:00:18  Job started
    12.02.2011  22:00:18  Step 001 started (program RSADAUP2, variant &0000000000081, user ID 80000415)
    12.02.2011  22:01:26  Error when performing the action
    12.02.2011  22:01:26  Job cancelled after system exception ERROR_MESSAGE
    When I checked the background job in SM37, I found the same error in the job log, with status Cancelled.
    Job log overview for job:    DBA!PREPUPDSTAT_____@220000/6007 / 22001700
    12.02.2011 22:00:18 Job started
    12.02.2011 22:00:18 Step 001 started (program RSADAUP2, variant &0000000000081, user ID 80000415)
    12.02.2011 22:01:26 Error when performing the action
    12.02.2011 22:01:26 Job cancelled after system exception ERROR_MESSAGE
    I couldn't find any logs in SM21 for that time, and no dumps in ST22 either.
    Possible reason for this error:
    I had scheduled the job "Check database structure (only tables)" at a different time and deleted the earlier job, which had been scheduled during business hours and was causing a performance problem.
    So, to avoid the performance issue, I scheduled this job at midnight by cancelling the old one that ran during business hours.
    From the next day on, I have seen this error in DB13.
    All the other jobs run fine; the only one getting cancelled is "Mark tables requiring statistics update".
    Could anyone tell me what I should do to get rid of this error?
    Can I schedule "Mark tables requiring statistics update" again, after deleting the old one?
    Thanks.
    Regards.
    Mudassir Imtiaz

    Hello Adrian,
    Thanks for your response.
    Every alternate day we used to have a performance issue at 19:00.
    When I checked what was causing the problem, I discovered that a job, "Check Database Structure (tables only)", was scheduled at that time, and it is documented that this check may cause performance issues.
    I then changed the time of "Check Database Structure (tables only)" to 03:00.
    The next day, when I checked DB13, I found that one of the jobs had failed:
    "Mark Tables Requiring Statistics Update".
    I checked the log, which I posted earlier, with the error "Job cancelled after system exception ERROR_MESSAGE".
    I posted the error here, then tried deleting the scheduled job "Mark Tables Requiring Statistics Update" and re-scheduling it at the same time and interval.
    After that, it started working fine.
    So I am just curious to know the cause of that job's failure.
    Thanks.
    Regards,
    Mudassir.Imtiaz
    P.S. There is one more thing I would like to ask, which is not related to the above issue; sorry for raising it in this thread.
    I found a few bottlenecks in ST04 with medium and high priority:
    Medium: selects and fetches, selectivity 0.53%: 122,569 selects and fetches, 413,376,906 rows read, 2,194,738 rows qualified.
    High: 108,771 primary key range accesses, selectivity 0.19%: 402,696,322 rows read, 763,935 rows qualified.
    There are a lot of these.
    I would really appreciate it if you could tell me the cause of these bottlenecks and how to resolve them.
    Thanks a lot.

Maybe you are looking for

  • How can i only allow the activation of the JDialog on top?

    Hi to all, i have a problem with a mask swing Well, my problem is this: In a window, when i click on save button, it's open a customizing JDialog. But i would like how i can deny a click out of this JDialog, that is on top. In other words, in this si

  • Problem with Internal table

    HI, I need to DMBE2 value as with respective of Month, Year and Inception date. I need the output to be like HKONT  BLART PRCTR DMBE2(Month) DMBE2 (Year) DMBE3 (Year) Below is my code DATA : BEGIN OF gt_lkorr OCCURS 0,           hkont LIKE bsis-hkont

  • PSE 8.0 Backup problems

    Hi! I am trying to make backups of my catalogues to a separate hard drive. Two out of three catalogues goes ok, but the last one makes PSE to hang after 64%, no matter how many times I try. I have repaired the catalog, runned defrag + diskcheck on th

  • What can I do if my ipod was stolen?

    My ipod was stolen about a week ago and i dont have icloud compelety set up on it.

  • How to select the media drives in Imovie 08 to store the media.

    I have a Mac Pro with 4 hard-drives inside and i dont want iMovie 08 to use the boot drive to store the media files as it does right now. When i use iMovie 08 this create iMovie folder inside the Movie folder on the boot drive but i want this program