" Send Masterdata Changes to APO" Job failure

Hi All,
We are facing an issue where changes to the Material Description in R/3 were made and then reverted again.
When these changes were sent to APO through the background job, the job failed due to a stuck queue, showing the error "The ABAP/4 Open SQL array insert results in duplicate database records." We deleted the queue and reran the job.
This is happening every day.
Can you tell me what we can do to avoid these queue failures?
Regards,
Denish

Denish
The same material should not be part of more than one active material integration model; I hope you have just one active integration model for materials.
Also, if the total number of materials in the active integration model is small, you can try deactivating and reactivating the model. It is not a good idea to do that if the IM contains a huge volume of materials, though. Try this in the test system first, test thoroughly, and only then do it in the production system.
Thanks
Saradha

Similar Messages

  • HP ePrint Home&Biz app not working. Print job failure:busy message appears on tablet

    I have an HP Laserjet P1102W and am trying to print from a Viewsonic tablet (android OS) using the HP ePrint Home&Biz app.  I can send an email to the printer via the tablet and it prints the document with no problems.
    I apologize in advance for the long message... 
    If I try to hit the PRINT key on the lower bar (which says 'HP Laserjet Professional P1102W (NPI7...host name)'), all that happens is that the blue and green LEDs come on, flash together three times, and then stay on steady. Just before they come on steady at the end, the "print job failure: busy" message flashes on my tablet.
    I assume the sequence of LED flashes means the HP printer is receiving a print command three times and cannot perform the task because it thinks it is busy.
    After searching for this problem on the net, it would appear I am not the only person with this issue. And yes, I have tried powering down my modem, printer, PC, and tablet and restarting all of the above. Not that the PC has anything to do with it, but I am desperate.
    Does anyone have a viable working solution to this problem?

    Hello,
    Thanks for the post. With this one, there is a firmware update available for the printer, and I've included links below with some excellent steps to check regarding this issue. Good luck!
    http://h10025.www1.hp.com/ewfrf/wc/softwareCategory?os=219&lc=en&cc=us&dlc=en&sw_lang=&product=41103...
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c02933944&cc=us&dlc=en&lc=en&product=4110396&tmp...

  • How to find out batch job failures and take action

    Normally we monitor batch jobs through transaction SM37. In SM37 we enter the batch job name, date, and time as input. As a first step we check the job log for the reason for the failure, or check the spool request of the batch job; both help in analyzing the error.
    From my experience, a batch job may fail for the reasons below.
    1. Data issues: e.g. an invalid character in the quantity (MEINS) field. We correct the corresponding document with the right value, or rerun the job (or request the team to rerun it) excluding the problematic documents from the job variant, so that it can process the other documents.
    2. Configuration issues: e.g. "Material XXXX is not extended for plant". We contact the material master team or the business to correct the data, or raise a call with the support team to correct it. Once the data has been corrected, we request the team to rerun the batch job.
    3. Performance issues: the volume of data processed by the job, or network problems. We normally encounter these during month-end processing, when a large number of accounting transactions and documents are being posted; the job may fail because there is not enough memory to complete the program, or select queries time out because of the volume of records.
    4. Network issues: temporary connectivity problems with partner systems. An outage in a partner system such as APO or GTS causes the batch job to fail, because it cannot connect to the other system to get the information needed for further steps. We normally check the RFC destination status with a custom program to see whether connectivity between the systems is working, then inform the partner system team; once the partner system comes back online, we ask the team to restart or manually submit the batch job.
    Sometimes we also create a manual job with transaction SM36.

    I'm not sure what the question is among all that, but if you want to check on jobs that are viewable via SM37 and started via SM36, the relevant tables are TBTCP (Background Job Step Overview) and TBTCO (Job Status Overview).
    You can use the following function module to get job details:
    GET_JOB_RUNTIME_INFO - Reading Background Job Runtime Data
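    The table lookup described above can be sketched in plain Python. This is only an illustration of the filtering logic: the field names (JOBNAME, JOBCOUNT, STATUS) mirror the standard TBTCO table, the status codes shown are the usual SAP ones, and the sample rows are invented.

```python
# Sketch: filtering failed background jobs from TBTCO-style rows.
# Field names (JOBNAME, JOBCOUNT, STATUS) follow the standard SAP job
# status table; common status codes are F = finished, A = aborted,
# R = running. The sample rows below are invented for illustration.

SAMPLE_TBTCO = [
    {"JOBNAME": "Z_MASTERDATA_TO_APO", "JOBCOUNT": "10450300", "STATUS": "A"},
    {"JOBNAME": "Z_MASTERDATA_TO_APO", "JOBCOUNT": "10450301", "STATUS": "F"},
    {"JOBNAME": "Z_BILLING_RUN",       "JOBCOUNT": "10450400", "STATUS": "R"},
]

def aborted_jobs(rows):
    """Return (JOBNAME, JOBCOUNT) pairs for jobs that ended in status A (aborted)."""
    return [(r["JOBNAME"], r["JOBCOUNT"]) for r in rows if r["STATUS"] == "A"]

print(aborted_jobs(SAMPLE_TBTCO))
```

    In a real system you would read these rows via SM37, a query on TBTCO, or the function module mentioned above, rather than from a hard-coded list.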

  • Notification of job failure from GRC 5.2

    Hi everybody,
    Is there any way to have the system notify me when a batch job fails in GRC 5.2? I've got alerts configured, and we have a memory leak causing our Java instance to randomly reboot and occasionally kill background jobs; SAP is working on it with our Basis team. I would prefer not to have to check the jobs manually, but I didn't see anything in the job setup that would send a notification if a job fails.
    Thanks!
    Krysta

    Krysta,
    There is no such functionality available in GRC AC.
    I would suggest scheduling a larger number of small jobs, so that the impact of any single job failure is minimal:
    say, a separate job for user sync, and another one for role sync.
    Similarly, where possible, use a separate job per system (always select the system from the search help; never enter it manually in RAR).
    Check with SAP GRC which VIRSA_CC_??? table contains the status of all RAR jobs (and similarly for the other products),
    so that your developers can generate alerts based on that.
    Hope it helps.
    regards,
    Surpreet

  • PPM Integration Model Activation did not send delta change data

    Hi all,
    I have an integration model of PPMs, which I activated for the initial transfer.
    When I create a new production version, I create a new version of the PPM integration model on top of the previous active one.
    When I activated the new version of the model, the system tried to send all the data.
    I assumed the system would only send the changed data, because the previous model is active,
    but the system sent all PPMs, and that takes too long.
    Could you please explain why the system reacts like that, and what we must do?
    Mehmet

    Hi
    In order to check what is occurring, look at table CIF_PPM_CHANGED in your ERP system. It will tell you the material, plant, and production version of your PPMs. It also has a field CHANGED, which tells you whether any changes have taken place, and a field PROCESS, which is set to "X" once the change has been passed over to APO.
    Check the values in your table to see whether PROCESS has been set. The system should only send the delta records, i.e. those where CHANGED has a value and PROCESS = " ".
    Also check that people are not changing the bills of material, routings, etc., as this will also resend the PPMs.
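    The delta rule Ian describes (send only rows where CHANGED is set and PROCESS is still blank) can be sketched in Python. The field names mirror CIF_PPM_CHANGED as described above; the sample rows themselves are invented for illustration.

```python
# Sketch of the CIF delta selection: a PPM row is due for transfer to APO
# only when CHANGED has a value and PROCESS is still blank (not yet sent).
# Field names mirror the CIF_PPM_CHANGED table; the rows are made up.

SAMPLE_ROWS = [
    {"MATNR": "MAT-001", "WERKS": "1000", "VERID": "0001", "CHANGED": "X", "PROCESS": " "},
    {"MATNR": "MAT-002", "WERKS": "1000", "VERID": "0001", "CHANGED": "X", "PROCESS": "X"},
    {"MATNR": "MAT-003", "WERKS": "2000", "VERID": "0002", "CHANGED": " ", "PROCESS": " "},
]

def pending_delta(rows):
    """Rows that still need to be sent: changed, but not yet processed."""
    return [r for r in rows if r["CHANGED"].strip() and not r["PROCESS"].strip()]

print([r["MATNR"] for r in pending_delta(SAMPLE_ROWS)])  # only MAT-001 qualifies
```

    If activating a model resends every PPM instead of just this pending set, that matches the full-transfer behavior described in the question.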
    Regards
    Ian

  • XML publisher job failure

    I got this message:
    ACTION REQUIRED: XML Publisher Job Failure in qait2 Instance for Request 55086604.
    What should I do in this type of case?

    Please post details of your OS, database, and EBS versions. Did this ever work before? If so, what changes have been made?
    Can you post the contents of the xdo.cfg file ?
    Document Processor Errors With "oracle.xml.parser.v2.XMLParseException: '--' is not allowed in comme          (Doc ID 388388.1)
    HTH
    Srini

  • Regarding production job failure

    Hi Frnds,
    There is a production job failure; when I check the logs I find the following error:
    Restructuring of Database [Prepay] Failed(Error(1007045))
    Please let me know if you have any ideas.
    Thanks,
    Ram

    Hi Glen,
    I changed these factors to improve data loading time, and after these changes the job started failing. I tried changing the caches back to their original values, but the job still fails. Here is the detailed log; please have a look and let me know where this is failing.
    I am using an ASO cube, and I am building and loading the cube through Essbase Integration Services.
    Here are the detailed logs:
    Received Command Get Database State
    Wed Jun 24 08:45:45 2009Local/Prepay///Info(1013210)
    User thomas.ryan set active on database Prepay
    Wed Jun 24 08:45:45 2009Local/Prepay/Prepay/thomas.ryan/Info(1013091)
    Received Command AsoAggregateClear from user thomas.ryan
    Wed Jun 24 08:45:45 2009Local/Prepay/Prepay/thomas.ryan/Error(1270028)
    Cannot proceed: the cube has no data
    Wed Jun 24 08:45:45 2009Local/Prepay///Info(1013214)
    Clear Active on User thomas.ryan Instance [1]
    We have designed the process so that it builds the dimensions first, then loads the data, and then the default aggregation takes place.
    The changes I made are the following:
    1) Changed the application pending cache size limit from 32 MB to 64 MB.
    2) Changed the database data retrieval buffers (buffer size and sort buffer size) from 10 KB to 512 KB.
    My system configuration details:
    OS: Windows 2003 Server
    RAM: 4 GB
    What would be the right parameters to proceed with, taking all these points into consideration?
    Please let me know if you have faced a similar kind of issue or have any ideas regarding it.
    Thanks,
    Ram

  • Sending Ciscowrks device credential Verification Job alters the device ARP table.

    Hi:
    I sent a device credential verification job to two different devices, a Cisco 3750 and a blade switch WS-CBS3120. I configured SNMP write access on the switches, and tried both SNMP v1 and SNMP v3.
    The job is sent from the CiscoWorks server.
    Once the job has completed, I do a "show ip arp" on the device and find a new entry with the IP address of the CiscoWorks server and the MAC address of the L2 next hop.
    We would not have noticed this behavior had it not been that, in the case where the switch's next hop is an HSRP VLAN on a Nexus, the ARP entry entered into the switch is incorrect, and from then on the switch loses its connection to CiscoWorks.
    The MAC address entered by CiscoWorks, in the Nexus case, is a static MAC defined on the Nexus for the VLAN in question, but it is NOT the HSRP default gateway MAC address. Therefore we lose the connection between the switch and CiscoWorks, and one has to manually clear the ARP table in order to reach CiscoWorks again.
    Questions:
    1. Why does CiscoWorks insist on changing the ARP table?
    2. Is this ARP entry aged out, or is it permanent, as an ARP entry entered through the CLI would be?
    3. In the case of the Nexus connection, this ARP entry does not allow CiscoWorks and the device to communicate. This is not productive!
    Has anyone come across this situation? Any known fixes or workarounds? I was not able to find a word about this on Cisco's site.
    Our Ciscoworks is at the following levels:
    CiscoWorks Common Services  3.3.0
    LMS Portal                  1.2.0
    CiscoWorks Assistant        1.2.0
    RME                         4.3.1
    Device Fault Manager        3.2.0
    IPM                         4.2.1
    CiscoView                   6.1.9
    Campus Manager              5.2.1
    thanks   for any help
    Mickey

    CiscoWorks does not change the ARP table, at least not overtly.  A credential verification job will do the following things depending on what protocols are selected to test:
    SNMP RO : Fetches sysLocation.0
    SNMP RW : Sets sysLocation.0 to the value currently stored in sysLocation.0
    Telnet : Logs in using DCR username and password
    SSH : Logs in using DCR username and password
    Enable : Enters enable mode and verifies privilege level 15
    If one of these things is causing the ARP table to change, then there is something fishy in the device or network configuration. I've never heard of such behavior relating to CiscoWorks before.
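    The SNMP RW check listed above is idempotent by design: it writes back exactly the value it just read, so a successful SET proves write access without changing device state. A minimal sketch of that logic against a mock device follows; real code would use an SNMP library, and the dict here merely stands in for the agent's MIB.

```python
# Sketch of the read-then-write-back credential check: GET sysLocation.0,
# then SET it to the same value. A successful write proves RW access
# without altering device state. The dict stands in for a real SNMP agent.

SYS_LOCATION_OID = "1.3.6.1.2.1.1.6.0"  # sysLocation.0 in the SNMP system group

def verify_rw(device_mib):
    """Return True if we can write sysLocation.0 back to its current value."""
    current = device_mib[SYS_LOCATION_OID]   # corresponds to an SNMP GET
    device_mib[SYS_LOCATION_OID] = current   # corresponds to an SNMP SET
    return device_mib[SYS_LOCATION_OID] == current

mock_device = {SYS_LOCATION_OID: "rack 12, row B"}
print(verify_rw(mock_device))  # the stored value is unchanged afterwards
```

    Since the check only ever rewrites an existing value, it should not touch the ARP table; any ARP change would have to come from the device's handling of the management traffic itself.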

  • Utility data collection job Failure on SQL server 2008

    Hi,
    I am facing a data collection job failure (Utility Data Collection) on a SQL Server 2008 server; below is the error message:
    <service Name>. The step did not generate any output.  Process Exit Code 5.  The step failed.
    The job name is collection_set_5_noncached_collect_and_upload. From what I found searching, it appears to be a permission issue, but where exactly are the access problems coming from? The job is running under a proxy account. Thanks in advance.

    Hi Srinivas,
    Based on your description, you encounter the error message after configuring data collection in SQL Server 2008. For further analysis, could you please help collect detailed log information? You can check the job history to find the error log around the time of the issue, and also check the Data Collector logs by right-clicking on Data Collection in the Management folder and selecting View Logs.
    In addition, exit code 5 is normally an "Access is denied" code. Please make sure that the proxy account has admin permissions on your system, and ensure that the SQL Server service account has rights to access the cache folder.
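    The "exit code 5 means access denied" mapping comes from the standard Windows system error codes, which the job step surfaces as its process exit code. A small Python sketch of that lookup (the table below lists only a few common codes for illustration):

```python
# A few standard Windows system error codes that commonly surface as
# process exit codes of failed job steps; code 5 is ERROR_ACCESS_DENIED,
# which is why a proxy account lacking permissions shows up as
# "Process Exit Code 5". Only a handful of codes are listed here.

WINDOWS_ERRORS = {
    2: "ERROR_FILE_NOT_FOUND",
    3: "ERROR_PATH_NOT_FOUND",
    5: "ERROR_ACCESS_DENIED",
}

def explain_exit_code(code):
    """Map a process exit code to its Windows system error name, if known."""
    return WINDOWS_ERRORS.get(code, f"unknown exit code {code}")

print(explain_exit_code(5))  # ERROR_ACCESS_DENIED
```

    This is why the troubleshooting above focuses on permissions for the proxy account and the service account rather than on the collection set itself.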
    Thanks,
    Lydia Zhang

  • Send Program Change bug in Logic 7.2.2

    I recently got a Mac Pro 3.0, and installed Logic 7.2.2. In general things are
    going okay, but I recently found a bug and I wonder if anyone else has run
    into it. I have a fairly extensive studio setup (6 Gigasamplers, some hardware
    samplers, quite a few synths, etc.) and have been working on an older G5
    with few problems, still running 7.2.1. When I open up a sequence on the Mac Pro with Logic
    7.2.2, it checks all the Send Program Change boxes on every midi channel of
    every instrument in the environment. This plays havoc with my studio setup,
    since I don't often use Logic to send program changes. I have since built a
    new environment in Logic 7.2.2 with all the boxes unchecked, and I import
    that to fix the problem, but I have many sequences on the old G5 I will need
    to open on the new machine. Anyone else run into this? Any global fixes?
    Thanks...
    Mac Pro 3.0, G5 2.5   Mac OS X (10.4.7)   Logic 7.2.2, 7.2.1

    I've had strangeness with the global tracks during a mix: the tempo track was causing notes to change pitch and bogging down MIDI. The time signature track works OK.

  • Auto alert mechanism for ATG scheduled job failure and BCC project failure

    Hello all,
    Could you please confirm whether there are auto-alert mechanisms for ATG scheduled job failures and BCC project failures?
    Waiting for reply.
    Thanks and regards,

    Hi,
    You need to write custom code to get alerts when an ATG scheduled job fails.
    For BCC project deployment monitoring, please refer to the below link in the documentation.
    Oracle ATG Web Commerce - Configure Deployment Event Listeners
    Thanks,
    Gopinath Ramasamy

  • Job Publication doesn't get changed data from Job Posting through workflow

    The Job Publication is not picking up data changed in the Job Posting/Requisition through workflow.
    When I change the data in the Job Posting and release it manually, the changed data is reflected in the Job Publication; but if the Job Posting is released through workflow (automatically), the Publication does not pick up the data. The workflows are otherwise working fine in the system, yet the problem persists.
    Thanks in advance for the reply.

    1. Log in to the portal with user ID and password.
    2. Create a requisition.
    3. Create and release the Job Posting (manually).
    4. Create and release the Job Publication (manually).
    5. Edit the previous Job Posting and save it, but do not release it manually. Back on the personal page, when we enter the same Job Posting again, its status has been set to "released" automatically by a workflow.
    6. Now if we proceed to the Job Publication and try displaying it, the edited changes to the Job Posting are not displayed.
    But if we have released the Job Posting manually, the changes are reflected in the Publication.
    The user wants to use the workflow scenario and also wants the edited changes to be picked up by the Publication.
    Hope the description above helps!
    thanks in advance.

  • Documents not sending saved changes

    I need help! When I open a PDF document in Adobe Reader, I can make changes, save the data, and email it to myself. But when I open the document from my email, my changes are not there. Please help!

    Tell us more details: operating system, email client, Reader version.
    How exactly are you proceeding?
    open a PDF from your local disk?  With Adobe Reader?
    make changes (what changes?), save it back to local disk?
    send the changed document via email?
    when you receive it, the changes are gone?

  • How to check job failure reasons in Prime Infrastructure

    We have PI version 1.3. I applied a CLI template and saw that the job's last run status is "Failure" in the Jobs Dashboard, but I cannot see any detailed information about why it failed. Is there any way to see the job failure reasons? Thanks.

    Thanks for the tip. Actually, I could not see that small circle in the Jobs Dashboard. I finally found out that I need to click on the job, then click on History; the small circle is there under History.

  • Changing the supervisor name also changes the employee's job

    Hi All,
    We are facing a problem with the employee assignment.
    When we change an employee's supervisor details, the employee's job also gets changed to a different job. Does anyone have any idea about this behavior?
    Thanks and Regards,
    Joshna.

    Hi Avaneesh,
    There are no personalizations enabled for this form. The changes are reflected only for a particular group, not for all employees.
    We have implemented jobs only; we have not implemented positions. Is there anywhere this group is linked that would cause this change?
    Thanks
    Joshna.

Maybe you are looking for

  • Slingbox working with Airport Express but now Back To My Mac broken

    I had Back To My Mac working a couple of months ago. I have a Mac Mini connected to a Time Capsule on my Roadrunner cable modem network, and I had a Slingbox Pro (SBP) connected to my DSL Airport Extreme(n) network. I carry a MacBook Air with me on t

  • Error in sending MMS

    Dear all, I need your help & expertise. I am using Nokia N97. My problem is I can't send MMS (photos) from my downloaded photos..and error come up. "Error: Packet data connection not available."   Is it we only can send the photos from captured photo

  • Generate bar codes in mail forms

    Hi, I know how to insert Bar codes in smartforms, but I need to print the bar codes in Mail Forms, is that possible? Thanks a lot, Nuno Moreira

  • Modifying sql query in "region source" depending on hidden parameter?

    Hi, Is there a way to run 2 sql queries in "region source" depending on the hidden parameter passed? In other words, if I want to show all employees "select * from emp" when clicking on a "total" link as opposed to "select * from emp where dept=:xxx"

  • The dvd or cd sharing is missing in settings/sharing. where can i find it?

    I just bought a Macbook Air and am trying to install Office for Mac on it using my iMac. So I went to Settings/sharing/ and am looking for DVD or DC sharing box to check but it is missing. It is available on my iMac, but not on the Air. I guess I nee