Information error in SNP Background planning

Hi,
While executing SNP planning in the background transaction, I am facing this information message again and again for different items:
Component <material code> used in  (000000011/000000010) level at location <location code> => No Planning
Could anyone throw some light on this?
Thanks and regards,
Nalin Agarwal

Hi Tibor,
Thank you for your response. This was the thing I was missing. In fact the setting was 10, but since the same product was getting planned multiple times, the iterations were exceeding 10, which is why it was showing the information message "no planning".
It would be great if you could help me understand better what is meant by these iterations. Let's say there is a semi-finished product "X" which is a BOM component for 200 products, and I am planning all 200 at the same time in the background run. Does this iteration mean that "X" would be planned again and again, 200 times, which would lead to this message?
Thanks
Nalin
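
For what it's worth, the classic MRP low-level-code idea behind such level limits can be sketched outside of SAP. In the sketch below (plain Java, all product names and BOM links hypothetical, not SAP code), a shared component receives a single low-level code equal to its deepest usage across all BOMs, so it is the depth of the BOM chains, not the number of parent products, that can push a product past a configured number of levels:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical BOM data: parent -> list of direct components.
// Illustrates only the classic low-level-code computation (deepest usage
// across all BOMs); it is not SAP's implementation of the SNP heuristic.
public class LowLevelCodeSketch {

    static final Map<String, List<String>> BOM = Map.of(
            "FERT_A", List.of("X"),
            "FERT_B", List.of("HALB_1"),
            "HALB_1", List.of("X"),
            "X",      List.of("ROH_1"));

    // Low-level code of a product = maximum depth at which it appears in any BOM path.
    static void assignLowLevelCode(String product, int depth, Map<String, Integer> codes) {
        codes.merge(product, depth, Math::max);
        for (String component : BOM.getOrDefault(product, List.of())) {
            assignLowLevelCode(component, depth + 1, codes);
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> codes = new HashMap<>();
        assignLowLevelCode("FERT_A", 0, codes); // finished products sit at level 0
        assignLowLevelCode("FERT_B", 0, codes);
        // "X" is used by two parents but gets ONE code: its deepest usage.
        codes.forEach((product, code) ->
                System.out.println(product + " -> low-level code " + code));
    }
}

Whether SNP's "iterations" map exactly onto this is something Tibor or the documentation would have to confirm; the sketch only shows that the level count is driven by BOM depth rather than by the number of parent products using the component.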

Similar Messages

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
    For example, if I use a profile with 4 parallel processes and run a network heuristic to process 5 location products, I get an incomplete planning result.
    Is this expected behaviour? What are the 'good practices' for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in case of external procurement, it must first plan the distribution center then the supplying plant, and in case of inhouse production, it must first plan the final product then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can mess up the planning sequence. For example, before the final product is planned in one block, the component may already have been planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually may be a good practice: put related location products in one job, and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada
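
    To illustrate the sequencing point above, here is a minimal sketch (plain Java, hypothetical product names, not SAP code) of how planning a low-level-code-sorted list sequentially covers every dependent requirement, while cutting the same list into independently running blocks can leave a component's freshly generated requirement unplanned:

    // Hypothetical three-level supply chain: FERT (finished) -> HALB (semi-finished) -> ROH (raw).
    // Sequential planning in low-level-code order covers every dependent requirement;
    // a chunked/parallel order can plan HALB before FERT has passed its requirement down.
    public class PlanningSequenceSketch {

        static class LocationProduct {
            final String name;
            final LocationProduct component; // direct BOM component, if any
            int openRequirement;             // quantity still to be covered
            int plannedReceipts;

            LocationProduct(String name, LocationProduct component) {
                this.name = name;
                this.component = component;
            }

            void plan() { // cover the own requirement and pass it down 1:1 to the component
                plannedReceipts += openRequirement;
                if (component != null) component.openRequirement += openRequirement;
                openRequirement = 0;
            }
        }

        public static void main(String[] args) {
            run("Sequential (low-level-code order)", new int[]{0, 1, 2});
            run("Chunked / parallel order", new int[]{1, 2, 0});
        }

        static void run(String title, int[] planningOrder) {
            LocationProduct roh  = new LocationProduct("ROH",  null);
            LocationProduct halb = new LocationProduct("HALB", roh);
            LocationProduct fert = new LocationProduct("FERT", halb);
            fert.openRequirement = 100;                    // demand on the finished product
            LocationProduct[] byLevel = {fert, halb, roh}; // index = low-level code

            for (int level : planningOrder) byLevel[level].plan();

            System.out.println(title);
            for (LocationProduct p : byLevel) {
                System.out.printf("  %-4s receipts=%3d uncovered=%3d%n",
                        p.name, p.plannedReceipts, p.openRequirement);
            }
        }
    }

    In the chunked run the semi-finished product ends up with an uncovered requirement of 100, which is the kind of incomplete result described in the original question.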

  • Error occurred during CTM planning run

    Hi folks,
    Appreciate your co-operations!
    I am facing the problem while running the CTM with the profile DEMO2.
    CTM Planning Run gives one error and alert.
    Error: Error occurred during CTM planning run
    Technical Data
    Message type__________ A (Cancel)
    Message class_________ /SAPAPO/CTM1 (CTM: Messgaes)
    Message number________ 401
    Problem class_________ 1 (very important)
    Number________________ 1
    Environment Information
    CTM Action____________ G
    Message type__________ A
    Alert: Internal error has occurred (<!> Segmentation fault)
    Technical Data
    Message type__________ E (Error)
    Message class_________ /SAPAPO/CTM1 (CTM: Messgaes)
    Message number________ 571
    Message variable 1____ <!> Segmentation fault
    Number________________ 1
    Environment Information
    CTM Action____________ G
    Message type__________ C
    Log file display
    <i> 04:37:59 optsvr_main.cpp(1363) 'SuperVisor' => Commandline : 4 respected parameters ...
    Args:
    m0001006
    sapgw04
    28812935
    IDX=1
    <i> 04:37:59 optsvr_main.cpp(645) 'SuperVisor'  * SAP APO CTM Engine [CTM/ctmsvr]
    <i> 04:37:59 optsvr_main.cpp(646) 'SuperVisor'  * Copyright © SAP AG 1993-2009
    <i> 04:37:59 optsvr_main.cpp(647) 'SuperVisor'  *
    <i> 04:37:59 optsvr_main.cpp(648) 'SuperVisor'  * Version        : 7.0_REL SP05, 407661, Nov 25 2009 22:59:47
    <i> 04:37:59 optsvr_main.cpp(649) 'SuperVisor'  * Platform       : ntamd64/x64
    <i> 04:37:59 optsvr_main.cpp(650) 'SuperVisor'  * Interface      : 2.0
    <i> 04:37:59 optsvr_main.cpp(651) 'SuperVisor'  * Build date     : Nov 25 2009 22:59:47 [1259186387]
    <i> 04:37:59 optsvr_main.cpp(652) 'SuperVisor'  * Build machine  : PWDFM163
    <i> 04:37:59 optsvr_main.cpp(653) 'SuperVisor'  * Latest change  : 407661
    <i> 04:37:59 optsvr_main.cpp(654) 'SuperVisor'  * NW release     : 7100.0.3300.0
    <i> 04:37:59 optsvr_main.cpp(655) 'SuperVisor'  * Perforce branch: 7.0_REL
    <i> 04:37:59 optsvr_main.cpp(656) 'SuperVisor'  *
    <i> 04:37:59 optsvr_main.cpp(676) 'SuperVisor'  * Hostname       : m0001006
    <i> 04:37:59 optsvr_main.cpp(677) 'SuperVisor'  * OS version     : 5.2.3790 (WinServer2003, NTAMD64) SP2.0 (Service Pack 2), SERVER ENTERPRISE TERMINAL SINGLEUSERTS
    <i> 04:37:59 optsvr_main.cpp(678) 'SuperVisor'  * PID            : 6768
    <i> 04:37:59 optsvr_main.cpp(683) 'SuperVisor'  * CWD            : D:\usr\sap\SC6\DVEBMGS04\log
    <i> 04:37:59 optsvr_main.cpp(684) 'SuperVisor'  *
    <i> 04:37:59 core_sysinfo.cpp(453) 'SuperVisor' * free disk space: 190433 MB
    <i> 04:37:59 core_sysinfo.cpp(454) 'SuperVisor' *
    <i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' * Memory information:
    <i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' *   physical memory: 10238 MB total, 6511 MB available [63% free]
    <i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' *   page file      : 73212 MB total, 60889 MB available [83% free]
    <i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' *   virtual memory : 8388607 MB total, 8388499 MB available [99% free]
    <i> 04:37:59 optsvr_main.cpp(693) 'SuperVisor'  *
    <i> 04:37:59 optsvr_main.cpp(783) 'SuperVisor' * running in invoke mode
    <i> 04:37:59 optsvr_rfcconnection.cpp(871) 'MsgMgr' <RFC> RfcPing(RFC_HANDLE=1) received in thread#6912
    <i> 04:37:59 optsvr_rfcconnection.cpp(692) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PARAM_SET' for sending of parameters/options
    <i> 04:37:59 optsvr_rfcconnection.cpp(703) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PARAM_GET' for receiving of parameters/options
    <i> 04:37:59 optsvr_rfcconnection.cpp(712) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PROGRESS' for progress informations
    <i> 04:37:59 optsvr_rfcconnection.cpp(721) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_MESSAGE' for messages
    <i> 04:37:59 optsvr_rfcconnection.cpp(730) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_RESULT' for (intermediate) result informations
    <i> 04:37:59 optsvr_rfcconnection.cpp(739) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_SYSINFO' for system informations
    <i> 04:37:59 optsvr_rfcconnection.cpp(748) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PERFINFO' for performance informations
    <i> 04:37:59 optsvr_rfcconnection.cpp(1269) 'MsgMgr' <RFC> skipping empty profile value [GENERAL] sPROFILE_CUST_ID
    <i> 04:37:59 optsvr_rfcconnection.cpp(1835) 'MsgMgr'
    Sender/Receiver RFC_HANDLE#1:
    <RFC> * RFC connection attributes:
      Own Host    : m0001006
      Partner Host: m0001006
      Destination : OPTSERVER_CTM01
      Program Name: SAPLRCC_COMM_ENGINE
      SystemNr    :              04       SystemId    : SC6
      Client      :             700       User        : MBATCHA    
      Language    :               E       ISO Language: EN
      CodePage    :            1100       Partner CP  : 1100
      Kernel Rel. :            701        Partner Rel.: 701
      Own Release :            711        CPIC ConvId : 28812935
      Own Type    :               E       PartnerType : 3
      Trace       :                       RFC Role    : S
    <RFC> * RFC statistic information:
      number of calls        : 7
      number of received data: 10569
      number of sent data    : 1349
      overall reading time   : 9073
      overall writing time   : 162
    <i> 04:37:59 optsvr_main.cpp(1110) 'SuperVisor' * Starting MainScript ...
    <i> 04:37:59 optsvr_main.cpp(1445) 'SuperVisor'
    ***************************** OPTSVR - OPTIONS ***************************** *
    [CTM_PROFILE]
    nCTMENGINEPACKAGESIZE = 500
    sCOMPONENT = SCM
    sCTMLOGFILE = ctm.DEMO2.0000_0001.20091201043758.log
    sCTMLOGFLAG = 0
    sCTMPROFILE = DEMO2
    sRELEASE = 700
    [general]
    bUNICODE = true
    nSLOT_MAXIMUM = 1
    nSLOT_MINIMUM = 1
    nSLOT_RESERVED = 1
    sAPO_RELEASE = 700
    sAPPLICATION = CTM
    sExeDir = d:\apoopt\ctm\bin
    sExeName = ctmsvr.exe
    sHOST = m0001006
    sInvokeMode = invoke
    sLANGU = E
    sMANDT = 700
    sPRODUCT_NAME = APO
    sPRODUCT_PATCHLEVEL = 0001
    sPRODUCT_RELEASE = 700
    sPROFILE = DEMO2
    sSESSION = tju5Bmz21}6WVG0Sn6pv3W
    sSYSTEM = SC6
    sUNAME = MBATCHA
    [init]
    sSECTION0001 = INIT
    sSECTION0002 = GENERAL
    sSECTION0003 = PASSPORT
    sSECTION0004 = CTM_PROFILE
    [PASSPORT]
    bIS_REMOTE = false
    nACTION_TYPE = 1
    nSERVICE = 1
    sACTION = /SAPAPO/CTMB
    sPRE_SYSID = SC6
    sSYSID = SC6
    sTRANSID = 2205DEDE7A5BF16DA07D001CC46CF90E
    sUSERID = MBATCHA
    ************************** OPTSVR OPTIONS - END **************************** *
    <i> 04:37:59 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 802 to OutputInterface from []
    <i> 04:37:59 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CTMModelGenerator' [6]->download
    <i> 04:37:59 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 806 to OutputInterface from [MG]
    <i> 04:37:59 ctm_modelgen.cpp(166) 'CTMModelGenerator' ======================================================================
    <i> 04:37:59 ctm_modelgen.cpp(167) 'CTMModelGenerator' MG::download
    <i> 04:37:59 core_msgmgr.cpp(1110) 'MsgMgr' renaming tracefile
    <i> 04:37:59 core_msgmgr.cpp(1111) 'MsgMgr' old name: optsvr_trace20091201_043759_1a70.log
    <i> 04:37:59 core_msgmgr.cpp(1112) 'MsgMgr' new name: ctm.DEMO2.20091201_043759_1a70.log
    logfile reopened : Tue Dec 01 04:37:59 2009
    logfile name     : ctm.DEMO2.20091201_043759_1a70.log
    <i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
    <i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PRDAT_RFC_READ
    <i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PLPAR_RFC_READ
    <i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_LOC_RFC_READ
    <i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PPM_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_TRANS_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_RES_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_SSTCK_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_CAL_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PLPER_RFC_READ
    <i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_INCMD_RFC_READ
    <i> 04:38:03 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
    <i> 04:38:03 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_DEM_RFC_READ
    <i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_SUP_RFC_READ
    <i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_UCMAP_RFC_READ
    <i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
    <i> 04:38:04 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 810 to OutputInterface from [MG]
    <i> 04:38:04 ctm_modelgen.cpp(735) 'CTMModelGenerator' MG::download done
    <i> 04:38:04 ctm_modelgen.cpp(736) 'CTMModelGenerator' ======================================================================
    <i> 04:38:04 core_supervisor.cpp(750) 'SuperVisor' <M> Returning from module 'CTMModelGenerator' [6]->download = success [ctx size : 1]
    <i> 04:38:04 core_supervisor.cpp(692) 'SuperVisor' <SCR> Starting script 'CTM Solve' with 9.22337e+012 seconds left
    <i> 04:38:04 core_supervisor.cpp(692) 'SuperVisor' <SCR> Starting script 'CTM Match' with 9.22337e+012 seconds left
    <i> 04:38:04 ctm_executionmanager.cpp(102) 'SuperVisor' ======================================================================
    <i> 04:38:04 ctm_executionmanager.cpp(103) 'SuperVisor' statistics:
    <i> 04:38:04 ctm_executionmanager.cpp(104) 'SuperVisor' number of demands: 7
    <i> 04:38:04 ctm_executionmanager.cpp(105) 'SuperVisor' ======================================================================
    <i> 04:38:04 ctm_executionmanager.cpp(107) 'SuperVisor' ======================================================================
    <i> 04:38:04 ctm_executionmanager.cpp(108) 'SuperVisor' parameters:
    <i> 04:38:04 ctm_executionmanager.cpp(118) 'SuperVisor' time continuous planning
    <i> 04:38:04 ctm_executionmanager.cpp(125) 'SuperVisor' backward scheduling
    <i> 04:38:04 ctm_executionmanager.cpp(184) 'SuperVisor' CBCLP enabled
    <i> 04:38:04 ctm_executionmanager.cpp(457) 'SuperVisor' ======================================================================
    <i> 04:38:04 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CtmEngine' [7]->run
    <i> 04:38:04 ctm_executionmanager.cpp(523) 'SuperVisor' ======================================================================
    <i> 04:38:04 ctm_executionmanager.cpp(524) 'SuperVisor' EM::execute for packet 1
    <i> 04:38:04 ctm_executionmanager.cpp(1570) 'SuperVisor' EM::execute for packet 1 done
    <i> 04:38:04 ctm_executionmanager.cpp(1571) 'SuperVisor' ======================================================================
    <i> 04:38:04 core_supervisor.cpp(750) 'SuperVisor' <M> Returning from module 'CtmEngine' [7]->run = success [ctx size : 1]
    <i> 04:38:04 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CTMModelGenerator' [6]->upload
    <i> 04:38:04 ctm_modelgen.cpp(1097) 'CTMModelGenerator' ======================================================================
    <i> 04:38:04 ctm_modelgen.cpp(1098) 'CTMModelGenerator' MG::upload of packet 1
    <e> 04:38:05 ctmsvr_script.cpp(229) 'SuperVisor' <!> STRING EXCEPTION : <!> Segmentation fault
    <i> 04:38:05 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
    <i> 04:38:05 optsvr_main.cpp(1166) 'MsgMgr' Current check values:
    [CHECK_EQUAL]
    [CHECK_UPPERBOUND]
    nPEAK_MEMORY_NTAMD64 = 45344
    [CHECK_LOWERBOUND]
    <i> 04:38:05 optsvr_main.cpp(1209) 'MsgMgr' Performance values:
    bSuccess     false
    nCPU_TIME     0
    nPEAK_MEMORY     45344
    nPEAK_VIRTUAL_BYTES     141844
    nREAL_TIME     6
    tracefile     ctm.DEMO2.20091201_043759_1a70.log
    <i> 04:38:05 optsvr_main.cpp(1235) 'MsgMgr' Performance Monitor values:
    ENGINE_VERSION     7.0_REL SP05, 407661, Nov 25 2009 22:59:47
    nCPU_TIME     0
    nHD_FREESPACE     190433
    nPEAK_MEMORY     45344
    nPEAK_VIRTUAL_BYTES     141844
    nREAL_TIME     6
    <i> 04:38:05 optsvr_dsr.cpp(96) 'MsgMgr' <writeDSRdata> tracing not active => no DSR written
    <i> 04:38:05 optsvr_main.cpp(1256) 'SuperVisor'
    Finished->FAILED ...
    <i> 04:38:05 core_memmgr.cpp(564) 'MsgMgr'   transferring memory of heap 6912 to main heap
    <i> 04:38:05 core_memmgr.cpp(606) 'MsgMgr'   finished transfer of heap 6912
    <i> 04:38:05 optsvr_rfcconnection.cpp(1835) 'MsgMgr'
    Sender/Receiver RFC_HANDLE#1:
    <RFC> * RFC connection attributes:
      Own Host    : m0001006
      Partner Host: m0001006
      Destination : OPTSERVER_CTM01
      Program Name: SAPLRCC_COMM_ENGINE
      SystemNr    :              04       SystemId    : SC6
      Client      :             700       User        : MBATCHA    
      Language    :               E       ISO Language: EN
      CodePage    :            1100       Partner CP  : 1100
      Kernel Rel. :            701        Partner Rel.: 701
      Own Release :            711        CPIC ConvId : 28812935
      Own Type    :               E       PartnerType : 3
      Trace       :                       RFC Role    : S
    <RFC> * RFC statistic information:
      number of calls        : 116
      number of received data: 420457
      number of sent data    : 39262
      overall reading time   : 5.30093e+006
      overall writing time   : 3831
    <i> 04:38:05 optsvr_main.cpp(1332) 'MsgMgr'
    OptimizeServer says  GOOD BYE
    Please help me to resolve this issue.
    Thanks & Regards,
    Khadar

    Hi Khadar,
    1) The information you have provided is the CTM optimiser log.
    Run the job in the background, then in SM37 click on the job log and
    analyse the exact error that happened. If you are not able to do this,
    please provide the error log.
    2) Check that liveCache is stable in its operations when the job
    runs (check with the Basis team)
    3) Run a consistency check for master data before the CTM run
    4) Check for any stuck queues, clear them, and rerun
    5) If you suspect more inconsistencies in the system, run the liveCache
    consistency check and rerun CTM
    Regards
    R. Senthil Mareeswaran.

  • Error while updating the plan in Enterprise link

    hi
    I am getting an error while updating the plan, which polls the JMS topic and inserts the data into the grid.
    IMessageSourceReceiver->messageReceive: javax.jms.JMSSecurityException: JMS-232: An invalid user/password was specified for the JMS connection
         at oracle.jms.AQjmsDBConnMgr.checkForSecurityException(AQjmsDBConnMgr.java:916)
         at oracle.jms.AQjmsDBConnMgr.getConnection(AQjmsDBConnMgr.java:601)
         at oracle.jms.AQjmsDBConnMgr.<init>(AQjmsDBConnMgr.java:238)
         at oracle.jms.AQjmsConnection.<init>(AQjmsConnection.java:183)
         at oracle.jms.AQjmsTopicConnectionFactory.createTopicConnection(AQjmsTopicConnectionFactory.java:209)
         at iteration.enterpriselink.sources.JMSConsumer.start(JMSConsumer.java:94)
         at iteration.enterpriselink.sources.JMSMessageSourceReceiverImpl.jmsConsumerStart(JMSMessageSourceReceiverImpl.java:1001)
         at iteration.enterpriselink.sources.JMSMessageSourceReceiverImpl.messageReceive(JMSMessageSourceReceiverImpl.java:326)
    [Oracle BAM Enterprise Link error code:  0x75 -- 0x1, 0x75 -- 0x3A]
    Error during Message Receive operation.
    [Oracle BAM Enterprise Link error code:  0x75 -- 0x1, 0x75 -- 0x3B]
    Error while processing the data for the step 'Oracle BAM Enterprise Message Receiver'
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0x83]
    Error while processing the data for the step 'Oracle BAM Enterprise Message Receiver'
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0x83]
    Update of Plan "Untitled, created 3/11/2009 3:56:28 PM" failed.
    [Oracle BAM Enterprise Link error code:  PlanMgr -- 0x1, PlanMgr -- 0xD5]
    The username and password that I have given are valid, and I have given the user the privileges that were mentioned in the technote. Can somebody help me out in resolving this, as it is very critical for me right now?
    Thanks in advance
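
    As a hedged aside (generic javax.jms API only; the JNDI name and credentials below are placeholders, not values from this thread): the JMS-232 error refers to the user/password pair that is handed to createTopicConnection, so a tiny standalone client like the sketch below can confirm whether those credentials are accepted outside of Enterprise Link.

    import javax.jms.TopicConnection;
    import javax.jms.TopicConnectionFactory;
    import javax.naming.InitialContext;

    // Minimal credential check against a topic connection factory.
    // The JNDI lookup name, user and password are placeholders.
    public class JmsCredentialCheck {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory factory =
                    (TopicConnectionFactory) ctx.lookup("jms/ExampleTopicConnectionFactory");
            // JMS-232 means the provider rejected the pair passed here:
            TopicConnection connection = factory.createTopicConnection("bam_user", "bam_password");
            connection.start();
            System.out.println("Credentials accepted.");
            connection.close();
        }
    }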

    I also get an error while updating content in the workflow through the checkout option as a reviewer, i.e. the contributor checks in the content, then the reviewer either updates the metadata or checks out and modifies the content, and while checking it back in the following error occurs:
    Content Server Request Failed
    Unable to update the content item information for 'HO000128'.
    The content ID must be specified.
    Please help to resolve.
    Thanks in advance
    Prasad

  • Error in triggering background job

    Hi, I've been facing an error while a background job is being scheduled. The scenario is something like this ...
    1) A third-party scheduling system triggers a SAP job and monitors the progress.
    2) In SAP, the master job is copied with a different user name and then released for processing.
    3) There are multiple jobs which get triggered, but there is one job which is causing a problem.
    4) The job makes a copy and stays in scheduled status without getting released. When I try to manually release the job it gives me an error that it cannot create a record in the database.
    5) Upon checking the system log it gives me this - Error: INSERT background sched. table(job $$$$).
    There is already an entry for the specified job in the table TBTCS.
    6) While checking the TBTCS table I've found multiple entries there. Should this table be empty, or should it contain information about the runtime of this job?
    Please help, as this background job is causing concerns in the archiving processes defined.
    Thanks in advance for your help.
    Arvind.

    The earlier problem with the background job was solved by changing the output device assigned to user WF-BATCH.
    Regarding the transport request:
    When I tried to execute the step "Schedule bakgrd for missed deadlines" manually, it gave me the option SAVE AND SCHEDULE, which was creating the transport request. But when I executed it automatically it worked fine without asking me for a request ... I don't know why )-: ... Probably we can specify an interval different from the standard of three minutes, and that is what would be transported (it's just a guess). I have executed it automatically ...
    Well, thanks all for your help.

  • Processing BDocs Errors in the background

    Hi,
    I was wondering whether the following is possible.
    We currently have BDoc errors in SMW01 on a regular basis.  This is because our users are opening the BP record in both CRM and R/3.  The BDoc error will then have a message saying that the business partner is currently being processed by XXXX.  We can reprocess these BDocs and they are processed successfully.  What we would like to do is process these BUPA_MAIN BDoc errors in the background so we don't need to go into them manually.  I understand the program for transaction SMW01 is RSMW_SHOW_BDOC.  Has anyone ever taken a copy of this program and modified it so that it can process the above BDoc errors automatically in the background?  We are looking into this as our business process allows the user to launch a record in both CRM and R/3.  (Don't ask)
    Anyway if anyone has done anything similar - any tips would be much appreciated. 
    Thanks in advance.
    Regards
    JoJo

    Hi Jojo,
    I have not done anything similar but can point you in the right direction.
    You can refer to class cl_smw_mflow, which handles the middleware for messaging BDocs.  You can write a bespoke program from scratch with some selection parameters and process the messaging BDocs by looking at their status. There is a function module 'SMW3_MFLOW_QPROCESSMBDOC' which allows you to process the BDocs by passing the header information.
    You can read the status of BDocs and other header information from table SMW3_BDOC.
    Hope this helps.
    Surendar

  • Error logging for Integrated Planning

    Can I turn on error logging for Integrated Planning.  Users get errors and forget to screenshot it to me, so I have no way of finding out what their error was unless I get them to duplicate it.  It would be nice to look up a log of errors by user.  Do they have something like that in Integrated Planning?
    Thanks!

    Hi Dustin,
    A few comments:
    1. UPDATE TASK means that a new background task is created and the function is performed asynchronously, so the user does not wait for the update of the DB to finish; you can read in the SAP help about CALL FUNCTION ... IN UPDATE TASK.
    2. Be aware that the table you create will become extremely large in no time; you will have to delete data on a regular basis.
    3. The same concept from the thread can be applied to function module 'RRMS_MESSAGE_HANDLING'. This function module is responsible for messaging in BEx; sometimes IP-related messages are invoked from it.
    Regards,
    Eitan.

  • Issue in capturing in transit in SNP interactive planning book

    Hi,
    We are facing an issue in capturing in-transit stock in the SNP interactive planning book.
    We are using the standard planning area (9ASNP02), on which we have built a planning book which is a copy of the standard planning book. Earlier, the standard planning area contained key figure 9AITRAN (in transit) attached to category group IT1, which contained categories AH and EI.
    As our in-transit stock comes under category CS, we added that category to category group IT1. After adding it, the in-transit quantity is still not visible in the in-transit key figure in the planning book, but when we check the details of the cell the value is present with start and end date 01.01.1970, whereas in the RRP3 view the in-transit quantity appears under the current date.
    Before adding category CS to group IT1 the in-transit quantity did not appear at all; now, after adding the category, it is still not visible in the cell but is present in the detail view with wrong dates.
    Regards,
    Vrushali.

    HI,
    The stock-in-transit key figure will display only EI orders (in case of a VMI
    scenario) and AH orders (if you create an inbound delivery).
    If you integrate stock, only CS orders will be available in /SAPAPO/RRP3.
    The root of the problem is the setting of the in-transit key figure:
    from a planning area management point of view it reflects
    the setting used in the standard SNP planning area, but the customizing
    of the category group used there is different from the one used in
    standard systems.
    You added category CS there, but category CS has ATP category
    type 0, which actually means stock. In the SNP planning books, by design,
    stocks are read separately from other orders because they have to be
    considered in the ATD computation.
    Therefore, if you need to read orders with category CS, you should
    include it in the category group assigned to the ATD receipts key
    figure and to the stock key figure.
    This customizing can be applied at the planning area level, as
    described above, or at the master location level, on the SNP tab.
    There you can specify the category groups which are to be
    used to consider stocks, ATD receipts and ATD issues for all
    products assigned to that master location. Please refer to the
    related F1 help for more information.
    You can find more information in note 591310.
    Regards,
    Sunitha

  • Error while updating a plan

    Hi,
    I have encountered an error while updating a plan. While I update the plan, the grid gets filled but it throws a message stating
    "Maximum Number of Result Blocks has been reached"
    Maximum number of Result Blocks has been reached.
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0xD4]
    Error while processing the data for the step 'Grid'
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0x83]
    Maximum number of Result Blocks has been reached.
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0xD4]
    Error while processing the data for the step 'Grid'
    [Oracle BAM Enterprise Link error code:  DC -- 0x1, DC -- 0x83]
    Update of Plan "deposittest created 7/19/2007 5:11:50 PM" failed.
    [Oracle BAM Enterprise Link error code:  PlanMgr -- 0x1, PlanMgr -- 0xD5]
    The data update stops after row count 49900.
    Please help to resolve this error.
    Thanks in advance.
    Regards,
    Lathika

    Hi,
    in the BAM Administrator you should raise the parameter "MaxResultBlocks".
    BAM was originally licensed software from Group 1 named "Sagent Data Flow".
    This may help when searching the web.
    Greetz,
    GOI

  • Internal error occurs in background job scheduling

    Hi Experts,
    We are facing the error message "Internal error occurs in background job scheduling" while trying to execute a custom report (Z report) in the background in SA38.
    Please find the following observation on our side on this message.
    1) This message is coming only for this one report, not for others.
    2) The SU53 screenshot shows that the SE38 check failed, but the weird thing is that this does not happen for other reports.
    3) Users having SE38 authorization are able to run this report.
    Please advise.
    Thanks in advance,
    Viven

    What is the message ID and number? Have you tried OSS search and debugging?
    What does this program do, in a nutshell?

  • Getting an error while activating a planning area "Enter values for planning horizon From and planning horizon To for the storage time profile level"

    Dear S&OP community,
    I am getting the following error while creating a planning area in a newly installed sandbox: "Enter values for planning horizon From and planning horizon To for the storage time profile level".
    This is what I did...
    1) Created new attributes and master data objects and activated them successfully.
    2) Created and activated the time profile successfully.
    3) Tried to create the planning area by assigning the time profile from step 2 and the master data from step 1. I am unable to save the data and the system returns
    this error - "Enter values for planning horizon From and planning horizon To for the storage time profile level".
    My understanding is that the time profile needs to be active but doesn't have to have values...
    Any help is appreciated.
    Thanks,
    Krishna

    YS,
    Here are my time profile settings
    Level   Name        Display Horizon - Past   Display Horizon - Future
    1       Monthly     -6                       11
    2       Quarterly   -2                       3
    3       Yearly      -1                       2
    The time profile is active, but time profile data is not loaded.
    Thanks,
    Krishna

  • Please tell what to do if I see this error while opening a plan or while saving this plan?

    Error message: the enterprise global already contains a table named “Enterprise Entry”.
    Do you want to replace the table with the one from the enterprise global, replace all items with duplicates, rename the table in the project, or cancel opening the project?
    Please tell me what to do if I see this error while opening a plan or while saving this plan.

    A-K-J --
    The error message indicates that your Project Server administrator has accidentally polluted the Enterprise Global file with local views and/or tables.  The solution does not lie with the user.  It does not really matter what button the user clicks
    in the dialog, since only the Project Server administrator can solve this problem.  To solve the problem, the Project Server administrator must do the following:
    Open the Enterprise Global file for editing.
    Click File > Organizer.
    On the right side of the Organizer dialog (in the Enterprise Global file), delete any non-enterprise views.  There should be NO views named Gantt Chart, Resource Sheet, etc.  The only default enterprise view is named Enterprise Gantt Chart, which
    should not be deleted.
    Click the Tables tab.
    On the right side of the Organizer dialog, delete any non-enterprise tables.  There should be NO tables named Entry, Work, Cost, etc.  The only default enterprise table is named Enterprise Entry, which should not be deleted.
    Click the Close button.
    Save the Enterprise Global file.
    Close the Enterprise Global file.
    Close Microsoft Project, relaunch the software, and log into Project Server again.
    After completing the above steps, every PM should also exit Microsoft Project and relaunch the software again.  Hope this helps.
    Dale A. Howard [MVP]

  • Error in Conversion of Planned order to Production Order

    Hi,
    I am getting error while converting the planned order to Production order.
    Error is " Scheduling parameters not defined for the production orders"
    I have maintained the parameters in Prod -> Shop Floor Control -> Operations -> Scheduling, but even after that the same error is coming.
    I have also maintained them in Capacity Requirements Planning.
    Kindly assist me where I can define the scheduling parameters for the production orders in MTO cycle.
    Regards
    Toshi

    Hi ,
    My production order has been created, but now I am getting an error while releasing it.
    It says "No checking rule maintained for operation".
    I have maintained the checking rule in OPJK, but I am still getting the same error.
    Kindly assist me for the above error.
    Regards
    Toshi

  • What is "Missing Port Information" error?

    Setting FINEST log level on JAXRPC in the AS8 2004Q04 Beta reveals nothing.
    Searches on this forum provide only 3 matches for "Missing port information" (this one makes 4). In http://forum.java.sun.com/thread.jsp?forum=136&thread=508958 the same problem that I have is described, and the problem is left unresolved, despite a plea from a second user. (I wonder what happened to these people... Did they sell short or become .NET developers? Neither is an answer for me.)
    Searches in the bug parade show that this error is generic: in 6157880, the classpath was not set correctly. In 4859401, a service used a J2SE 1.4 API in a 1.3 environment. In 4802443 there is a getter for a public field and so JAX-RPC was confused. There are more matching bugs, but I'm not listing them all here. My point is that there doesn't seem to be a single root cause for this problem.
    So, logs don't help, forum doesn't have answer, bug parade is too noisy. This problem seems unresolved here on Sun's web pages, but here I try again:
    Here are some questions:
    1) What is "port information"?
    2) From where is it missing? Where does AS8 expect to find it?
    3) Anyone have a strategy on how to debug this?
    I have Axis TCPMON tracing the soap messages and can verify that AS8 originates the error, not the client. The limited information from the server log verifies this too:
    [#|2004-11-01T16:28:20.008-0500|SEVERE|sun-appserver-pe8.1|javax.enterprise.resource.webservices.rpc.server.http|_ThreadID=11;|JAXRPCSERVLET22: no endpoint specified|#]
    Here is the output of a client that dumps the exception:
         [java] Endpoint address = http://Stratocaster:8080/ym/ym-alias
         [java] javax.xml.rpc.soap.SOAPFaultException: JAXRPCSERVLET28: Missing port information
         [java]     at com.sun.xml.rpc.client.StreamingSender._raiseFault(StreamingSender.java:515)
         [java]     at com.sun.xml.rpc.client.StreamingSender._send(StreamingSender.java:294)
         [java]     at svcp.XXXAPIR0501_Stub.create2(XXXAPIR0501_Stub.java:282)
         [java]     at SubClient.main(SubClient.java:31)

    I believe I've gotten to the bottom of this error and will report it here in case there is anyone that is also having this problem and in case there is anyone that can make sense of it.
    The alias and endpoint fields must be equal.
    The deploy tool allows you to enter an alias and an endpoint for your web service. I'll show you the user interface specifics below. The data you enter in these fields shows up in the wsdl, the web.xml and the sun-web.xml, and I'll show which elements below.
    Except for the leading '/' in the alias field (the '/' is inserted by deploytool), the alias and the endpoint string must be identical: if '/imp-alias' is the alias value, the endpoint must be 'imp-alias'.
    If they are not equal and your client uses the alias value in endpoint address, you will get "Missing port information" error.
    If they are not equal and your client uses the endpoint value in endpoint address, you will get '404 Not Found' error.
    For example, let's say I set the alias to 'api-alias' and the endpoint to 'api-endpoint'. Using my browser, I can get the wsdl with http://localhost:8080/context/api-alias?wsdl (where context is also a value set in the deploytool GUI). The returned wsdl will show that the soap:address location attribute is: http://localhost:8080/context/api-endpoint. If your client uses the wsdl to generate stubs, the default endpoint is this location and your client will get the '404 Not Found' error. If your client provides an endpoint address by setting the ENDPOINT_ADDRESS_PROPERTY to http://localhost:8080/context/api-alias, your client will get the "Missing port information" error.
    You get to the alias and endpoint fields in deploytool from the File panel on the left. Open the subfolder that represents your war file. There will be your web service. Select the web service and the main panel will change to contain about 4-6 tabs. One tab labelled Aliases another tab labelled Endpoint.
    The alias field shows up in the web.xml file like this:
    <servlet-mapping>
    <servlet-name>APIImpl</servlet-name>
    <url-pattern>/api-alias</url-pattern>
    </servlet-mapping>
    The endpoint field shows up in the web.xml file under the same servlet-mapping element as the alias:
    <servlet-mapping>
    <servlet-name>APIImpl</servlet-name>
    <url-pattern>/api-endpoint/__container$publishing$subctx/*</url-pattern>
    </servlet-mapping>
    The endpoint field shows up in the wsdl in the soap:address element's 'location' attribute:
    <soap:address location="http://localhost:8080/context/api-endpoint" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"/>
    The endpoint field shows up in the sun-web.xml file:
    <endpoint-address-uri>api-endpoint</endpoint-address-uri>
    John
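
    A hedged illustration of the client side of this (all namespace, service, and port names below are hypothetical placeholders, not taken from John's deployment): javax.xml.rpc.Stub.ENDPOINT_ADDRESS_PROPERTY is the property the client overrides, and per the explanation above the URL it is set to must be the alias URL, not the separately entered endpoint value:

    import java.net.URL;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import javax.xml.namespace.QName;
    import javax.xml.rpc.Service;
    import javax.xml.rpc.ServiceFactory;
    import javax.xml.rpc.Stub;

    public class EndpointAddressExample {

        // Hypothetical service endpoint interface standing in for the generated one.
        public interface API extends Remote {
            String create2(String request) throws RemoteException;
        }

        public static void main(String[] args) throws Exception {
            ServiceFactory factory = ServiceFactory.newInstance();
            Service service = factory.createService(
                    new URL("http://localhost:8080/context/api-alias?wsdl"),
                    new QName("urn:example", "APIService"));   // hypothetical WSDL names
            API port = (API) service.getPort(
                    new QName("urn:example", "APIPort"), API.class);
            // The address must match the alias (the servlet url-pattern),
            // not the endpoint value shown in the soap:address location:
            ((Stub) port)._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY,
                    "http://localhost:8080/context/api-alias");
            System.out.println(port.create2("ping"));
        }
    }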

  • Error occurred during deployment plan generation

    When I was publishing my project I got the error below:
    An exception occurred when deploying the database for the application. An error occurred during deployment plan generation. Deployment cannot continue.
    Visual Studio 2011
    Microsoft SQL 2008
    Firewall on server is off

    A web application? Or a desktop application?
    Yann - LightSwitch Central
