SNP Optimiser issue (Annual Planning Run)
Dear Friends,
My fiscal year is 01.04.2010 to 31.03.2011, and I am now trying to run the annual planning run for the current FY.
After the run, SNPOPLOG shows all demand as unfulfilled. If I run for the next FY, it plans.
I want to run for the current FY. What settings do I need to make?
sree
What is the time bucket profile of the data view in which you are running the optimizer? If it is a single yearly bucket, you cannot run the optimizer in the current year (your first bucket cannot have its start date in the past).
Choose 12 monthly buckets and run planning. You may still 'view' the planning results in a single annual bucket.
Even if you do this, there will be no plan in the current month; the system will plan from the next month onwards.
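Nitin's bucket rule can be sketched in a few lines (a hypothetical illustration of the rule only, not an SAP API; the helper names are invented):

```python
from datetime import date

def monthly_buckets(fy_start: date, months: int = 12):
    """Return (start, end-exclusive) pairs for monthly buckets of a fiscal year."""
    buckets = []
    y, m = fy_start.year, fy_start.month
    for _ in range(months):
        start = date(y, m, 1)
        m += 1
        if m > 12:
            y, m = y + 1, 1
        buckets.append((start, date(y, m, 1)))
    return buckets

def plannable_buckets(buckets, today: date):
    """The optimizer ignores buckets whose start date lies in the past."""
    return [b for b in buckets if b[0] >= today]

fy = monthly_buckets(date(2010, 4, 1))
# A single annual bucket starts in the past -> nothing is plannable.
annual = [(date(2010, 4, 1), date(2011, 4, 1))]
print(len(plannable_buckets(annual, date(2010, 4, 13))))  # 0
# With 12 monthly buckets, planning starts from the next full month.
print(len(plannable_buckets(fy, date(2010, 4, 13))))      # 11
```

This is why the annual bucket produces only unfulfilled demand while monthly buckets plan from the following month.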
Hope this helps.
Regards,
Nitin
Edited by: Nitin Thatte on Apr 13, 2010 6:01 PM
Similar Messages
-
Hi Experts
I am facing the following problems in PP/DS:
1) How do we align the backward planning logic in PP/DS with our practice of holding a minimum stock balance?
2) Some production plans set in SCM are automatically advanced to the previous week before production orders are generated.
Please give your suggestions on the above issues.
Regards
Rave
Hi Rave,
Please be specific and descriptive when you post your questions.
1) How to align the backward logic in PPDS with our practice to minimum stock balance.
Any planning run tries to create a receipt equal to the requirement for an order. I guess you are really asking about maintaining the right lot sizes so that the receipts are not bigger than the requirements. Try lot size grouping by days with a day or week interval. Lot-for-lot and fixed lot sizes both end up creating more order quantity than the requirement. Also try the PP/DS heuristic MRP with shelf life, which considers the shelf life of products. What works and what does not really depends on your business; there is no general rule.
2) Some production plan set in SCM is getting advanced to previous week automatically before production orders are generated.
I don't understand what you are talking about. Please describe your issue. One of us here may be able to help.
Thanks. -
End Planning Run After Order Selection - issue not clearing down all plans
Hi,
We are using the CTM Profile with "End Planning Run After Order Selection" ticked in the basic settings tab in order to clear down previous CTM runs. So we have a CTM deletion profile.
The issue is that it requires more than one execution to clear down the Planned Orders / Purchase Requisitions. You execute the Deletion CTM Profile once and it will clear down some of the Purchase Requisitions and Planned Orders and leave some still in place. You then have to execute the deletion CTM profile one or two more times in order to delete the remaining Planned Orders / Purchase Requisitions.
This is clearly not reliable for a batch process to clear down plans. Has anyone else experienced this, and does anyone have a solution? The solution cannot be based on RLCDELETE or /SAPAPO/DELETE_PP_ORDER.
Mark
Hi,
Can you check your planning mode and deletion mode combination defined in the ctm profile.
Prerequisites for deleting the orders:
• They are within the planning and deletion periods
• They do not have pegging relationships with demands outside of the planning period
• They lie outside of the production horizon, stock transfer horizon and the planned delivery time
In addition, all location products of an order must be included in the master data selection so that CTM can delete the order. Also, CTM cannot delete any manually fixed orders.
If you have selected the SNP Order order type in the CTM profile, CTM cannot delete any PP/DS orders. This restriction is also valid the other way round for the PP/DS Order order type: if you select this order type, CTM cannot delete any SNP orders.
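The prerequisites above can be summarised as a small eligibility check (a sketch with an invented order model; the real CTM engine evaluates these conditions internally):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    start: date
    end: date
    fixed: bool                  # manually fixed orders are never deleted
    pegged_outside: bool         # pegged to a demand outside the planning period
    products_in_selection: bool  # all location products in the master data selection

def deletable(o: Order, plan_from: date, plan_to: date, horizon_end: date) -> bool:
    """Mirror the prerequisite list: period, pegging, horizons, selection, fixing."""
    return (plan_from <= o.start and o.end <= plan_to  # within planning/deletion period
            and not o.pegged_outside
            and o.start >= horizon_end                 # outside production/ST horizons
            and o.products_in_selection
            and not o.fixed)

ok = Order(date(2010, 6, 1), date(2010, 6, 5), False, False, True)
fixed = Order(date(2010, 6, 1), date(2010, 6, 5), True, False, True)
print(deletable(ok, date(2010, 5, 1), date(2010, 12, 31), date(2010, 5, 15)))     # True
print(deletable(fixed, date(2010, 5, 1), date(2010, 12, 31), date(2010, 5, 15)))  # False
```

If a first deletion run leaves orders behind, each surviving order is failing at least one of these conditions, which is the first thing to check before rerunning.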
Thanks,
nandha -
PPM run with different productivities for annual plan
Hi Experts
I want to do annual planning with monthly buckets. The productivity (for the same product) changes from month to month in the PPM across the year.
To maintain a different productivity for different months, I would have to create different PPMs.
I want to do only one SNP run for the whole year.
Please suggest whether there is another method of doing it.
Hi,
You need to think about it in a different way:
As your productivity increases, your rate of production increases, so you need to express your output component and resource utilisation in terms of a rate.
Then use a quantity/rate definition to define the quantity capacity or rate in the time intervals for the capacity variants.
To do this, you enter time intervals in a capacity variant and assign each time interval a quantity/rate definition.
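The idea of one quantity/rate definition per time interval can be sketched roughly like this (hypothetical data and helper, not SAP master data):

```python
from datetime import date

# Hypothetical capacity variant: each time interval carries its own
# production rate (units per day), mirroring a quantity/rate definition
# assigned per interval, so one PPM covers the whole year.
intervals = [
    (date(2010, 4, 1), date(2010, 7, 1), 100.0),   # Apr-Jun: 100 units/day
    (date(2010, 7, 1), date(2010, 10, 1), 120.0),  # Jul-Sep: productivity up
    (date(2010, 10, 1), date(2011, 4, 1), 150.0),  # Oct-Mar: further up
]

def rate_on(day: date) -> float:
    """Look up the production rate valid on a given day."""
    for start, end, rate in intervals:
        if start <= day < end:
            return rate
    raise ValueError("no capacity interval covers this date")

print(rate_on(date(2010, 8, 15)))  # 120.0
```

One set of intervals replaces the month-specific PPM copies, which is what lets a single SNP run cover the year.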
Hope this helps.
Manish -
I have an annual plan that is supposed to be usable on two computers, but I haven't been able to install any app on a second computer; it always says that the app is only available as a trial download. What can I do to solve this issue and be able to use my paid plan on two computers?
Hi Susan,
Please refer to the help document to fix this issue:
Creative Cloud applications unexpectedly revert to trial mode | CS6, CCM
You may also refer to the thread as below:
creative cloud software says my free trial has expired, but I have a paid subscription
Regards,
Sheena -
Source Location in SNP Heuristics - Planning Run
hi,
I am trying to execute a Planning Run for Manufacturing Location -> Depot model. The scenario is that a depot D1 can be supplied from 2 manufacturing locations ( M1 and M2 ).
Transportation lanes have been maintained as M1 -> D1 and M2 -> D1.
Stocks are available in both M1 and M2.
When I execute the planning run (/SAPAPO/SNP01), a purchase requisition is created only from M1; M2 is not being considered as a source location.
Can anyone give some inputs on this?
regards,
Anirudha
Hi Anirudha,
The SNP heuristic runs on a standard logic, on a priority basis:
1) First it checks whether a quota arrangement is defined. If it finds an inbound quota, it creates a purchase requisition against that location.
2) If no quota is defined, it looks at the lane priorities defined.
3) If there is no lane priority, it looks at the cheapest means of transport per ton.
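The three-step source determination can be sketched as a priority cascade (a simplified, hypothetical model; real SNP quota arrangements split demand across sources rather than picking a single winner):

```python
# Hypothetical model of the cascade: 1) inbound quota, 2) lane priority,
# 3) cheapest means of transport per ton.
def pick_source(lanes, quotas=None):
    if quotas:                                 # step 1: quota arrangement wins
        return max(quotas, key=quotas.get)
    with_prio = [l for l in lanes if l.get("priority") is not None]
    if with_prio:                              # step 2: lowest priority number wins
        return min(with_prio, key=lambda l: l["priority"])["source"]
    return min(lanes, key=lambda l: l["cost_per_ton"])["source"]  # step 3

lanes = [
    {"source": "M1", "priority": None, "cost_per_ton": 8.0},
    {"source": "M2", "priority": None, "cost_per_ton": 12.0},
]
print(pick_source(lanes))                                  # 'M1' - cheapest transport
print(pick_source(lanes, quotas={"M2": 0.6, "M1": 0.4}))   # 'M2' - quota wins
```

In Anirudha's scenario, with no quota and no lane priority maintained, M1 winning on transport cost every time would explain why M2 is never sourced; an inbound quota on D1 is the usual way to force a split.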
Please revert in case of any doubt
Regards
Vaibhav Sareen -
Good morning,
I am planning a product in a simulation version in APO. If I do a regenerative planning run it works, but if I run the same scenario (after resetting the results) with net change, the product is not included (not planned) in the planning run.
Am I missing something obvious?
Thanks in advance.
Hi Prasad,
A planning run can be performed in both net change and regenerative mode.
Whenever planning-relevant changes are made in the system, the system automatically sets a planning file entry for the location product combination.
If you select the net change method, only those items with planning file entries are planned in the planning run.
If you select regenerative mode, the entire selection is replanned irrespective of planning file entries.
The choice between net change and regenerative mode depends purely on the client/business requirement.
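The difference between the two modes can be sketched as follows (a toy model of planning file entries, not the actual planning logic):

```python
# Hypothetical location-product list; only P2 had a planning-relevant
# change, so only P2 carries a planning file entry.
products = ["P1", "P2", "P3"]
planning_file = {"P2"}

def to_plan(mode: str):
    """Net change plans only flagged items; regenerative replans everything."""
    if mode == "net_change":
        return [p for p in products if p in planning_file]
    if mode == "regenerative":
        return list(products)
    raise ValueError(mode)

print(to_plan("net_change"))    # ['P2']
print(to_plan("regenerative"))  # ['P1', 'P2', 'P3']
```

This also explains the behaviour above: resetting the results need not set a planning file entry, so a subsequent net change run can skip the product while a regenerative run still picks it up.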
Regards
R. Senthil Mareeswaran. -
Error occurred during CTM planning run
Hi folks,
Appreciate your co-operations!
I am facing the problem while running the CTM with the profile DEMO2.
CTM Planning Run gives one error and alert.
Error: Error occurred during CTM planning run
Technical Data
Message type__________ A (Cancel)
Message class_________ /SAPAPO/CTM1 (CTM: Messgaes)
Message number________ 401
Problem class_________ 1 (very important)
Number________________ 1
Environment Information
CTM Action____________ G
Message type__________ A
Alert: Internal error has occurred (<!> Segmentation fault)
Technical Data
Message type__________ E (Error)
Message class_________ /SAPAPO/CTM1 (CTM: Messgaes)
Message number________ 571
Message variable 1____ <!> Segmentation fault
Number________________ 1
Environment Information
CTM Action____________ G
Message type__________ C
Log file display
<i> 04:37:59 optsvr_main.cpp(1363) 'SuperVisor' => Commandline : 4 respected parameters ...
Args:
m0001006
sapgw04
28812935
IDX=1
<i> 04:37:59 optsvr_main.cpp(645) 'SuperVisor' * SAP APO CTM Engine [CTM/ctmsvr]
<i> 04:37:59 optsvr_main.cpp(646) 'SuperVisor' * Copyright © SAP AG 1993-2009
<i> 04:37:59 optsvr_main.cpp(647) 'SuperVisor' *
<i> 04:37:59 optsvr_main.cpp(648) 'SuperVisor' * Version : 7.0_REL SP05, 407661, Nov 25 2009 22:59:47
<i> 04:37:59 optsvr_main.cpp(649) 'SuperVisor' * Platform : ntamd64/x64
<i> 04:37:59 optsvr_main.cpp(650) 'SuperVisor' * Interface : 2.0
<i> 04:37:59 optsvr_main.cpp(651) 'SuperVisor' * Build date : Nov 25 2009 22:59:47 [1259186387]
<i> 04:37:59 optsvr_main.cpp(652) 'SuperVisor' * Build machine : PWDFM163
<i> 04:37:59 optsvr_main.cpp(653) 'SuperVisor' * Latest change : 407661
<i> 04:37:59 optsvr_main.cpp(654) 'SuperVisor' * NW release : 7100.0.3300.0
<i> 04:37:59 optsvr_main.cpp(655) 'SuperVisor' * Perforce branch: 7.0_REL
<i> 04:37:59 optsvr_main.cpp(656) 'SuperVisor' *
<i> 04:37:59 optsvr_main.cpp(676) 'SuperVisor' * Hostname : m0001006
<i> 04:37:59 optsvr_main.cpp(677) 'SuperVisor' * OS version : 5.2.3790 (WinServer2003, NTAMD64) SP2.0 (Service Pack 2), SERVER ENTERPRISE TERMINAL SINGLEUSERTS
<i> 04:37:59 optsvr_main.cpp(678) 'SuperVisor' * PID : 6768
<i> 04:37:59 optsvr_main.cpp(683) 'SuperVisor' * CWD : D:\usr\sap\SC6\DVEBMGS04\log
<i> 04:37:59 optsvr_main.cpp(684) 'SuperVisor' *
<i> 04:37:59 core_sysinfo.cpp(453) 'SuperVisor' * free disk space: 190433 MB
<i> 04:37:59 core_sysinfo.cpp(454) 'SuperVisor' *
<i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' * Memory information:
<i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' * physical memory: 10238 MB total, 6511 MB available [63% free]
<i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' * page file : 73212 MB total, 60889 MB available [83% free]
<i> 04:37:59 core_sysinfo.cpp(409) 'SuperVisor' * virtual memory : 8388607 MB total, 8388499 MB available [99% free]
<i> 04:37:59 optsvr_main.cpp(693) 'SuperVisor' *
<i> 04:37:59 optsvr_main.cpp(783) 'SuperVisor' * running in invoke mode
<i> 04:37:59 optsvr_rfcconnection.cpp(871) 'MsgMgr' <RFC> RfcPing(RFC_HANDLE=1) received in thread#6912
<i> 04:37:59 optsvr_rfcconnection.cpp(692) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PARAM_SET' for sending of parameters/options
<i> 04:37:59 optsvr_rfcconnection.cpp(703) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PARAM_GET' for receiving of parameters/options
<i> 04:37:59 optsvr_rfcconnection.cpp(712) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PROGRESS' for progress informations
<i> 04:37:59 optsvr_rfcconnection.cpp(721) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_MESSAGE' for messages
<i> 04:37:59 optsvr_rfcconnection.cpp(730) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_RESULT' for (intermediate) result informations
<i> 04:37:59 optsvr_rfcconnection.cpp(739) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_SYSINFO' for system informations
<i> 04:37:59 optsvr_rfcconnection.cpp(748) 'MsgMgr' <RfcConnection> using function module 'RCCF_COMM_PERFINFO' for performance informations
<i> 04:37:59 optsvr_rfcconnection.cpp(1269) 'MsgMgr' <RFC> skipping empty profile value [GENERAL] sPROFILE_CUST_ID
<i> 04:37:59 optsvr_rfcconnection.cpp(1835) 'MsgMgr'
Sender/Receiver RFC_HANDLE#1:
<RFC> * RFC connection attributes:
Own Host : m0001006
Partner Host: m0001006
Destination : OPTSERVER_CTM01
Program Name: SAPLRCC_COMM_ENGINE
SystemNr : 04 SystemId : SC6
Client : 700 User : MBATCHA
Language : E ISO Language: EN
CodePage : 1100 Partner CP : 1100
Kernel Rel. : 701 Partner Rel.: 701
Own Release : 711 CPIC ConvId : 28812935
Own Type : E PartnerType : 3
Trace : RFC Role : S
<RFC> * RFC statistic information:
number of calls : 7
number of received data: 10569
number of sent data : 1349
overall reading time : 9073
overall writing time : 162
<i> 04:37:59 optsvr_main.cpp(1110) 'SuperVisor' * Starting MainScript ...
<i> 04:37:59 optsvr_main.cpp(1445) 'SuperVisor'
***************************** OPTSVR - OPTIONS ***************************** *
[CTM_PROFILE]
nCTMENGINEPACKAGESIZE = 500
sCOMPONENT = SCM
sCTMLOGFILE = ctm.DEMO2.0000_0001.20091201043758.log
sCTMLOGFLAG = 0
sCTMPROFILE = DEMO2
sRELEASE = 700
[general]
bUNICODE = true
nSLOT_MAXIMUM = 1
nSLOT_MINIMUM = 1
nSLOT_RESERVED = 1
sAPO_RELEASE = 700
sAPPLICATION = CTM
sExeDir = d:\apoopt\ctm\bin
sExeName = ctmsvr.exe
sHOST = m0001006
sInvokeMode = invoke
sLANGU = E
sMANDT = 700
sPRODUCT_NAME = APO
sPRODUCT_PATCHLEVEL = 0001
sPRODUCT_RELEASE = 700
sPROFILE = DEMO2
sSESSION = tju5Bmz21}6WVG0Sn6pv3W
sSYSTEM = SC6
sUNAME = MBATCHA
[init]
sSECTION0001 = INIT
sSECTION0002 = GENERAL
sSECTION0003 = PASSPORT
sSECTION0004 = CTM_PROFILE
[PASSPORT]
bIS_REMOTE = false
nACTION_TYPE = 1
nSERVICE = 1
sACTION = /SAPAPO/CTMB
sPRE_SYSID = SC6
sSYSID = SC6
sTRANSID = 2205DEDE7A5BF16DA07D001CC46CF90E
sUSERID = MBATCHA
************************** OPTSVR OPTIONS - END **************************** *
<i> 04:37:59 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 802 to OutputInterface from []
<i> 04:37:59 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CTMModelGenerator' [6]->download
<i> 04:37:59 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 806 to OutputInterface from [MG]
<i> 04:37:59 ctm_modelgen.cpp(166) 'CTMModelGenerator' ======================================================================
<i> 04:37:59 ctm_modelgen.cpp(167) 'CTMModelGenerator' MG::download
<i> 04:37:59 core_msgmgr.cpp(1110) 'MsgMgr' renaming tracefile
<i> 04:37:59 core_msgmgr.cpp(1111) 'MsgMgr' old name: optsvr_trace20091201_043759_1a70.log
<i> 04:37:59 core_msgmgr.cpp(1112) 'MsgMgr' new name: ctm.DEMO2.20091201_043759_1a70.log
logfile reopened : Tue Dec 01 04:37:59 2009
logfile name : ctm.DEMO2.20091201_043759_1a70.log
<i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
<i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PRDAT_RFC_READ
<i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PLPAR_RFC_READ
<i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_LOC_RFC_READ
<i> 04:37:59 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PPM_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_TRANS_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_RES_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_SSTCK_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_CAL_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_PLPER_RFC_READ
<i> 04:38:02 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_INCMD_RFC_READ
<i> 04:38:03 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
<i> 04:38:03 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_DEM_RFC_READ
<i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_SUP_RFC_READ
<i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_UCMAP_RFC_READ
<i> 04:38:04 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
<i> 04:38:04 core_msgmgr.cpp(440) 'MsgMgr' * Sending progress number 810 to OutputInterface from [MG]
<i> 04:38:04 ctm_modelgen.cpp(735) 'CTMModelGenerator' MG::download done
<i> 04:38:04 ctm_modelgen.cpp(736) 'CTMModelGenerator' ======================================================================
<i> 04:38:04 core_supervisor.cpp(750) 'SuperVisor' <M> Returning from module 'CTMModelGenerator' [6]->download = success [ctx size : 1]
<i> 04:38:04 core_supervisor.cpp(692) 'SuperVisor' <SCR> Starting script 'CTM Solve' with 9.22337e+012 seconds left
<i> 04:38:04 core_supervisor.cpp(692) 'SuperVisor' <SCR> Starting script 'CTM Match' with 9.22337e+012 seconds left
<i> 04:38:04 ctm_executionmanager.cpp(102) 'SuperVisor' ======================================================================
<i> 04:38:04 ctm_executionmanager.cpp(103) 'SuperVisor' statistics:
<i> 04:38:04 ctm_executionmanager.cpp(104) 'SuperVisor' number of demands: 7
<i> 04:38:04 ctm_executionmanager.cpp(105) 'SuperVisor' ======================================================================
<i> 04:38:04 ctm_executionmanager.cpp(107) 'SuperVisor' ======================================================================
<i> 04:38:04 ctm_executionmanager.cpp(108) 'SuperVisor' parameters:
<i> 04:38:04 ctm_executionmanager.cpp(118) 'SuperVisor' time continuous planning
<i> 04:38:04 ctm_executionmanager.cpp(125) 'SuperVisor' backward scheduling
<i> 04:38:04 ctm_executionmanager.cpp(184) 'SuperVisor' CBCLP enabled
<i> 04:38:04 ctm_executionmanager.cpp(457) 'SuperVisor' ======================================================================
<i> 04:38:04 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CtmEngine' [7]->run
<i> 04:38:04 ctm_executionmanager.cpp(523) 'SuperVisor' ======================================================================
<i> 04:38:04 ctm_executionmanager.cpp(524) 'SuperVisor' EM::execute for packet 1
<i> 04:38:04 ctm_executionmanager.cpp(1570) 'SuperVisor' EM::execute for packet 1 done
<i> 04:38:04 ctm_executionmanager.cpp(1571) 'SuperVisor' ======================================================================
<i> 04:38:04 core_supervisor.cpp(750) 'SuperVisor' <M> Returning from module 'CtmEngine' [7]->run = success [ctx size : 1]
<i> 04:38:04 core_supervisor.cpp(728) 'SuperVisor' <M> Invoking module 'CTMModelGenerator' [6]->upload
<i> 04:38:04 ctm_modelgen.cpp(1097) 'CTMModelGenerator' ======================================================================
<i> 04:38:04 ctm_modelgen.cpp(1098) 'CTMModelGenerator' MG::upload of packet 1
<e> 04:38:05 ctmsvr_script.cpp(229) 'SuperVisor' <!> STRING EXCEPTION : <!> Segmentation fault
<i> 04:38:05 rfc_connection.cpp(599) 'MsgMgr' <rfc> calling function module /SAPAPO/CTM_INT_STATUS_SET
<i> 04:38:05 optsvr_main.cpp(1166) 'MsgMgr' Current check values:
[CHECK_EQUAL]
[CHECK_UPPERBOUND]
nPEAK_MEMORY_NTAMD64 = 45344
[CHECK_LOWERBOUND]
<i> 04:38:05 optsvr_main.cpp(1209) 'MsgMgr' Performance values:
bSuccess false
nCPU_TIME 0
nPEAK_MEMORY 45344
nPEAK_VIRTUAL_BYTES 141844
nREAL_TIME 6
tracefile ctm.DEMO2.20091201_043759_1a70.log
<i> 04:38:05 optsvr_main.cpp(1235) 'MsgMgr' Performance Monitor values:
ENGINE_VERSION 7.0_REL SP05, 407661, Nov 25 2009 22:59:47
nCPU_TIME 0
nHD_FREESPACE 190433
nPEAK_MEMORY 45344
nPEAK_VIRTUAL_BYTES 141844
nREAL_TIME 6
<i> 04:38:05 optsvr_dsr.cpp(96) 'MsgMgr' <writeDSRdata> tracing not active => no DSR written
<i> 04:38:05 optsvr_main.cpp(1256) 'SuperVisor'
Finished->FAILED ...
<i> 04:38:05 core_memmgr.cpp(564) 'MsgMgr' transferring memory of heap 6912 to main heap
<i> 04:38:05 core_memmgr.cpp(606) 'MsgMgr' finished transfer of heap 6912
<i> 04:38:05 optsvr_rfcconnection.cpp(1835) 'MsgMgr'
Sender/Receiver RFC_HANDLE#1:
<RFC> * RFC connection attributes:
Own Host : m0001006
Partner Host: m0001006
Destination : OPTSERVER_CTM01
Program Name: SAPLRCC_COMM_ENGINE
SystemNr : 04 SystemId : SC6
Client : 700 User : MBATCHA
Language : E ISO Language: EN
CodePage : 1100 Partner CP : 1100
Kernel Rel. : 701 Partner Rel.: 701
Own Release : 711 CPIC ConvId : 28812935
Own Type : E PartnerType : 3
Trace : RFC Role : S
<RFC> * RFC statistic information:
number of calls : 116
number of received data: 420457
number of sent data : 39262
overall reading time : 5.30093e+006
overall writing time : 3831
<i> 04:38:05 optsvr_main.cpp(1332) 'MsgMgr'
OptimizeServer says GOOD BYE
Please help me to resolve this issue.
Thanks & Regards,
Khadar
Hi Khadar,
1) The information you have provided is the CTM optimiser log. Run the job in the background, and in SM37 click on the job log and analyse the exact error that happened. If you are not able to, please provide the error log.
2) Check that liveCache is stable in its operations while the job runs (check with the Basis team).
3) Run a consistency check for master data before the CTM run.
4) Check for any stuck queues, clear those, and rerun.
5) If you suspect more inconsistencies in the system, run a liveCache consistency check and rerun the CTM run.
Regards
R. Senthil Mareeswaran. -
Dear Friends ,
The scenario is as follows :
1. There are 16 distribution centers which procure products from 2 production plants, say Plant 1 and Plant 2.
2. There are common products which can be manufactured in either of the production plants.
3. The following is expected from the SNP Optimizer run: first book the capacity of Plant 1 for the common codes based on the demand availability date; after the capacity is booked 100%, the product should be sourced from the other plant, i.e. Plant 2, as there is idle capacity available in Plant 2.
4. I have been able to get the above-mentioned result, wherein the production plan is prepared through the SNP Optimiser in Plant 1 and, after 100% booking, the external procurement is generated for Plant 2, and this happens for top-to-bottom codes, i.e. a complete product shift FG-SFG-RM.
5. Now the requirement is that if inventory is available for the said code in Plant 2, then the complete production, i.e. net production after deducting inventory, should be generated in Plant 2, and similarly, if the inventory is available in Plant 1, then the complete product, i.e. top to bottom, should be manufactured in Plant 1. Logically, wherever inventory exists, i.e. Plant 1 or Plant 2, the balance of the said product should be manufactured in the plant which has the inventory.
I would request you to review the above scenario and let me know your suggestions on how we can map point 5 in the SNP Optimiser. Needless to say, consistency should be maintained in the solution, i.e. inventory is dynamic and can be at Plant 1 or Plant 2, as the codes are common and can be manufactured in either of the plants.
Thanks and Best Regards ,
Prashant Kumar
Dear Murali,
Thanks for your inputs. This is how the optimiser performed:
Settings in the objects: in case I put a transportation cost on the transportation lane between Plant 1 and Plant 2, it does not procure externally; in fact it constrains the quantity, i.e. reduces it. Even with the PPM cost lower in Plant 1 and higher in Plant 2, it reduces the quantity rather than sourcing from Plant 2. Hence my planning run was based on the same PPM cost in both PPMs.
(a) Where there is stock of the SFG (second level) in Plant 1, the stock was considered due to storage cost, and production was planned in Plant 1 and Plant 2. The share of the planned production figure is based on the consumption defined in the PPM. The planned production figure at Plant 1 is greater than at Plant 2, I guess because the stock was at Plant 1.
(b) Where there is stock of the SFG (second level) in Plant 2, the stock was considered due to storage cost, and production was planned in Plant 2 and Plant 1. The share of the planned production figure is based on the consumption defined in the PPM. The planned production figure at Plant 2 is greater than at Plant 1, I guess because the stock was at Plant 2.
(c) In case there is no stock at either of the plants, procurement is done from Plant 2 and production is planned at Plant 1.
(d) Also, what would be the situation in case we have a BOM below the second level and stock is available for any of the level materials below the second-level code? Would it consider the stock and ensure planned production of the second level in the plant where stock of the third- or, say, fourth-level code is available? I.e. Plant 1 having the third level with stock would ensure that Plant 1 is loaded for production for the higher levels as well.
I tried to make this explanatory but I think I made it lengthy; I would appreciate your validation and advice on the above points.
Thanks and Regards ,
Prashant Kumar -
Gurus,
I am facing an issue regarding SAP TPM IP ( HANA)
I have 3 InfoProviders: a planning InfoCube, planning DSO1, and planning DSO2. I created a MultiProvider and added these 3 InfoProviders into it. I created an aggregation level on the MultiProvider, and a BEx input-ready query on the aggregation level.
The issue is that the planning layout is not getting rendered and is dumping at: CL_RSDRC_TREX_QUERY_LAYER ~ _GET_PARTPROVS_WITH_TREX_PART.
I tried debugging it and found it is trying to read i_r_pro->n_ts_part. It is populated with only 3 values (i.e. 2 DSOs and 1 cube), whereas <l_partprov>-partprov refers to the aggregation level; hence the read statement isn't successful and it dumps.
The CL_RSD_INFOPROV_CACHE->GET() method is trying to populate N_TS_PART. N_TS_PART uses the P_R_INFOPROV table, which seems to be already populated. So I debugged all the methods below it to find out how P_R_INFOPROV is filled, but couldn't find any clue.
Can anyone help? It would really help.
Thanks
Ashok
Hello Gregor,
On launch of the planning layout, it throws an error message:
Planning is not possible (RSCRM_IMP_CORE008).
When I debugged, I got to a point where the particular real-time planning DSO is not being retrieved under the MultiProvider in the class below.
Class CL_RSCRM_IMP_ACTIONS_SERVICE, method GET_INFOPROV is not returning the real-time InfoProvider name (i.e. the planning DSO) underlying the MultiProvider.
I've also tried to run the report you mentioned for the MultiProvider, but the issue still exists.
Let me know if you have any pointers on this topic.
Thanks,
Jomy -
Error - Background Processing: Planning Run
Hey Guys,
I want to schedule a background planning run. The report /SCMTMS/VSR_OPT_BGD (class /SCMTMS/CL_TS_VSR_BGD) terminates because of the following error. Does anyone have an idea how to solve this issue?
Kind regards,
Christian
Error in the ABAP Application Program
The current ABAP program "/SCMTMS/VSR_OPT_BGD" had to be terminated because it has come across a statement that unfortunately cannot be executed.
The following syntax error occurred in program "/SCMTMS/CL_TS_VSR_BGD=========CP " in include "/SCMTMS/CL_TS_VSR_BGD=========CI " in line 42:
"Direct access to the components of the global interface /SCMTMS/IF_SRVMGR_TYPES is not allowed (INTERFACE /SCMTMS/IF_SRVMGR_TYPES LOAD statement is missing)."
The include has been created and last changed by:
Created by: "SAP "
Last changed by: "SANNG "
The following SAP Notes may help you resolve your problem:
1230638 Memory consumption for VSR planning run too high
1285772 Planning: Write application log from VSR batch reports
1279704 Planning: Various performance improvements -
Has anyone had issues with Planning security refreshing in Essbase
I am on Planning version 9.3.0.1 and we are having issues with Planning users being able to use an Essbase connection to pull data from Smart View. They are able to see data in web forms in Planning, but they get #No Access when they try to connect to the same data in Smart View.
Has anyone else experienced this issue?
If a user works when you provision them individually, it is because they were set in the environment in EAS.
The group has to have all the access needed (a bug in the original release); for the CSS file, open a ticket for the file name. For example, a user needs calc access: you set up calc, but due to a bug the system does not also give the user read and write access. You can individually add this to the group in the provisioning area in Shared Services. In EAS, go to File > Open > Editors, then create and run a script with:
alter user 'userID' add application_access_type Planning;
alter user 'userID' add application_access_type Essbase;
to add the user to the environment. If you get an error, refresh EAS first, then run the MaxL script. For Planning you also need to go to Workspace, select any of your dimensions and all your groups, then Edit > Migrate Identities to get the groups/users into Planning, whether adding or removing users. When you make Planning changes, you need to do a database and security filter refresh.
Error Mrp: Total Planning Run stopped due to 25 Terminations of 202 Matls
Hi
We are trying to run the MRP job, i.e. report RMMRP00 with variant PLVH_NETCHNG.
I have 1 DB server and 2 app servers; the job runs for about 2 minutes and then crashes with the following error:
Total planning run stopped due to 25 terminations (dumps) of 202 materials.
The apps guys say it's a Basis issue, and I can't see an issue.
Any help will be appreciated.
Hi,
Please check the MRP settings: sometimes we define an MRP value with up to 3-4 decimal places while our setting allows only 2 decimal places. If you send the error log, that will be helpful for us to see the exact problem in the dump.
In case you want to send this by mail send [email protected]
Regards,
Anil -
Launch ASCP plan run very long time
Hi,
I launched a constrained ASCP plan and it took a very long time. I launched the ASCP plan on Friday and it still has not finished by Monday; Memory Based Snapshot and Snapshot Delete Worker are still running, and Loader Worker With Direct Load Option is still in the pending phase. MSC: Share Plan Partitions has been set to Yes.
When I run query below :
select table_name,
partition_name
from all_tab_partitions
where table_name like 'MSC_NET_RES_INST%'
OR table_name like 'MSC_SYSTEM_ITEMS%'
order by substr(partition_name,instr(partition_name,'_',-1,1)+1);
The results are:
MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_0
MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL_0
MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_1
MSC_SYSTEM_ITEMS        SYSTEM_ITEMS__21
MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL__21
MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL_999999
MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_999999
Please help me with how to increase performance when launching the plan. Is changing MSC: Share Plan Partitions to No the only way to increase performance in running the plan?
Thanks & Regards,
Yolanda
Hi Yolanda,
a) Does this mean that the plan was working fine earlier and you are facing this issue only recently? If so, what have you changed on the server side, or have you applied any recent patches?
b) If the plan has never completed even once, I suggest you run data collection in complete refresh mode for one organization which has relatively little data. Further, you can modify the plan options to reduce the planning calculation load:
- disable pegging
- remove any demand schedule / supply schedule / global forecast etc.
- enable only a single organization which has a relatively small demand and supply picture
- disable forecast spread
Once one plan run has completed, expand your collection scope to the other organizations while also re-enabling the above-mentioned settings.
There are lots of points to consider for a performance issue, such as server configuration, hardware configuration, number of demands etc. So you can raise an SR in parallel while working on the above points.
Thanks,
D -
Planning Run - Enhanced Backward Scheduling Error
Hi SAP Expert,
I encountered the planning run error "Heuristic terminated due to a possible endless loop" after running the production planning run via transaction /SAPAPO/CDPSB0 with the heuristic "Enhanced Backward Scheduling".
Do anyone have any idea on it?
Thanks.
Regards,
Hooi Chia
Hi All,
This issue has been resolved. I'm now closing this issue.
Regards,
Hooi Chia