Practical Approach to MRP
Hi everybody!
Does anybody have experience using MRP in real life?
I mean executing a planning run with hundreds of materials and then evaluating what happened?
In all the standard SAP scenarios (including Best Practices) there are examples with planning of only ONE material (with a previously known material number). You know what I mean: MD02 for material X, and then MD04 for the material in question.
But I don't think this approach is workable in real life... When you have to plan 100 materials every day, create Production Orders and Purchase Requisitions, track order progress, change scheduled runs... in the end you are lost in all the mess...
So, any help to offer from positive production experience?
Best Regards
P.S. Any helpful answer will be rewarded, of course!
Hi Petar,
Some of the ways in which you can divide the materials among the planners: by the type of materials being planned, say electrical, mechanical, etc. This works when you have techno-planners, i.e. technically competent planners. Another way to look at it is product/assembly based, where you have planners for specific assemblies; this works best when there are few to a medium number of products to focus on. You can also split based on the value of materials, i.e. high, medium and low, and assign MRP controllers accordingly.
You can come up with more such combinations, but the key is: check how many parts normally have changes that are relevant for planning, then divide that number by the number of people you have. Also work closely with your planners to identify on average how many parts they can manage realistically; there is no industry standard as such here, it depends purely on the number of parts and the volume of movement of these parts.
I'm sure with the above as a guideline you can form your matrix of planners and materials.
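The divide-and-assign guideline above is simple arithmetic; here is a small sketch in plain Python (nothing SAP-specific, and all material numbers, value classes and controller IDs are made up for illustration) of splitting materials among MRP controllers by value class:

```python
# Back-of-envelope sketch (not SAP code): distribute materials among MRP
# controllers by ABC value class, round-robin within each class.
from collections import defaultdict

def assign_controllers(materials, controllers_by_class):
    """materials: list of (material_number, value_class) tuples.
    controllers_by_class: dict mapping a value class ('A'/'B'/'C')
    to the list of controller IDs responsible for that class."""
    assignment = defaultdict(list)
    counters = defaultdict(int)
    for matnr, vclass in materials:
        pool = controllers_by_class[vclass]
        controller = pool[counters[vclass] % len(pool)]  # round-robin
        counters[vclass] += 1
        assignment[controller].append(matnr)
    return dict(assignment)

# Illustrative data only.
materials = [("M-001", "A"), ("M-002", "C"), ("M-003", "A"),
             ("M-004", "B"), ("M-005", "C"), ("M-006", "C")]
controllers = {"A": ["001"], "B": ["002"], "C": ["003", "004"]}
print(assign_controllers(materials, controllers))
```

The same workload check Vivek describes falls out of it: count the parts with planning-relevant changes, divide by the heads you have, and compare against what each planner says they can realistically manage.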
All the best.
Regards,
Vivek
Similar Messages
-
Best practice approach for separating Database and SAP servers
Hi,
I am looking for a best practice approach/strategy for setting up a distributed SAP landscape, i.e. separating the database and SAP servers. If anyone has some strategies to share, I'd appreciate it.
Thanks very much.
I can imagine the easiest way:
Install a dialog instance on a new server and make sure it can connect nicely to the database. Then shut down the CI on the database server, copy the profiles (and adapt them), and start your CI on the new server. If that doesn't work the first time, you can always restart the CI on the database server again.
Markus -
Best Jsp book with practical approach
hi
I am new to JSP.
I wish to learn JSP with a more practical approach.
I have already gone through "Head First JSP and Servlets"... but it is more dedicated towards SCWCD...
So this time I wish to take suggestions before picking up a book...
I need a book with much more of a practical approach...
one that covers traps in JSP...
Thanks.
kmangold wrote:
Core Servlets & JavaServer Pages is the book I used when I was learning. It helped me get started real easily.I thought Marty Hall's books were OK, but I think Hans Bergsten's JSP book from O'Reilly is the hands down winner. It starts with JSTL right off the bat. You only learn scriptlet coding in a later chapter, almost as a last resort. I think that's the right way to learn JSPs.
-
Best Practice to use MRP results
Dear Forum,
As you know, the SAP standard system will not show any exception message for the finished product in case one of the components is arriving late. What is the best practice to deal with that situation if we are not going to APO?
Thank You,
Fadi
Fadi,
The planner for the component will see the exception message.
Depending on the organization of the planning group, the component planner either addresses the entire top-to-bottom material BOM chain, including the FGs, or the component planner collaborates with the FGs planner to resolve the issue.
SAP supports automatic sending of messages to appropriate individuals within the organization, using a number of delivery methods.
I don't believe SAP best practices speaks to this matter. It is usually left to the company to decide.
Rgds,
DB49 -
MDS Best Practice Approach - Sample HR Scenario
Thanks for taking to time to read my MDS requirement...just looking for a better way to go about it.
Here is the requirement:
Every month CEO releases an excel list of approved employment positions that can be filled to the HR. The HR dept wants to be able to add positions that CEO approves and remove positions that the CEO feels are
no longer necessary. The recruiting group wants to track/modify this master list of positions per the CEO's discretion and assign employees to each position as people are hired/terminated.
The HR data steward must be able to:
-when a position is filled, must be enabled to assign employees to the positions for org chart reporting
-they need the ability to assign/reassign parent child relationships for any position i.e. the Director Position manages multiple Manager positions which manage multiple Register Clerk positions.
I am new to MDS and am initially not sure how to approach this problem... do I create one entity for 'Positions' and another for 'Employees'? I'm thinking with that approach I can create Employee as a domain-based attribute of Position, then
create a derived hierarchy for the Position parent/child relationships... just wondering if this is a good approach.
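The entity split just described can be sanity-checked outside MDS in a few lines of plain code: one "entity" for positions with a self-referencing parent attribute, one for employees, and a domain-based link from position to employee. All names below are made up for illustration.

```python
# Plain-code sketch of the proposed MDS model (not MDS itself).
# Positions carry a self-referencing "parent" attribute (the derived
# hierarchy) and a domain-based "employee" attribute (the assignment).
positions = {
    "DIR-1": {"title": "Director",       "parent": None,    "employee": None},
    "MGR-1": {"title": "Manager",        "parent": "DIR-1", "employee": None},
    "CLK-1": {"title": "Register Clerk", "parent": "MGR-1", "employee": None},
    "CLK-2": {"title": "Register Clerk", "parent": "MGR-1", "employee": None},
}
employees = {"E100": "Alice", "E200": "Bob"}

def assign(position_id, employee_id):
    # Filling a position = setting its domain-based employee attribute.
    positions[position_id]["employee"] = employee_id

def reports_chain(position_id):
    # Walk the parent/child hierarchy up to the root, for org chart reporting.
    chain = []
    while position_id is not None:
        chain.append(position_id)
        position_id = positions[position_id]["parent"]
    return chain

assign("CLK-1", "E100")
print(reports_chain("CLK-1"))  # clerk -> manager -> director
```

If the model round-trips cleanly like this (reassigning a parent just changes one attribute, and the chain walk still works), the Positions/Employees split with a derived hierarchy looks sound.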
Are there other things I should be taking into consideration? Thanks!
If your material list document is not excessively long, it probably wouldn't be too much overhead to add a few extra columns using the CalculatedColumn action block. These extra columns, even though the number would be the same for all rows, could contain a column for each of your aggregated functions.
Then in your iGrid just set the column width for these additional fields to zero, and upon the UpdateEvent of the grid you could javascript them from row number 1 to your desired html elements, etc. -
With 2008 - What would be the 'best practice' approach for giving a principal access to system views
I want to set up a job that runs a few select statements from several system management views such as those listed below. It's basically going to gather various metrics about the server, a few different databases, and jobs.
msdb.dbo.sysjobs
msdb.dbo.sysjobhistory
sys.dm_db_missing_index_groups
sys.dm_db_missing_index_group_stats
sys.dm_db_missing_index_details
sys.databases
sys.dm_exec_query_stats
sys.dm_exec_sql_text
sys.dm_exec_query_plan
dbo.sysfiles
sys.indexes
sys.objects
So, there are a number of instance-level permissions that are needed, mainly VIEW SERVER STATE:
https://msdn.microsoft.com/en-us/library/ms186717.aspx
Granting these permissions to a single login seems like introducing a maintenance headache for later. What about a server role?
Correct me if I'm wrong, but this is a new feature of 2012 and above: the ability to create user-defined server roles.
Prior to version 2012, I will just have to settle for granting these instance-level permissions to individual logins. There won't be many logins that need this kind of permission, but I'd rather assign them at the role level and then add logins to that role.
Then again, there is little point in creating a separate role if there is only one... and maybe two logins that might need this role?
New for 2012
http://www.mssqltips.com/sqlservertip/2699/sql-server-user-defined-server-roles/
Just as any Active Directory administrator will tell you, you should indeed stick to the rule "user in role, permissions to role" - in AD terms, "A-G/DL-P". And since this is very much possible since SQL Server 2012, why not just do that? You lose nothing if you don't ever change that one single user. In the end you would only expect roles to have permissions, and you save some time when searching for permission problems.
i.e.
USE [master]
GO
CREATE SERVER ROLE [role_ServerMonitorUsers]
GO
GRANT VIEW SERVER STATE TO [role_ServerMonitorUsers]
GO
ALTER SERVER ROLE [role_ServerMonitorUsers]
ADD MEMBER [Bob]
GO
In security, standardization is just as key as it is in administration in general. So even if it does not really matter now, it may matter in the long run. :)
Andreas Wolter (Blog |
Twitter)
MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
www.SarpedonQualityLab.com |
www.SQL-Server-Master-Class.com -
LabVIEW Training - ThinkElements (Practical Approach)
ThinkElements is introducing LabVIEW training for future and beginner developers who are looking for programming techniques to develop medium to large scale LabVIEW applications at a competitive price.
The course consists of instructor-driven evening online sessions. Working professionals, students, and academics can take advantage of the course.
More information can be found at:
http://www.thinkelements.com/LabVIEWTraining.aspx
Regards,
ThinkElements
[email protected]
www.thinkelements.com
Ph: 336-298-1377 -
Practical approach to using 64bit kernel and 32bit userspace?
I want to run a 64bit kernel on 32bit userspace. I know there are some out there who claim it's only bad and slowing down the transition to full blown 64bit systems. That may be, but for me and for this particular computer, I need to do this.
I have tried it on my work laptop and there are no problems whatsoever: On my plain vanilla 32bit install I downloaded and installed the current 64bit kernel (after carefully backing up the current one). No problems anywhere, so far.
However, for this other machine, I need to have a sane solution for the long run. What about kernel updates? When a new kernel is released, pacman is going to want to install the 32bit one instead of the 64bit version I want to use. I don't suppose there is a way to "tag" the kernel package to be x86_64 instead of i686, or something to that effect?
I'd be happy to hear your thoughts on this.
hw-tph wrote:
flamelab wrote: put kernel26 in HoldPkg and each time it wants to be updated, install the 64bit one.
According to pacman.conf(5), HoldPkg only seems to have any effect when attempting to remove a package:
pacman.conf(5) wrote: HoldPkg = package ...
If a user tries to --remove a package that's listed in HoldPkg, pacman will ask for confirmation before proceeding.
I'll go with the local repository route.
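For what it's worth, the pacman.conf directive that actually skips a package during upgrades is IgnorePkg rather than HoldPkg; a minimal sketch (using the thread's kernel26 package name):

```
# /etc/pacman.conf (sketch): skip the stock 32bit kernel during -Su;
# pacman will still warn that the package is out of date, and you then
# install the 64bit kernel package manually when a new one is released.
[options]
IgnorePkg = kernel26
```

The local repository route mentioned above works too, and avoids the manual step at the cost of maintaining the repo.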
Well, I have pacman in HoldPkg, and I thought that HoldPkg makes the package manager ask before a package is installed ahead of the others. It seems that I'm wrong -
Best practices for approval of MRP generated PRs
Hi, all. I'd very much like to route our MRP generated purchase requisitions directly to the Purchasing department without the need for additional approvals, regardless of $ amount.
PRs not generated by MRP (manual) will still need to pass through a release strategy.
What is the standard industry practice for processing MRP generated PRs?
What have you seen?
Thanks in advance.
Sarah
Hi,
Well, I haven't come across a situation which requires approval of MRP-generated PRs.
If the process of loading demands is controlled, then the output from MRP is by default controlled, unless there are some specific enhancements in the MRP calculation which make a change.
So if there are no such enhancements, then I do not see a need for it.
So from a good practice perspective, I would look at controlling the demand and the manner in which it is entered into the system.
Regards,
Vivek -
Hi,
Does someone know where to get best practices about running MRP, or have a good tutorial?
I have set up my material master data (e.g. the MRP 1, 2, 3, 4 tabs) but am not really sure how to continue.
I'm a little bit confused about the order in which I have to execute the transactions, e.g. MD20/MDAB, MD01/MDBT, MD15, MD05, etc.!?
Could someone help me just a little bit?
Thanks in advance.
Hi,
The sequence of steps you have written is the correct one.
1) MD20 (maintain manual planning file entry) / MDAB (background job): This is the first thing the system checks during a total planning run. The system considers only those materials for which an entry is maintained here. But there is no need to maintain the planning file entry manually each time: if your plant is activated for MRP (T-code OMDU), the system takes care of this, meaning the entries are managed automatically. To be on the safe side, you can still use MDAB for scheduled maintenance of the planning file.
2) MD01 - Total planning.
3) MD15 - You can convert planned orders to PRs in mass with this transaction.
4) MD05 - This is a report only; it shows the result of the last MRP run.
Regards,
Dhaval -
Oracle 11g Performance tuning approach ?
Hello Experts,
Is this the right forum to follow Oracle performance tuning discussions? If not, let me know which forum would be best to pick up some threads on this subject.
I am looking for a performance tuning approach for Oracle 11g. I learned there are some new items in 11g in this regard. For people who did tuning in earlier versions of Oracle, what will be the best way to adapt to 11g?
I reviewed the 11g performance tuning guide, but I am looking for some white papers/blogs with case studies and practical approaches. I hope that you have used them.
What are the other sources to pick up some discussions?
Would you mind sharing your thoughts?
Thanks in advance.
RI
The best sources of information on performance tuning are:
1. Jonathan Lewis: http://jonathanlewis.wordpress.com/all-postings/
2. Christian Antognini: http://www.antognini.ch/
3. Tanel Poder: http://blog.tanelpoder.com/
4. Richard Foote: http://richardfoote.wordpress.com/
5. Cary Millsap: http://carymillsap.blogspot.com/
and a few dozen others whose blogs you will find cross-referenced in those above. -
Oracle 9i R2 XML-XSLT Conversion: Best Approach?
Hello Folks,
I have an architectural design question that is based on the capabilities of the Oracle XMLTYPE datatype and functionality in the 9i Release 2 database. For the upgrade-focused crowd out there, the company I'm working for is currently working on an 11g migration, but timescales are not going to allow this version of the database to be used on the project I'm working on.
Broadly speaking, this is what I know how to accomplish in 9i Release 2 at this point in time (with my present knowledge):
1. Convert an XML file - stored as a CLOB in the database - into an XMLTYPE object within PL/SQL.
2. Use an XSL stylesheet in CLOB form, combined with the XMLTYPE.TRANSFORM functionality in PL/SQL, to convert the originating XML file's format into Oracle's 'Canonical' (<ROWSET>.....</ROWSET>) XML format.
3. Insert the Canonical Format XML into a standard Relational Table using the DBMS_XMLSAVE functionality.
I can make all the above work. However, the obvious configuration design would rely on storing the XSL Stylesheets in CLOB form in a database table. Whilst I'm confident this would work okay, I want to be sure I've not overlooked a better and more practical way to reference Stylesheets and perform this kind of operation at database level.
There is just so much documentation available on the 9i R2 XML functionality that the mind boggles trying to digest it all. There seem to be multiple and differing approaches to producing the kind of outcome I've just described. What I'm trying to investigate is whether there is a "Best Practice" approach I should be considering.
If anybody can pass on any suggested approaches from their experience, or recommend a good book or reference source that isn't an Oracle Manual (I've already read those), then that would be great.
Thanks in advance for any help or suggestions.
James
Here is a temporary workaround for the problem...
SQL> SELECT value(p).transform
       (
         dburiType('/SCOTT/STYLESHEET_TAB/ROW[ID = "1"]/SHEET/text()').getXML()
       ).getClobVal() AS result
     FROM nations p
     /
RESULT
3333 -
Hello,
I want to start a discussion to find a best practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object which uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a Business Partner, should also be changed or created. So I am searching for a best practice approach for implementing such a solution.
One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately this implementation method has only poor options for error handling. Second, it is also hard to keep the LUW together.
Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we haven't got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
So hopefully you can report about your implementations, to find a best practice approach for such requirements.
BR/VG
Dominik -
Best practice to Load FX rates to Rate Application in SAP BPC 7.5 NW
Hi,
What is the best practice/approach to load FX rates into the Rate Application in SAP BPC 7.5 NW? Is it from ECC or BW?
Thanks,
Rushi
I have seen both cases:
1) Rates come as a flat file from an external system (the treasury department), and ECC and BPC both load them into their respective systems in batch.
2) ECC pushes rate info to BW, and the data in turn gets pushed to BPC along with other scheduled process chains.
How are rates entering your ECC?
Shilpa -
'Best practice' for avoiding duplicate inserts?
Just wondering if there's a 'best practice' approach for handling potential duplicate database inserts in CF. At the moment, I query the db first to work out if what I'm about to insert already exists. I figure I could also just send the SQL and catch the error, which would then tell me the data's already in there, but that seemed a bit dodgy to me. Which is the 'proper' way to handle this kind of thing?
MrBonk wrote:
> Just wondering if there's a 'best practice' approach for handling potential
> duplicate database inserts in CF. At the moment, I query the db first to work
> out if what I'm about to insert already exists. I figure I could also just
> send the SQL and catch the error, which would then tell me the data's already
> in there, but that seemed a bit dodgy to me. Which is the 'proper' way to
> handle this kind of thing?
I wouldn't consider letting the db handle this as "dodgy". If you're seeing the majority of inserts as "ok", then you're saving at least one db interaction per insert, which can add up in high-transaction environments.
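The thread is about ColdFusion, but the "let the database enforce it" pattern is language-agnostic. Here is a minimal sketch in Python with SQLite (table and column names are made up): declare a unique constraint and catch the duplicate-key error, instead of querying first.

```python
# Sketch of "let the DB handle duplicates": rely on a UNIQUE constraint
# and catch the duplicate-key error on insert, rather than checking for
# existence with a prior query. Table/column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE, name TEXT)")

def insert_user(conn, email, name):
    """Return True if the row was inserted, False if it already existed."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO users (email, name) VALUES (?, ?)",
                         (email, name))
        return True
    except sqlite3.IntegrityError:  # raised on the UNIQUE violation
        return False

print(insert_user(conn, "a@example.com", "Alice"))  # first insert succeeds
print(insert_user(conn, "a@example.com", "Alice"))  # duplicate is rejected
```

Besides saving a round trip, this closes the race window between the check and the insert: two concurrent requests can both pass a SELECT-then-INSERT check, but the constraint catches them every time.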