Best Practices

Hello,
I do not know if this is the best place for this post, but after searching Google for "java documentation best practices" the results were not very clear or direct.
For instance, I have an interface (e.g. ExampleInterface) and a class (e.g. ImplementsExampleInterface) that implements the ExampleInterface.
ImplementsExampleInterface will have no code, just the ExampleInterface's overridden methods, and then I will have several other classes that extend ImplementsExampleInterface; it is these classes that will contain the code.
I am only asking for pointers to a proper place with examples and information on how to comment/document this kind of structure. For instance, should I document everything (interface, superclass, and subclasses) even if it is copy-paste, or should I put the effort into documenting the differences between the bottom classes?
Regards
Cad

Look at the source code for the Java SE standard library included in the JDK for examples of Javadoc best practices. In general I would focus on documenting the interface; the method documentation will then be automatically inherited by the implementing classes.
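For instance, here is a minimal sketch of that division of labor, reusing the names from the question (the method and its behavior are made up for illustration): the interface carries the full method documentation, and an implementing class either inherits it automatically by leaving the override undocumented, or pulls it in explicitly with {@inheritDoc} and appends only what differs.

    /** Hypothetical example interface carrying the full documentation. */
    public interface ExampleInterface {
        /**
         * Processes the given input and returns a result.
         *
         * @param input the value to process; must not be null
         * @return the processed result
         */
        String process(String input);
    }

    /** Base implementation; subclasses extend this and supply the real code. */
    class ImplementsExampleInterface implements ExampleInterface {
        /**
         * {@inheritDoc}
         * This base implementation documents only how it differs: it trims
         * the input before processing.
         */
        @Override
        public String process(String input) {
            return input.trim();
        }
    }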

Similar Messages

• ADF Faces & BC: Best practices for project layout

    Season's greetings, my fellow JDevelopers!
    Our software group has been working with ADF for around 5 years, and through the years we have accumulated a good amount of knowledge working with JDeveloper and ADF. Much of our current application structure was put together in the early days of JDeveloper 10, when there was more sample code floating around than there was "best practice" documentation. I understand this is a subjective topic and varies from site to site, but I believe there is a set of common practices our group has started to identify as critical to streamlining a development process (reusable decorated UI components, modular common business logic, team development with SVN, continuous integration/build, etc.). One of our development goals is to minimize dependency between engineers, as everyone is responsible for both client and middle layer implementation, without losing coding consistency. After speaking with a couple of the ACEs at the last OpenWorld, I understand much of our anticipated architectural requirements are met with JDeveloper 11 (with the introduction of templates, declarative components, bounded task flows, etc.), but due to time constraints on upcoming deliverables we are still about a year away from moving to that new release. The following is a little bit about our group/application.
    JDeveloper version: 10.1.3.4
    Number of developers: 7
    Developer responsibilities: build both Faces & BC code
    We have two applications currently in our production environments.
    1. A flavor of Steve Muench's dynamic JDBC credentials login module
    2. Core ADF Faces & BC application
    In our Core ADF Faces application, we have the following structure:
    OurApplication
         -OurApplicationLib (Common framework files)
         -OurApplicationModel (BC project)
              -src/org/ourapp/module1
              -src/org/ourapp/module2
         -OurApplicationView (Faces project)
              public_html/ourapp/module1
              public_html/ourapp/module2
              src/org/ourapp/backing/module1
              src/org/ourapp/backing/module2
              src/org/ourapp/pageDefs/
    Total Number of Application Modules: 15 (Including one RootApplicationModule which references module specific AMs)
    Total Number of View Objects: 171
    Total Number of Entities: 58
    Total Number of BC Files: 1734
    Total Number of JSPs: 246
    Total Number of pageDefs: 236
    Total Number of navigation cases in faces-config.xml: 127
    Total Number of application files: 4183
    Total application size: 180megs
    Are there any other ways to divide up this application? I.e., module-specific projects with separate faces-config files/databindings? If so, how can these files be "hooked" together? A couple of the ACEs have recommended that we separate all the entity files into their own project, which makes sense. Also, we are looking into Maven builds, which should remove those pesky model.jpr files that constantly get “touched”. I would love to hear how other groups are organizing their applications and anything else they would like to share as an ADF best practice.
    Cheers,
    Wes

    After discussions over the summer/autumn by members of the ADF Methodology Group I have published an ADF Coding Standards wiki page that people may find useful:
    [http://wiki.oracle.com/page/ADF+Coding+Standards]
    It's aimed at ADF 11g and is intended to be a living document - if you have comments or suggestions please post them to the ADF Methodology google group ( [http://groups.google.com/group/adf-methodology?hl=en] ).

• Best practices for ODI interfaces

    I was wondering how everyone is handling the errors that occur when running an interface with ODI.
    Our scenario:
    We have customer data that we want to load each night via ODI. The data is in a flat file and a new file is provided each night.
    We have come across an issue where a numeric field had non-numeric data in it, so ODI created a bad file containing the bad record, and an error file with the error message. We also had some defined constraints that forced records into the E$ table.
    My question is how everyone handles looking for these errors. We would like them to be reported to just one place (an Oracle table) so that when the process runs we can look at that one table and then act on the issues. As shown above, ODI puts errors in two different places: database errors in a flat file and user-defined errors in the E$ tables.
    I was wondering if anyone has come across this issue and might be able to tell me what was done to handle the errors that occur, or what the best practices might be for handling them?
    Thanks for any assistance.

    If you have only a few fields affected by conversion problems, you could try inserting an ODI constraint, or you could modify the LKM to load the bad file if present.
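    As an illustration of consolidating both kinds of errors into the single Oracle table the poster asks for, here is a minimal sketch in Java/JDBC. Everything specific is hypothetical: the LOAD_ERRORS target table, the E$_CUSTOMER error table (though ODI's E$ tables do carry an ERR_MESS column), the bad-file path, and the connection details.

        import java.io.BufferedReader;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Hypothetical consolidation job: copy E$ rows and bad-file rows into LOAD_ERRORS.
        public class OdiErrorConsolidator {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "odi_work", "secret")) {
                    con.setAutoCommit(false);
                    // 1. User-defined constraint violations from the E$ table.
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO load_errors (source, err_message) " +
                            "SELECT 'E$_CUSTOMER', err_mess FROM e$_customer")) {
                        ps.executeUpdate();
                    }
                    // 2. Database rejects written by the loader to the bad file.
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO load_errors (source, err_message) VALUES ('BAD_FILE', ?)");
                         BufferedReader bad = Files.newBufferedReader(Paths.get("/data/odi/customer.bad"))) {
                        String line;
                        while ((line = bad.readLine()) != null) {
                            ps.setString(1, line);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                    }
                    con.commit();
                }
            }
        }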

• Best practices for customizing for performance

    Hello,
    I would like to know the list of best practices for customizing BPC NW 7.5 for performance.
    Best regards
    Bastien

    Hi,
    There are a few how-to guides on SDN which will give you a basic idea of script logic. Apart from this, you can refer to the help guide on help.sap.com.
    The templates might also affect performance. The number of EVDRE functions, the number of expansion dimensions, and the number of members on which expansion takes place all affect performance. Complex formatting in the template will also have an effect.
    Hope this helps.

• Best practices for GATP by SAP

    Hi all,
    I am not able to download the best practices for GATP by SAP from http://help.sap.com/bp_scmv250/BBLibrary/HTML/ATP_EN_DE.htm. It seems the documents have been removed. Can someone who already downloaded them share them with me?
    Also, can you provide working links for the best practices for SNP and PP/DS?
    Thank you,
    Ram

    Hello Ram
    Please check this wiki page - it has good content and some useful links
    APO-GATP General Information - Supply Chain Management (SCM) - SCN Wiki
    and
    Find out more on the RDS solution for GATP at: http://service.sap.com/rds-gatp
    If you search http://service.sap.com/bestpractices you will find documents about best practices in GATP. help.sap.com is a good resource for GATP to start with as well.
    You can also read the blog below, written by me:
    Global Available To Promise (GATP) Overview
    Hope this will help
    Thank you
    Satish Waghmare

• Need best practice advice

    Hey guys,
    Can anyone share with me the best practices for the setup of an Oracle database? I know that the amount of redo, the grouping, the file system layout, etc. depend on the size of your DB, so to help, here are the specs of mine:
    oradata: 200GB
    change rate: 50k/s (I got that by dividing the size of my archived redo logs by the amount of time between the first and last archive log)
    This is a standard database (not OLTP or data warehouse) used to store client information.
    My RPO (Recovery Point Objective) is 30 minutes.
    Some quick questions:
    1. How should I lay out the file system?
    2. How many redo logs/groups, and what size?
    3. How many control files, and where should I put them?
    4. How should I set up the log switching?
    Any quick doc? I don't want to read a 300-page Oracle document :-) That is why I'm relying on your knowledge.
    Thanks

    Sabey wrote:
    Ok a bit more information.
    Storage: SAN, RAID 5 disk only
    Since it's SAN, the RAID 5 (which is generically bad for performance in any update environment) will have minimal adverse effect (because the RAID 5 is hidden by massive cache). Just try to spread the data files across as many disks as possible. Oracle works best with datafiles on 'SAME' (Stripe and Mirror Everything). Spread the data files across all possible disks and mix data and index to try to get randomization.
    Sabey wrote:
    No ASM
    Pity. A lot of potential transparency will be side-stepped.
    Sabey wrote:
    OS: Solaris 10 on an M4000 (2 SPARC 2.1GHz, 4 cores each), 16GB RAM
    Finally some meat. ;-) I assume Enterprise Edition, although for the size, the transaction rate proposed, and the configuration, Standard Edition would likely be sufficient, assuming you don't need EE-specific features. You don't mention the other things that will be stealing CPU cycles from Oracle, such as the app itself or batch jobs. As a result, it's not easy to suggest an initial guess at memory size. App behaviour will dictate PGA sizing, which can be as important as SGA size, if not more so. For the bland description of the app you provide, I'd leave 2GB for the OS, subtract whatever else is required (app & batch, other stuff running on the machine), and split the remaining memory 50/50 between SGA and PGA until I had stats to change that.
    Sabey wrote:
    Like I said, I expect a change rate of 50k/s. Is there a rule of thumb for the size and number of redo logs, etc.? No bulk loads; data is entered by people from a user interface, no machine-generated data. Queries for reports, but not a lot.
    Not too much to worry about then. I'd shoot for a minimum of 8 redo logs, mirrored by Oracle software to separate disks if at all possible, and size the log files to switch roughly every 15 minutes under typical load. From the looks of it, that would be (50k/s * 60 sec/min * 15 min), or about 50M: moderately tiny. And set ARCHIVE_LAG_TARGET to 15 minutes so you have a predictable switch frequency.
    Sabey wrote:
    BTW, what about direct I/O? Should I mount all Oracle file systems in that mode to prevent use of the OS buffer cache?
    Again, this would be eliminated by using ASM, but here is Tom Kyte's answer confirming direct I/O: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4159251866796
    Your environment is very, very small in Oracle terms. Not too much to fuss over. Just make sure you have a decent backup/recovery/failover strategy in place and tested. Use RMAN for the backup/recovery, and Data Guard (or DBVisit for Standard Edition) for the failover.

• Linux Native Multipath and ASM Best Practices questions

    Hello,
    I'd like to know your opinions on some questions I have.
    I am using Linux native multipath without ASMLib, and I wonder:
    1.
    Is it mandatory/best practice to partition (with fdisk) device-mapper LUNs before using them to build an ASM diskgroup, or does Oracle ask to partition them because ASMLib works better on partitions? In other words, are there any issues with using /dev/device-mapper/mpath1 directly, or do I have to use /dev/device-mapper/mpath1p1 with a 1 MB offset?
    2.
    Is it better to give the proper user/group to the mpath LUNs via rc.local or via udev rules? Is there any difference?
    Please write what you have experienced.
    Thanks and bye

    ottocolori wrote:
    Hello,
    I'm trying to get a clearer picture of it, and as far as I know:
    1 -
    Assuming you need to use the whole disk, partitioning it is mandatory only if you use ASMLib, as it works only on partitioned disks.
    Yes, you need to partition the disk before presenting it to ASMLib.
    ottocolori wrote:
    2 -
    There is no need to skip the first cylinder, or at least I can't find official info about that. What do you think? TIA
    No need on the Linux platform to skip the first cylinder. If I remember correctly, you need to skip the first cylinder on Solaris, as there is a bug there.
    Cheers

• VMware Data Recovery best practices

    Hi,
    I am looking for VMware Data Recovery best practices. Everywhere on the internet I find the following link: http://viops.vmware.com/home/docs/DOC-1551
    But this is not a valid link, and I can't find the document anywhere...
    Thanks


• Best practice on disabling fields?

    Hi,
    I'm using JDev 11gR1. I have one JSPX page named firstpage.jspx; this page is on an unbounded task flow.
    I want to disable more than 15 fields on a single button hit, but every developer here has a different approach to doing this.
    Can one of the experts say which approach is best?
    Developer 1's approach:
    1. Bind all the fields in your bean.
    2. Inside the button actionListener call bindVar.setDisabled(true).
    3. Add the partial targets programmatically.
    Developer 1's claimed advantage:
    1. The traditional ADF approach (create bindings); apart from that, nothing else is needed.
    Developer 2's approach:
    1. Create 15 Boolean variables in your bean, along with getters and setters.
    2. Have a common method: public void endis(Boolean bool) { this.setVar(bool); }
    3. Inside the button actionListener call endis(true).
    4. Add the partial targets programmatically.
    5. Finally, on all fields use an EL expression like #{scopetype.beanname.var}.
    Developer 2's claimed advantages:
    1. The traditional Java approach (getters and setters).
    2. If the requirement changes in the future (e.g. only some fields need to be disabled, or the same thing must be repeated for other buttons), there is no need to rewrite everything; only a little effort is needed.
    Developer 3's approach:
    1. Set a pageFlowScope variable inside the actionListener: AdfFacesContext.getCurrentInstance().getPageFlowScope().put("disable", "y");
    2. On all fields use an EL expression like #{pageFlowScope.disable eq 'y' ? true : false}.
    Developer 3's claimed advantage:
    1. No bindings and no variable declarations, getters, or setters in the bean; just a small piece of code.
    Developer 1's view of the others:
    -- Developer 2 has a lot of code just for disabling the fields.
    -- Developer 3's is not the best approach, because the Oracle docs say pageFlowScope is only for bounded task flows (although pageFlowScope works in an unbounded task flow too).
    Developer 2's view of the others:
    -- Developer 1 binds each and every field into the bean just to disable the fields; the framework takes care of the default bindings, so why should I?
    -- Developer 3: the framework will create getters and setters for that anyhow, and if the scope value is lost everything fails.
    Developer 3's view of the others:
    -- Developer 1 binds each and every field into the bean just to disable the fields; the framework takes care of the default bindings, so why should I?
    -- Developer 2 has a lot of code just for disabling the fields.
    From the bottom line of the developer discussion I conclude: everyone says "mine is the best", but each approach has its own strengths and weaknesses.
    My questions:
    1. Is there any other approach apart from these three?
    Before answering the second question, keep these points in mind:
       Point 1: simple and reusable for all developers.
       Point 2: resource usage (memory allocation).
       Point 3: less time to do this job, now and in the future.
    2. Please suggest which approach is best from all of these perspectives.
    Thanks,

    Developer 1's approach:
    Developer 1's claimed advantage:
    1. The traditional ADF approach (create bindings).
    Since when is this the traditional approach? Never bind components to a bean if it's not really necessary! As you have other solutions, this one is out of the race.
    Developer 2's approach:
    Developer 2's claimed advantages:
    1. The traditional Java approach (getters and setters).
    2. If the requirement changes in the future (e.g. only some fields need to be disabled, or the same thing must be repeated for other buttons), there is no need to rewrite everything; only a little effort is needed.
    Yes, but how likely is it that you will have to change which fields to enable/disable?
    From my point of view you are putting in a lot of effort for something you don't know will happen at all. Even if you know it's coming, it would be easier to build groups and use a group variable to enable/disable the fields. You can then change a field from group A to group B without much programming.
    Developer 3's approach:
    Developer 3's claimed advantage:
    1. No bindings and no variable declarations, getters, or setters in the bean; just a small piece of code.
    You don't even need the code in the actionListener, as you can use a setPropertyListener for this. The question here is why put the variable into pageFlowScope? We don't know the use case, so we can't answer that. Generally you should not put anything in pageFlowScope if it's not needed outside the page, and we don't know whether it is. PageFlowScope is a broad scope, so you should think about using a smaller scope if possible.
    4. Another solution: put all the fields into one layout container and write a bean method which toggles all input fields of a given container. Something like this:
        // toggle the disabled state of all child UI components inside the given component
        private void toggleDisableInputItems(AdfFacesContext adfFacesContext, UIComponent component) {
            List<UIComponent> items = component.getChildren();
            for (UIComponent item : items) {
                // recurse into nested containers first
                toggleDisableInputItems(adfFacesContext, item);
                if (item instanceof RichInputText) {
                    RichInputText input = (RichInputText) item;
                    input.setDisabled(!input.isDisabled());
                    adfFacesContext.addPartialTarget(input);
                } else if (item instanceof RichInputDate) {
                    RichInputDate input = (RichInputDate) item;
                    input.setDisabled(!input.isDisabled());
                    adfFacesContext.addPartialTarget(input);
                }
            }
        }
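    For illustration only, a sketch of how this might be wired up; the listener name and the 'container' field (assumed to be a bean field bound to the layout container holding the 15 fields) are hypothetical:

        // Hypothetical actionListener for the enable/disable button.
        // 'container' is assumed to be bound to the layout container
        // (e.g. an af:panelFormLayout) that holds the fields.
        public void toggleFieldsListener(ActionEvent actionEvent) {
            toggleDisableInputItems(AdfFacesContext.getCurrentInstance(), container);
        }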
    There are probably more solutions. I would stick with 3 or 4 depending on the use case.
    Timo

• Best Practice of Error Handling calling multiple external components

    I have used EJBs in my question, but it applies to any component/service that is exposed.
    If I have EJB1 calling EJB2, what is the standard approach to handling an error received from EJB2? Does EJB1 simply pass it back or wrap its own error around it?
    I use the term error to include exceptions and all information that would be used for debugging and used by the caller.
    If we allow the errors from EJB2 to be returned to the caller of EJB1, then the caller of EJB1 must be aware of those errors.
    If EJB1 wraps the errors from EJB2, then the caller of EJB1 only needs to know about errors returned from EJB1.
    This can be extended a little: suppose EJB1 calls multiple EJBs. Some of the EJBs may be external, or may return an error from a 3rd-party tool.
    What should be returned if EJB3 can return the same error as, say, EJB4? If (for some reason) the caller needs to know exactly where the problem occurred, it implies that additional information needs to be attached to the original error.
    What would be a 'best practice' approach to returning the errors to the original caller?

    If I have EJB1 calling EJB2, what is the standard approach to handling an error received from EJB2?
    It depends on the context.
    Does EJB1 simply pass it back or wrap its own error around it?
    Caller? EJBs are in a layer. For application programming, layers will seldom return errors (plural) to other layers. Within a layer, handling depends on the context.
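    To make the wrapping option from the question concrete, here is a minimal sketch; the exception type, the failedComponent field, and the calling code in the trailing comment are all hypothetical, not a prescribed API:

        import javax.ejb.ApplicationException;

        // Hypothetical wrapper exception for EJB1's layer. Callers of EJB1 only
        // need to know this type; the original EJB2/EJB3 error travels along as
        // the cause, and failedComponent records exactly where the problem arose.
        @ApplicationException(rollback = true)
        public class ServiceLayerException extends Exception {
            private final String failedComponent;

            public ServiceLayerException(String message, String failedComponent, Throwable cause) {
                super(message, cause);
                this.failedComponent = failedComponent;
            }

            public String getFailedComponent() {
                return failedComponent;
            }
        }

        // Sketch of use inside EJB1 (ejb2 and its exception type are made up):
        //
        //   try {
        //       ejb2.process(request);
        //   } catch (Ejb2ProcessingException e) {
        //       throw new ServiceLayerException("processing failed", "EJB2", e);
        //   }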

• Best practices for RMAN backup management for many databases

    Dear all,
    We have many 10g databases (>40) hosted on multiple Windows servers which are backed up using RMAN.
    A year ago, all backups were implemented through Windows Scheduled Tasks using some batch files.
    We have since been busy (re)implementing / migrating those backups in Grid Control.
    I personally prefer to maintain the backup management in Grid Control, but a colleague now wants to go back to the batch files.
    What I am looking for here is advice on the management of RMAN backups for multiple databases: do you use Grid Control, a third-party backup management tool, or even a home-made solution?
    One of the discussion topics is the work involved in case the central backup location changes.
    Well... any real-life advice on best practices / strategies for RMAN backup management for many databases will be appreciated!
    Thanks,
    Thierry

    Hi Thierry,
    Thierry H. wrote:
    Thanks for your reaction.
    So I understand that you do not use Grid Control to manage the backups, and as a consequence you also have no 'direct' overview of the job schedules.
    One of my concerns is also to avoid too many backups starting at the same time, to prevent network/storage overload. Such an overview is available in Grid Control's Jobs screen.
    And, based on your strategy, do you recreate a 'one-time' Oracle scheduler job for every backup, or do your scripts create an Oracle job with multiple schedules?
    You're very welcome!
    Well, Grid Control is not an option for us, since each customer is in a separate infrastructure with their own licensing. I have no real way (in contrast to your situation) to have a centralized point of control, but on the other hand that means I don't have to consider network/storage congestion like you have to.
    The script is run from a "permanent" job within the database scheduler, created like this:
        begin
          dbms_scheduler.create_job(
            job_name        => 'BACKUP',
            job_type        => 'EXECUTABLE',
            job_action      => '/home/oracle/scripts/rman_backup.sh',
            start_date      => trunc(sysdate) + 1 + 7/48,   -- tomorrow at 03:30
            repeat_interval => 'trunc(sysdate) + 1 + 7/48',
            enabled         => true,
            auto_drop       => false,
            comments        => 'execute backup script at 03:30');
        end;
        /
    The "master script" then determines which backup level to use, based on the weekday from the OS. The actual job schedule (start date, run interval, etc.) is set together with the customer's IT/IS department, to avoid congestion on the backup resources.
    I have no overview of backup status, run times, etc., but I have made monitoring scripts that alert me if/when a backup either fails or runs for too long. This, in addition to scheduled disaster/recovery tests, makes me sleep rather well at night. ;-)
    I realize there might be better ways of doing backup scheduling in your environment, since my requirements are so completely different from yours, but I guess we all face the same challenge of unifying our environments as much as possible, to minimize the amount of actual work we have to do. :-)
    Good luck!
    //Johan
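    As an illustration of the monitoring idea Johan describes, here is a hedged sketch that flags failed or long-running backups; the connection details and the 2-hour threshold are hypothetical, while V$RMAN_BACKUP_JOB_DETAILS is a real view available from 10g onwards:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Hypothetical monitoring check: report RMAN jobs from the last 24 hours
        // that failed or ran longer than two hours.
        public class BackupMonitor {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor", "secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT status, start_time, elapsed_seconds " +
                         "FROM v$rman_backup_job_details " +
                         "WHERE start_time > SYSDATE - 1 " +
                         "AND (status LIKE 'FAILED%' OR elapsed_seconds > 7200)")) {
                    while (rs.next()) {
                        // in real life this would send a mail or a page, not print
                        System.out.printf("ALERT: backup %s at %s (%d s)%n",
                            rs.getString("status"), rs.getTimestamp("start_time"),
                            rs.getLong("elapsed_seconds"));
                    }
                }
            }
        }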

• Cisco Prime SNMP Traps Best Practice

    The Cisco Prime documentation recommends configuring switches to send SNMP traps, but it does not give any more details.
    I was wondering what sorts of SNMP traps people in the community are using with Cisco Prime 2.1. I'm looking for some sort of best practice, or an idea of which traps would be the most useful to configure on the switches to send to Prime.

    Hi ,
    SNMP traps need to be configured only on the device end; there is no configuration needed on PI.
    You can enable all the traps that you want, e.g.:
    snmp-server enable traps syslog
    snmp-server enable traps ipsec start stop
    snmp-server enable traps memory-threshold
    snmp-server enable traps interface-threshold
    snmp-server enable traps connection-limit-reached
    snmp-server enable traps cpu threshold rising
    etc.
    You can then monitor them in PI (Administration > System Settings > Severity Configuration, Link Down).
    check the below link as well:
    https://supportforums.cisco.com/discussion/11919481/prime-infrastructure-20-link-status-alarms
    Thanks-
    Afroz

• Best Practice links for HCM/HR

    Hello Gurus...
    I am trying to get some best practices for HR/HCM for a BI implementation, but the links I got in SDN threads don't work.
    Can anyone please guide me to the correct URL?
    Also, is there any difference between HR and HCM?
    Is there any change in HR between R/3 4.6 and ECC? I have documents for 4.6.
    Thank you,
    Kris

    http://help.sap.com/bp_bw370/html/index.htm
    then Business Intelligence -> Preconfigured Scenarios... here you can find the best practices for HCM.
    HCM is the new terminology in NW BI for HR in BW.
    There must be some differences between 4.6 and ECC, but most of the ECC content is available as 3.x content:
    http://help.sap.com/saphelp_nw70/helpdata/en/2a/77eb3cad744026e10000000a11405a/frameset.htm

• Best practices to set timers in CDA + WSA

    I'm deploying WSA in transparent mode with WCCP redirection from an ASA.
    Everything is OK, but I would like to know the best practices for setting the correct timers to avoid mismatches in the IP mappings.
    On the WSA the relevant parameters are:
    Credential Cache Options:
    a) Surrogate Timeout: value to set
    b) Client IP Idle Timeout: value to set
    On the CDA the relevant parameters are:
    c) dcStatusTime
    d) dcHistoryTime
    e) userLogonTTL
    Can you suggest values?
    Further, what happens if I set these values higher or lower? What is the risk?
    Thanks for your support.

    - On the CDA, we changed the History timer to 60 minutes so that every 60 minutes the CDA clears out the user-to-IP mapping cache and checks with AD to get the new mappings. This setting lowers the false positives on the WSA, as the CDA will have more up-to-date mappings.
    However, we should not lower this value too much, otherwise the CDA would re-query AD more frequently, increasing the load on the CDA as well as on Active Directory, and might also cause performance issues on the CDA.
    - As per the customer's request, we configured the re-authentication timer to 20 minutes on the WSA (tuiconfig command). This makes the WSA clear out the user's session every 20 minutes and ask the end user to re-authenticate.
    Please note that most web browsers cache the user's credentials and thus reply to the WSA's re-authentication request with the cached credentials. This gives the user a seamless working environment, without being prompted to re-authenticate again and again, and without being aware that re-authentication has already happened in the background.

• Import best practices

    Hello everybody
    I want to import the SAP Best Practices; how do I import them?
    Can I use transaction code SAINT?
    I read the Quick Guide document, but I can't understand how to upload the files, i.e. which transaction code is used.
    Also, how do I directly create a transport request and a workbench request?
    Can anybody please suggest how to do this?
    Thanks
    ganesh

    Hi,
    Go through Note 847091; at the bottom of the note you will find "BL_Quick_Guide_EN_UK.zip".
    You can follow that document for the BP installation.
    Note: that note is for the SAP Best Practices Baseline Package UK, SG, MY V1.500; your requirement may be different, so search accordingly.
    --Kishore
