Updates to aliases seem not to be taken into account

Hello,
After re-installing my email server on Mac OS X Server Snow Leopard, I tried to set up my user email accounts again by modifying the aliases file, regenerating the aliases.db file, and restarting the email service. Initially, I have an account whose email account is "firstnamelastname". I then added a line to the aliases file like:
<<
firstnamelastname: firstname.lastname, firstname, info
>>
Doing so, I should have three new email aliases.
Then I regenerated the aliases database using "newaliases" (or "postalias" to re-create the db).
Finally, I restarted the service and tested the aliases.
Results:
A trace in the SMTP log like "No record for user firstname.lastname".
Any ideas? Thanks.
Regards,
Daniel
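
For reference, a minimal sketch of the rebuild steps on a stock Postfix setup (the /etc/aliases path and hash map type are assumptions; Snow Leopard Server ships Postfix, but your configuration may point elsewhere):

# Rebuild the alias database after editing the aliases file
sudo postalias /etc/aliases        # writes /etc/aliases.db
# or, equivalently:
sudo newaliases
# Make the running Postfix instance pick up the change
sudo postfix reload
# Check which alias files Postfix actually consults; if the file you edited
# is not listed here, the new entries will never be seen
postconf alias_maps alias_database

If alias_maps points somewhere other than the file you edited, that could explain why the new aliases are ignored.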

For that group, you need 2 counting rules:
- one for Holiday Class 0 (not a public holiday) and Day Work Schedule Class 0 (not a working day)
- one for Holiday Class 0 (not a public holiday) and Day Work Schedule Class 1 to 9 (a working day)
If you want, you can add another one or two for Holiday Class 1 to 9 (a public holiday).
Extract from our T556C:
2       1   10 Q      10 Sick leave deduct from quotas (incl personal/spec) XXXXX   X  X    XXXXXXXXXX  XXXXXXXXX X X        XX XX    100.00  100.00              1     1       10              
2       1   10 Q      11 Sick leave deduct from quotas (incl personal/spec) XXXXX   X  X    XXXXXXXXXX X          X X        XX XX      0.00    0.00              1     1       10              
2       1   30 Q      10 Special leave with pay (with quotas deduction)     XXXXX   X  X    XXXXXXXXXX  XXXXXXXXX X X        XX XX    100.00  100.00              1     1       30              
2       1   30 Q      11 Special leave with pay (with quotas deduction)     XXXXX   X  X    XXXXXXXXXX X          X X        XX XX      0.00    0.00              1     1       30

Similar Messages

  • Public Holiday calendar not to be taken into account for Sick Leave

    Dear All,
    I need help.
    Public Holiday calendar not to be taken into account for Sick Leave.
    E.g. when an employee applies for sick leave, a public holiday between the dates shouldn't be considered. How can I configure this requirement?
    Please help.
    Regards,
    Poornima

    For that group, you need 2 counting rules:
    - one for Holiday Class 0 (not a public holiday) and Day Work Schedule Class 0 (not a working day)
    - one for Holiday Class 0 (not a public holiday) and Day Work Schedule Class 1 to 9 (a working day)
    If you want, you can add another one or two for Holiday Class 1 to 9 (a public holiday).
    Extract from our T556C:
    2       1   10 Q      10 Sick leave deduct from quotas (incl personal/spec) XXXXX   X  X    XXXXXXXXXX  XXXXXXXXX X X        XX XX    100.00  100.00              1     1       10              
    2       1   10 Q      11 Sick leave deduct from quotas (incl personal/spec) XXXXX   X  X    XXXXXXXXXX X          X X        XX XX      0.00    0.00              1     1       10              
    2       1   30 Q      10 Special leave with pay (with quotas deduction)     XXXXX   X  X    XXXXXXXXXX  XXXXXXXXX X X        XX XX    100.00  100.00              1     1       30              
    2       1   30 Q      11 Special leave with pay (with quotas deduction)     XXXXX   X  X    XXXXXXXXXX X          X X        XX XX      0.00    0.00              1     1       30

  • Open sales orders not to be taken into account

    Hi all,
    While setting up credit management, I unchecked open orders in the automatic credit control.
    Still, during the credit check the system picks up the open order value.
    Can anyone please suggest how to avoid including open order values?
    Cheers,
    Ram

    Hi All,
    Just saw all your replies; thanks a lot for all of them.
    I have
                unchecked open items
                unchecked open orders
       and the update group is 0000018.
    Still the system takes the open order value into consideration.
    On analysis I found that if it is a dynamic check, then even if you have unchecked open items and open orders, the system still takes sales orders into consideration.
    But I have one more problem: when I change the Horizon Date (which now shows the current date) in FD32 and save it, on reopening the system shows the current date again. How can I change it?
    Thanks in Advance.
    Cheers,
    Ram

  • I have OS 10.4.11 and iTunes 9.2.1 (5). My iPhone 4s does not appear in iTunes; only iPhoto recognizes it. Updating iTunes seems not possible. How can I make my iPhone appear in iTunes?

    I have OS 10.4.11 and iTunes 9.2.1 (5). My iPhone 4s does not appear in iTunes; only iPhoto recognizes it. Updating iTunes seems not possible. How can I make my iPhone appear in iTunes?

    From the box the iPhone came in:
    "Syncing with iTunes on a Mac or PC requires:
    Mac: OS X v10.5.8 or later
    PC: Windows 7; Windows Vista; or Windows XP Home or Professional with Service Pack 3 or later
    iTunes 10.5 or later (free download from www.itunes.com/download)"
    You will need Mac OS X 10.5.8 or later in order to get iTunes 10.5 or later, in order to connect your iPhone.

  • New values not taken into account in MASS / XD99

    I am trying to use a background job to block Customers for Billing at month end.
    In MASS/XD99  I created a variant to update field KNVV-FAKSD (Sales Area Billing Block).
    But running the variant in background (using MASSBACK), it seems the new values I thought I saved in the variant in MASS/XD99 are not taken into account.
    I opened the variant again in MASS/XD99 and indeed it seems it had not saved the new values, because the screen again shows the current values.
    I tried a couple more times, but every time I open the variant I just saved, the new values revert to the 'current values'.
    Is this the reason MASSBACK is also making no updates? What am I doing wrong?
    Thanks in advance for your comments & advice.

    Here is the second method:
    Regarding your issue:
    Retraction is based on the records of the SEM buffer. Zero records are in general filtered out in the buffer, so they cannot be considered for planning functions that execute the retraction. I regret that this is the normal system behaviour in the standard. As a workaround, please use a dummy key figure that is set to a value <> 0.
    The only way is a workaround. If you fill a dummy key figure with an arbitrary value, the framework recognizes this as not 0, even if the proper key figure is 0. In that case the zero retraction will work. So, if it does not exist yet, create an additional dummy key figure in the cube and proceed as described.
    For all of the retractions (CO-OM, IM, PS), BPS planning functions are used and they run in the general BPS framework. This framework is designed to explicitly not process zero records, for performance reasons.
    If zero records were processed, planning functions would run longer, but they cannot deal with those records anyway. Therefore development decided on the current design: delete all zero records first, before processing them with a planning function. This decision has been discussed several times before and is not likely to be changed.
    Regarding the SAP note: the workaround I explained can be found in OSS note 768822 "SEM-BPS: Retraction of WBS element w/ Pushback".
    I hope this explains the reason.
    Please kindly confirm the message if your issue is answered.
    Thanks for your support and cooperation.

  • WebLogic 11g: Weblogic-Application-Version not taken into account

    Hello,
    I'm trying to set up Production Redeployment on my EAR, under WebLogic 10.3.6.
    However, I can't figure out how to make Weblogic-Application-Version work in MANIFEST.MF
    Here is my EAR structure:
    MyApplication
    --- APP-INF
    --- --- lib
    --- --- classes
    --- META-INF
    --- --- application.xml
    --- --- weblogic-application.xml
    --- --- MANIFEST.MF <- file I update
    --- WebApp1
    --- --- META-INF
    --- --- --- MANIFEST.MF
    --- --- WEB-INF
    --- --- --- classes
    --- --- --- lib
    --- --- --- web.xml
    --- --- --- weblogic.xml
    --- WebApp2
    --- --- same as WebApp1
    MANIFEST.MF content is the following:
    Manifest-Version: 1.0
    Weblogic-Application-Version: VERSION_1
    The problem is that it's not taken into account when I deploy the application in the WebLogic console.
    When I use staged mode, the line Weblogic-Application-Version: VERSION_1 is removed from the staged MANIFEST.MF file.
    The only way I have managed to version my application is to add the -appversion flag to weblogic.Deployer, but I don't want to use that in a Production environment (and it's not recommended in the documentation!).
    Any idea why Weblogic-Application-Version is not taken into account? I tried to add debug mode to the weblogic/debug node, but with no success.
    Thanks in advance,
    Julien

    I tested it at my end and was able to redeploy it successfully with the application versions.
    Are you saying that when deploying through the WebLogic console, the archive version is not being honored/picked up? Is that correct?
    If yes, what does your WebLogic console show for the archive version when you first try to deploy your application with VERSION_1?
    I mean, when you are on the "Optional Settings" page, do you see the Archive Version as VERSION_1?
    Deployments >> Click on Install >> select your application (MyAPP) >> Install this deployment as an application >> Select deployment targets >> Optional Settings
    Example from the Optional Settings page for my application:
    What do you want to name this deployment?
    Name: MyAPP     
    Archive Version:     
    VERSION_1
    This can confirm that the application being deployed is a versioned one; if it doesn't show up, then you need to check why it's not getting picked up.
    Moreover, you can enable the "deploy" debug flag to see what's going wrong in your case during the deployment process:
    Server > Debug > Weblogic > Deploy
    Refer to the viewlet below on Production Redeployment:
    http://download.oracle.com/otn_hosted_doc/wls/redeployment/wls-side-by-side-non-annotation_viewlet_swf.html
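    Two quick checks that may help here (both are sketches; the EAR name, host, port and credentials are placeholders). First, inspect the manifest inside the built EAR: the staging step can only strip what was actually packaged, and a MANIFEST.MF whose last attribute line lacks a trailing newline is sometimes silently dropped by the JAR parser. Second, for comparison only, the -appversion flag mentioned above can force a version from the command line:
    # Verify the version header survived packaging
    unzip -p MyApplication.ear META-INF/MANIFEST.MF
    # Command-line versioning for comparison; not recommended for Production,
    # as noted above
    java weblogic.Deployer -adminurl t3://adminhost:7001 \
         -username weblogic -password welcome1 \
         -deploy -name MyApplication -appversion VERSION_1 MyApplication.ear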

  • Sales orders and reservations not taken into account in MPS / MRP

    Hello,
    I am implementing the MPS / MRP and I have the following problem:
    I have modified the necessary material master fields to take into account some materials on the MPS / MRP execution. These fields are MRP type, Lot Size, Strategy group and Availability Check.
    Then, executing the MPS / MRP I have realised that only the sales orders and reservations created after the material master modifications are taken into account when I execute the MPS / MRP.
    I suppose this is because the sales orders and reservations created before the material master modifications did not create requirements, because the availability check was not set for the material and so on... but I need to take these "old" reservations and sales orders into account.
    Is there any way to "update" these orders and reservations, or do I need to delete them and create new ones?
    There are several sales orders and reservations created before the material master modifications. I think there has to be a standard way to update them.
    Thank you very much in advance,
    Jordi.

    Dear,
    Are you getting these reservations in MD04?
    Check the planning file entry for these materials in MD21; if it is missing, create it through MD20 and take an MRP run with planning mode 3 (Delete and recreate) and processing key NETCH.
    Also check that the firmed receipt is within the rescheduling horizon. The rescheduling horizon is set in Customizing for the MRP group (transaction OPPR, table T438M). A default value can be predefined in the Customizing of plant data (transaction OPPQ, table T399D). Define a longer duration for it.
    The firmed receipt (the MRP element) participates in the rescheduling check. You can set which MRP elements participate in the rescheduling check in the Customizing of the plant data (Transaction OPPQ).
    Else you can carry out modification A using the MD_CHANGE_MRP_DATA BAdI and the CONSIDER_RESB method without making a modification.
    Regards,
    R.Brahmankar

  • [ACCEPT][PROMPT] Input not taken into account in the script

    Hi,
    I'm preparing SQL scripts to create schemas; one of the scripts is the main one.
    In that one I'm requesting the passwords for two users, and a path to create an oracle directory.
    The password for the first user and the path are taken into account.
    But the second password requested seems not to be retained.
    Here is the script:
    SET ECHO OFF;
    spool epss_schema.log;
    ACCEPT EPSS_PASSWD char PROMPT 'Enter EPSS Schema password: ' hide;
    ACCEPT PIC_DIRECTORY char PROMPT 'Enter The Directory where the PIC dump files will be stored:' ;
    create user epss10 identified by &EPSS_PASSWD default tablespace users temporary tablespace temp;
    grant dba to epss10;
    connect epss10/&EPSS_PASSWD;
    @EPSS_SCHEMA.sql
    @EPSS_ACTIVITY_CODES_DATA.sql
    @EPSS_BCP_TABLE_DATA.sql
    @EPSS_CALL_SETUP_TYPE.sql
    @EPSS_COUNTRY_CD_DATA.sql
    @EPSS_ERC_KEYWORDS_DATA.sql
    @EPSS_ERC_KEYWORDS_LS_DATA.sql
    @EPSS_ERC_KEYWORDS_PE_DATA.sql
    @EPSS_ERC_KEYWORDS_SH_DATA.sql
    @EPSS_GEN_ACTIVITY_CODES_DATA.sql
    @EPSS_INSTRUMENT_LIST_DATA.sql
    @EPSS_KEYWORDS_DATA.sql
    @EPSS_NACE_CD_DATA.sql
    @EPSS_REVIEW_PANELS_DATA.sql
    @EPSS_REVIEW_PANELS_GEN_DATA.sql
    @EPSS_STATE_DATA.sql
    create or replace directory PIC_IMP_DIR as '&PIC_DIRECTORY';
    ACCEPT EPSS_DUMP_PASSWD char PROMPT 'Enter EPSS_DUMP Schema password: ' hide;
    create user epss_dump10 identified by &EPSS_DUMP_PASSWD default tablespace users temporary tablespace temp;
    grant connect,resource,JAVAUSERPRIV to epss_dump10;
    grant CREATE DATABASE LINK to epss_dump10;
    grant CREATE PUBLIC SYNONYM to epss_dump10;     
    grant CREATE SYNONYM to epss_dump10;
    grant CREATE TABLE to epss_dump10;
    grant CREATE VIEW to epss_dump10;
    grant UNLIMITED TABLESPACE to epss_dump10;
    grant read,write on directory PIC_IMP_DIR to epss_dump10;
    connect epss_dump10/&EPSS_DUMP_PASSWD
    @EPSS_DUMP_SCHEMA.sql
    spool off;
    exit;
    I don't understand why the first two work and not the last one.
    I already tried moving the third ACCEPT to the top of the script, without any success.
    I had thought that, since I first connect as SYSDBA to ask for the passwords and then connect as another user (meaning another session), the script would have lost the variable values. But no, since I'm now requesting the second password after the second connection.
    Advice is welcome.
    Thanks and regards.

    Hi,
    I just made some extended tests.
    Code :
    SET ECHO OFF;
    spool epss_dump_schema.log;
    ACCEPT EPSS_DUMP_PASSWD char PROMPT 'Enter EPSS_DUMP Schema password: ' hide;
    ACCEPT EPSS_PASSWD char PROMPT 'Enter EPSS Schema password: ' hide;
    ACCEPT PIC_DIRECTORY char PROMPT 'Enter The Directory where the PIC dump files will be stored:' ;
    prompt &EPSS_DUMP_PASSWD;
    prompt &EPSS_PASSWD;
    prompt &PIC_DIRECTORY;
    The variables have the right information.
    Now creating the users works without any problem.
    If I first connect with the first user (EPSS_DUMP) and then connect directly with the other one, that will work.
    Code :
    SET ECHO OFF;
    spool epss_dump_schema.log;
    ACCEPT EPSS_DUMP_PASSWD char PROMPT 'Enter EPSS_DUMP Schema password: ' hide;
    ACCEPT EPSS_PASSWD char PROMPT 'Enter EPSS Schema password: ' ;
    ACCEPT PIC_DIRECTORY char PROMPT 'Enter The Directory where the PIC dump files will be stored:' ;
    prompt &EPSS_DUMP_PASSWD;
    prompt &EPSS_PASSWD;
    prompt &PIC_DIRECTORY;
    CREATE user epss_dump10 IDENTIFIED BY &EPSS_DUMP_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE user epss10 IDENTIFIED BY &EPSS_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE OR REPLACE directory PIC_IMP_DIR AS '&PIC_DIRECTORY';
    GRANT connect,resource TO epss_dump10;
    GRANT dba TO epss10;
    GRANT connect,resource,JAVAUSERPRIV TO epss_dump10;
    GRANT CREATE DATABASE LINK TO epss_dump10;
    GRANT CREATE PUBLIC SYNONYM TO epss_dump10;     
    GRANT CREATE SYNONYM TO epss_dump10;
    GRANT CREATE TABLE TO epss_dump10;
    GRANT CREATE VIEW TO epss_dump10;
    GRANT UNLIMITED TABLESPACE TO epss_dump10;
    GRANT READ,WRITE ON directory PIC_IMP_DIR TO epss_dump10;
    connect epss_dump10/&EPSS_DUMP_PASSWD
    PROMPT 'EPSS PASSWORD: &EPSS_PASSWD';
    connect epss10/&EPSS_PASSWD;
    Now if I run an additional script after I'm connected with the first user and then connect with the second user, the connection fails with a wrong login/password error.
    Code :
    SET ECHO OFF;
    spool epss_dump_schema.log;
    ACCEPT EPSS_DUMP_PASSWD char PROMPT 'Enter EPSS_DUMP Schema password: ' hide;
    ACCEPT EPSS_PASSWD char PROMPT 'Enter EPSS Schema password: ' ;
    ACCEPT PIC_DIRECTORY char PROMPT 'Enter The Directory where the PIC dump files will be stored:' ;
    prompt &EPSS_DUMP_PASSWD;
    prompt &EPSS_PASSWD;
    prompt &PIC_DIRECTORY;
    CREATE user epss_dump10 IDENTIFIED BY &EPSS_DUMP_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE user epss10 IDENTIFIED BY &EPSS_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE OR REPLACE directory PIC_IMP_DIR AS '&PIC_DIRECTORY';
    GRANT connect,resource TO epss_dump10;
    GRANT dba TO epss10;
    GRANT connect,resource,JAVAUSERPRIV TO epss_dump10;
    GRANT CREATE DATABASE LINK TO epss_dump10;
    GRANT CREATE PUBLIC SYNONYM TO epss_dump10;     
    GRANT CREATE SYNONYM TO epss_dump10;
    GRANT CREATE TABLE TO epss_dump10;
    GRANT CREATE VIEW TO epss_dump10;
    GRANT UNLIMITED TABLESPACE TO epss_dump10;
    GRANT READ,WRITE ON directory PIC_IMP_DIR TO epss_dump10;
    connect epss_dump10/&EPSS_DUMP_PASSWD
    @EPSS_DUMP_SCHEMA.sql
    @COMPILE_INVALID_OBJECTS.sql
    PROMPT 'EPSS PASSWORD: &EPSS_PASSWD';
    connect epss10/&EPSS_PASSWD;
    My script EPSS_DUMP_SCHEMA.sql is just a script to create tables with constraints, synonyms, views, packages and so on.
    Nothing seems strange.
    Well, the only strange thing is that if you call a script inside the main script, it seems that all the variables lose their values.
    Code :
    SET ECHO OFF;
    spool epss_dump_schema.log;
    ACCEPT EPSS_DUMP_PASSWD char PROMPT 'Enter EPSS_DUMP Schema password: ' hide;
    ACCEPT EPSS_PASSWD char PROMPT 'Enter EPSS Schema password: ' ;
    ACCEPT PIC_DIRECTORY char PROMPT 'Enter The Directory where the PIC dump files will be stored:' ;
    prompt &EPSS_DUMP_PASSWD;
    prompt &EPSS_PASSWD;
    prompt &PIC_DIRECTORY;
    CREATE user epss_dump10 IDENTIFIED BY &EPSS_DUMP_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE user epss10 IDENTIFIED BY &EPSS_PASSWD DEFAULT tablespace users TEMPORARY tablespace temp;
    CREATE OR REPLACE directory PIC_IMP_DIR AS '&PIC_DIRECTORY';
    GRANT connect,resource TO epss_dump10;
    GRANT dba TO epss10;
    GRANT connect,resource,JAVAUSERPRIV TO epss_dump10;
    GRANT CREATE DATABASE LINK TO epss_dump10;
    GRANT CREATE PUBLIC SYNONYM TO epss_dump10;     
    GRANT CREATE SYNONYM TO epss_dump10;
    GRANT CREATE TABLE TO epss_dump10;
    GRANT CREATE VIEW TO epss_dump10;
    GRANT UNLIMITED TABLESPACE TO epss_dump10;
    GRANT READ,WRITE ON directory PIC_IMP_DIR TO epss_dump10;
    connect epss_dump10/&EPSS_DUMP_PASSWD
    @EPSS_DUMP_SCHEMA.sql
    PROMPT 'EPSS PASSWORD: &EPSS_PASSWD';
    PROMPT 'EPSS_DUMP PASSWORD: &EPSS_DUMP_PASSWD';
    PROMPT 'PIC DIRECTORY: &PIC_DIRECTORY';
    The three prompts return:
    &EPSS_PASSWD
    &EPSS_DUMP_PASSWD
    &PIC_DIRECTORY
    Thanks for your help.
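    A way to narrow this down (a diagnostic sketch, not a confirmed fix): with no arguments, the SQL*Plus DEFINE command lists all substitution variables currently set, so comparing the list before and after the nested call shows whether EPSS_DUMP_SCHEMA.sql, or something it runs, is discarding them. A SET DEFINE OFF (or the older SET SCAN OFF) inside a called script matches the symptom of '&EPSS_PASSWD' being echoed literally afterwards; an UNDEFINE would instead make SQL*Plus prompt for a value again.
    -- Before the nested call: list the substitution variables currently set
    DEFINE
    @EPSS_DUMP_SCHEMA.sql
    -- After the nested call: if the variables are still listed here but
    -- &EPSS_PASSWD prints literally below, substitution was switched off
    -- inside the nested script (SET DEFINE OFF / SET SCAN OFF)
    DEFINE
    SET DEFINE ON
    PROMPT EPSS PASSWORD: &EPSS_PASSWD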

  • Zero values not taken into account during retraction

    Hello,
    we are having an issue with data retraction.
    When we reset a value to zero, the change in the value is not taken into account and the corresponding value in R/3 remains unchanged.
    There are some posts that say that the function modules UPR_COST_PLAN_EXEC and UPR_COST_PLAN_INIT should be modified in order to populate <xth_data> with zero values, but debug mode shows that the data are read before calling these functions, and the zero values are
    already ignored at this stage.
    Could anyone help me, please?
    Thank you in advance!

    Here is the second method:
    Regarding your issue:
    Retraction is based on the records of the SEM buffer. Zero records are in general filtered out in the buffer, so they cannot be considered for planning functions that execute the retraction. I regret that this is the normal system behaviour in the standard. As a workaround, please use a dummy key figure that is set to a value <> 0.
    The only way is a workaround. If you fill a dummy key figure with an arbitrary value, the framework recognizes this as not 0, even if the proper key figure is 0. In that case the zero retraction will work. So, if it does not exist yet, create an additional dummy key figure in the cube and proceed as described.
    For all of the retractions (CO-OM, IM, PS), BPS planning functions are used and they run in the general BPS framework. This framework is designed to explicitly not process zero records, for performance reasons.
    If zero records were processed, planning functions would run longer, but they cannot deal with those records anyway. Therefore development decided on the current design: delete all zero records first, before processing them with a planning function. This decision has been discussed several times before and is not likely to be changed.
    Regarding the SAP note: the workaround I explained can be found in OSS note 768822 "SEM-BPS: Retraction of WBS element w/ Pushback".
    I hope this explains the reason.
    Please kindly confirm the message if your issue is answered.
    Thanks for your support and cooperation.

  • Index not taken into account

    Dear experts,
    I have the following problem. Even though table VEPO has the necessary index (by unvel), it is not taken into account in my select statement:
            SELECT SINGLE *
              INTO wa_vepo_line
              FROM *vepo
             WHERE  unvel = wa_vepo_line-venum .
    The index is standard, is called "A: Selection by subordinate shipping unit", and it is certainly activated.
    What might be wrong?
    Thanks in advance for your help.
    Roxani

    Hi,
    The system picks the primary key or an index at runtime; it is not possible to predict which one it will pick.
    Try to add VEPOS to your WHERE condition.
    Avoid SELECT *; specify the required fields if possible (see the sketch after this reply).
    Regards,
    Madhu
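    A minimal sketch of that advice (the extra field VEMNG is only an example; keep whichever fields you actually need). Listing fields instead of * reduces the data transferred and, if the listed fields are covered by the index, can allow index-only access; it does not by itself force the optimizer to use the UNVEL index.
    SELECT SINGLE venum vepos vemng      " field list instead of SELECT *
      FROM vepo                          " transparent table, not *vepo
      INTO CORRESPONDING FIELDS OF wa_vepo_line
      WHERE unvel = wa_vepo_line-venum.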

  • Complaint with ref. to invoice in CRM - Copy control not taken into account!

    Hi all,
    We are facing problems when replicating return orders (business object Complaint)  with reference to invoices in ERP.
    The copy control in ERP is not taken into account. Does anyone know if it is possible to check the copy control when replicating the complaint to ERP with reference to the invoice?
    The return order is being replicated correctly and the link to the invoice is shown (so, the flow is correct).
    But:
    1.- The manual pricing conditions in the invoice in ERP are not being copied in the return order.
    2.- Some data, such as the 'Incoterms', is being redetermined from the customer master data and is not being copied from the invoice.
    3.- Data related to the material is also being redetermined when the order is replicated.
    The copy control is not being taken into account and we can replicate return orders from CRM to ERP with reference to invoices for which the copy control is not maintained in ERP...
    Please help...
    Thanks in advance!
    Yolanda

    Try something like this in place of the line ls_input_field_names-fieldname = 'ZZDEP2PVNDPGM'.
    ls_input_field_names-objectname = 'ORDERADM_H'.
    ls_input_field_names-ref_handle = '100'.
    INSERT ls_input_field_names INTO TABLE lt_input_field_names.
    Edited by: Kevin Alcock on Aug 10, 2009 4:03 PM

  • When a message arrives, the message tone sounds many times. How can this problem be solved? I installed the new version, but the problem is not resolved. Will this be taken into account in a future version?

    When a message arrives, the message tone sounds many times. How can this problem be solved? I installed the new version, but the problem is not resolved. Will this be taken into account in a future version?

    I'm not sure I understand the question; is the message tone going off more than once?
    Turn off the repeat message alert here:
    Settings > Notifications > Messages > Repeat Alert > Never
    Some people have found this does not stop the repeat message tone;
    if you are one of those, then I suggest you contact Apple:
    http://www.apple.com/feedback/iphone.html
    They may not respond, but will hopefully fix it in an update if enough people complain.

  • CK11N  Follow-up material not taken into account when costing

    We are running the standard cost estimate with quantity structure, and we have detected that the follow-up material information is not taken into account when determining the bill of material that will be used to calculate the standard cost.
    The SAP helpdesk answered me the following:
    "The situation in product cost planning is we do not check availability of any material stocks, and therefore we cannot provide the same functionality as MRP. You will have to create a new BOM that includes
    the follow-up material or apply the modification to take into account the follow-up part."
    Please, if anyone knows of a user exit we could use to develop something to fix this, we would appreciate it very much. It is a normal process and we cannot say it does not work.
    Regards, and waiting for any solution.
    Yolanda.

    Dear,
    Are you getting these reservations in MD04?
    Check the planning file entry for these materials in MD21; if it is missing, create it through MD20 and take an MRP run with planning mode 3 (Delete and recreate) and processing key NETCH.
    Also check that the firmed receipt is within the rescheduling horizon. The rescheduling horizon is set in Customizing for the MRP group (transaction OPPR, table T438M). A default value can be predefined in the Customizing of plant data (transaction OPPQ, table T399D). Define a longer duration for it.
    The firmed receipt (the MRP element) participates in the rescheduling check. You can set which MRP elements participate in the rescheduling check in the Customizing of the plant data (Transaction OPPQ).
    Else you can carry out modification A using the MD_CHANGE_MRP_DATA BAdI and the CONSIDER_RESB method without making a modification.
    Regards,
    R.Brahmankar

  • GR processing time not taken into account for exception message in MD04

    Hello Gurus!
    In MD04, when we toggle between displaying at GR date and at availability (AV) date, sometimes there is a shortage in the AV date view but none in the GR date view, due to the GR processing time.
    How can we have the GR processing time taken into account so that an exception message is shown for the shortage in the AV view?
    Thanks in advance!
       -Alvin

    Hi there,
    What do you mean by shortage?
    AV - available date
    GR - available date + GR processing time
    If you have not set up the GR processing time in the material master, it may come as zero days, so the AV and GR dates may be the same.
    If this is not your query, please provide more details.
    Thanks
    Senthil

  • Profiles are not taken into account with DIS

    Hi,
    I created several profiles in the content management.
    They work correctly.
    Now, I have to use Desktop Integration Suite.
    The installation worked correctly.
    The problem is, when a user saves and closes the document, the "check-in" screen is the standard one.
    So, all the metadata are displayed on the screen (nearly 40; it is becoming a mess for the users to know which ones to fill).
    I would like to use the profiles to reduce the metadata (as is done for check-in and search in the content server).
    How can I do it?
    Has anybody encountered this problem?
    Any help is welcome.
    Kind regards
    Pierre

    Hi Jason,
    I understood my mistake (but I installed the patch).
    The profile is correctly taken into account.
    But the trigger of my profile is the document type.
    With DIS, I don't know how to transfer metadata before the "standard check-in" page opens.
    Once the check-in page is opened, if I change the document type, the profile is not activated.
    Do you know if it is possible:
    - To transfer metadata to the content server that could be used to determine the profile, for example by defining properties on the Word document?
    - To change profiles by changing the value of the trigger used for the profile?
    For the moment, the only trick I've found is to create a "server" per value of the trigger and to put the document type as data to be remembered.
    At that moment, the profile is correctly loaded.
    But this solution is not very user-friendly. Do you see another one?
    Pierre
