Last Standard Process Date issue

What happens is that we need to un-terminate employees during payroll. We have tried changing the Last Standard Process date to one month past the termination date, but this still doesn't work; it may be a workflow issue. For example, if they terminate an employee on Aug 17th, they set the Last Standard Process date to Aug 31st, and when they run payroll on Sept 4th, payroll tries to route a time card for approval but cannot do so.

I noticed you said the 'third party payroll' can't process the final timecard. Is this an issue with that system, or with the interface from OTL to it, rather than an issue with OTL itself?
I think that OTL will not let you enter any hours after the employee has left. But if there is a timecard for any days up to the 20th (using your example), then these should be processed OK within OTL.
It is possible to enter elements directly via Element Entry or BEE between the termination date and the Last Standard Process (LSP) date if you have set the LSP to a later date on the termination record. In this case, the assignment is not ended, but is set to 'Terminate Assignment' status.
Hope this helps!
Regards
Tim

Similar Messages

  • I want to run an SSIS package every day, based on the last time the job processed data

    Hi all,
    I have an SSIS package that needs to run every day, so we schedule a job for it. When the package runs, it has to pull data based on the last processed date, and I use the msdb database to get that date. The job is not scheduled yet; I am about to do that for the first time.
    Inside the package I am using this query in an Execute SQL Task:
    SELECT CASE WHEN DateDiff(Day, CONVERT(DATETIME, LEFT(MAX(H.run_date), 8)), GetDate()) = 0 THEN 30
                ELSE DateDiff(Day, CONVERT(DATETIME, LEFT(MAX(H.run_date), 8)), GetDate())
           END AS Dates
    FROM dbo.sysjobhistory AS H
    INNER JOIN dbo.sysjobs AS J ON H.job_id = J.job_id
    WHERE H.run_status = 1
    If the job has already run, this works fine; if the job has not been scheduled yet, it gives an error.
    pandiyan

    Try this one:
    SELECT COALESCE(a.Dates, b.fallBack) AS dates, a.name
    FROM (
        SELECT 30 AS fallBack
    ) b
    LEFT OUTER JOIN (
        SELECT CASE WHEN DateDiff(Day, CONVERT(DATETIME, LEFT(MAX(H.run_date), 8)), GetDate()) = 0 THEN 30
                    ELSE DateDiff(Day, CONVERT(DATETIME, LEFT(MAX(H.run_date), 8)), GetDate())
               END AS Dates,
               J.name
        FROM dbo.sysjobhistory AS H
        INNER JOIN dbo.sysjobs AS J ON H.job_id = J.job_id
        WHERE H.run_status = 1
        GROUP BY J.name
    ) a
    ON b.fallBack = b.fallBack -- intentionally always true: keeps the fallback row even when the job has no history yet
    You might want to consider filtering on j.name or j.job_id to get the info for the specific job.
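    For example (a sketch; 'MyNightlyLoad' is a hypothetical job name - substitute your own), the filter would go inside subquery a:
        WHERE H.run_status = 1
          AND J.name = 'MyNightlyLoad'  -- hypothetical job name
        GROUP BY J.name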

  • How to update the Final Process Date in the Termination screen in Oracle HRMS

    Hi Gurus,
    I have a requirement where I need to update the Final Process Date. Is there any standard API to update the Final Process Date? I know there is an update API for the termination details like Actual Termination Date and Last Standard Process, but what about the Final Process Date?
    E.g.: for one assignment the Final Process Date is 31-MAR-2013, and as of now the person is an ex-employee in the system, terminated on 31-DEC-2012. Now I want to prepone the Final Process Date to 31-JAN-2013.
    Any replies on this are highly appreciated.
    Thanks in advance.

    I would reverse the termination, then terminate the employee again with the right FPD:
    hr_ex_employee_api.reverse_terminate_employee
    hr_ex_employee_api.actual_termination_emp
    hr_ex_employee_api.final_process_emp
    The following might also work:
    hr_periods_of_service_api.update_pds_details
    Please note that in your case there should be no payroll results after the January payroll.
    On a side note, you can also terminate the employee without setting the FPD, and set it only when approved by the business - that way you can avoid the prepone.
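    A minimal sketch of moving the FPD with hr_ex_employee_api.final_process_emp (the parameter names below are from memory and can vary between releases - check the package specification in your instance; the period_of_service_id is a placeholder):
        DECLARE
          l_pds_id   NUMBER := 12345;  -- placeholder: the person's period_of_service_id
          l_ovn      NUMBER;
          l_fpd      DATE := TO_DATE('31-01-2013', 'DD-MM-YYYY');
          l_org_warn BOOLEAN;
          l_asg_warn BOOLEAN;
          l_ee_warn  VARCHAR2(80);
        BEGIN
          -- fetch the current object version number (locking convention of the HRMS APIs)
          SELECT object_version_number INTO l_ovn
          FROM per_periods_of_service
          WHERE period_of_service_id = l_pds_id;

          hr_ex_employee_api.final_process_emp(
            p_validate                   => FALSE,
            p_period_of_service_id       => l_pds_id,
            p_object_version_number      => l_ovn,   -- IN OUT
            p_final_process_date         => l_fpd,   -- IN OUT (assumed)
            p_org_now_no_manager_warning => l_org_warn,
            p_asg_future_changes_warning => l_asg_warn,
            p_entries_changed_warning    => l_ee_warn);
          COMMIT;
        END;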

  • Significance of Final Process Date

    Hi All,
    Can somebody tell me why we have the Final Process Date when the payroll of the employee is not going to run beyond the Last Standard Process?
    Regards,
    Gayatri

    Hi Guys,
    It's not odd behaviour of the system; it all depends on how you have defined your element.
    In the element definition screen there is a "Termination" section with three possible values: Actual Termination, Final Close and Last Standard Process.
    The element behaves according to this setting once the employee has been terminated. If it is configured as LSP (the default), the element will be processed on or before the LSP date.
    For a Bonus, it would be better to set it to the Final Close date.
    Hope this helps
    Cheers
    Arun

  • M:N relationships within a dimension: Standard process vs. BI Data model

    Hi,
    I just completed a review of the u201CMulti-Dimensional Modeling with BIu201D from this link:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84 and I have a quick question here:
    On page 36 of this link, the author noted that
    u201CAccording to the standard process, color should be in the master data table for material, like material type. But this is not possible because the material is the unique key of the master data table. We cannot have one material with multiple colors in the master data table.u201D
    i.e. my understanding is that, based on Standard Process it is NOT possible to place two characteristics in M:N relationships in the SAME dimension but with BI Data Model, this, the author points out  is possible
    u201Cdue to the usage of surrogate keys (DIM-IDs) in the dimension tables allowing the same material several times in the dimension tableu201D  i.e. although material is the unique key of the dimension.
    1.
    What is being referred here as u201CStandard Processu201D since document is on u201C u2026 modeling with BIu201D?
    2.
    It goes on to discuss u201CDesigning M:N relationships using a compound attributeu201C as a solution to the M:N relationship in a dimension.
    What is the need to address this problem with compound attributes if characteristics in M:N relationships within a dimension, such as material and color, are not a problem in BI Data Model?
    3.
    Can you help explain the underlined cautions of the following guidelines for compound attributes (with examples if possible please):
    u201CIf you can avoid compounding - do it!
    Compound attributes always mean there is an overhead with respect to:
    Reporting - you will always have to qualify the compound attributes within a query
    Performance
    Compounding always implies a heritage of source systems and just because it makes sense within the
    source systems does not necessarily mean that it will also make sense in data warehousing.u201D
    Thanks

    Hi Amanda,
    In a dimension table, any number of semantically related dimension attributes are stored in a hierarchy (a parent-child, i.e. 1:N, relationship). If an M:N relationship exists between dimension attributes, they are normally stored in different dimension tables.
    I checked the document.
    1. What is being referred to here as "Standard Process", since the document is on "... modeling with BI"?
    The first thing I want to tell you is that the diagram shown there is the Classic Star Schema, since the Extended Star Schema never stores master data in dimension tables; it stores master data in separate master data tables, and nowadays the Classic Star Schema is obsolete. A dimension table only stores the dimension ID and SIDs.
    Now, the standard process is that anything which describes a master data record can be added as an attribute of that master data. Suppose Employee is the master data; then phone number can be one of the attributes of this master data. That is the standard process, but it cannot be followed every time; why is already explained above.
    2. What is the need to address this problem with compound attributes if characteristics with M:N relationships within a dimension, such as material and color, are not a problem in the BI Data Model?
    Because we use a compounding characteristic to define the master data uniquely, and we load the compounding characteristic separately, independently of the master data (a compounding characteristic has its own separate master data tables). So the problem is resolved.
    3. Can you help explain the underlined cautions of the following guidelines for compound attributes?
    For a compounding characteristic you have to load the compounded master data separately, which is an overhead. Moreover, during query execution two tables will be accessed, which may cause a performance issue. Performance can be affected when compounded characteristics are used extensively, particularly when a large number of characteristics are included in a compounding. In most cases, the need to compound is discovered during data modeling.
    Regards,
    Debjani

  • Last Goods Issue Date & Last Goods Receipt Date

    Hi,
    I would like to know in which table (and fields) I can find the "Last Goods Issue Date" and "Last Goods Receipt Date" based on material code, plant and storage location. I have to prepare an existing-stock report for slow-moving materials.
    Thanks.

    Hi,
    Check tables LIKP (delivery header) and LIPS (delivery item).
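    As a sketch of that suggestion (assuming outbound deliveries; LIKP-WADAT_IST is the actual goods movement date, and LIPS carries material, plant and storage location - verify the field names in your system):
        SELECT p.MATNR AS material,
               p.WERKS AS plant,
               p.LGORT AS storage_location,
               MAX(h.WADAT_IST) AS last_goods_issue_date
        FROM LIPS AS p
        INNER JOIN LIKP AS h ON h.VBELN = p.VBELN
        GROUP BY p.MATNR, p.WERKS, p.LGORT
    Note that outbound deliveries cover the goods-issue side; the last goods-receipt date would come from the corresponding inbound documents.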
    Regards,
    Prashant

  • Date issue in processing billing documents that shipped in a prior period

    Date issue in processing billing documents that shipped in a prior period that is now closed:
    SAP values those deliveries with the current document date in A/R when it should be the original delivery date. The baseline date needs to stay the delivery date, but it is getting copied from the billing date in the new period. A user exit exists (and needs to be applied) to manipulate the baseline date when the accounting document is being created.
    Any input is appreciated.

    Hi
    Try the Invoice Correction Request concept:
    Sales document type: RK
    Reference document type: Billing Number
    To understand the Invoice Correction Request, check the link below:
    [Creating Invoice Correction Requests|http://help.sap.com/saphelp_46c/helpdata/en/dd/55feeb545a11d1a7020000e829fd11/content.htm]
    Regards,
    Prasanna

  • Master Data Issue

    Hi BW Gurus,
    I have an issue with a standard master data DataSource in the BW system. On the R/3 side they have made modifications to the existing structure, and based on those modifications they want to see the values; but when we load the data, the values do not come through (we load master data daily through process chains). The strange thing is that when we do a full load manually with the same InfoSource, the values are uploaded and the users can see them.
    The DataSource is 0CUST_COMPC, and the modification they made is a remapping of customers to the sales rep, which impacts the Collector. In BW we are facing the issue that the change of collector is not updated in BW.
    Why is this field not considered in the deltas?
    Is there any OSS note that supports pulling this field into BW? Please suggest.
    Can anyone help me on this issue? Your help will be appreciated in advance.
    Thanks,
    Ganni

    Dear Venkat,
    Once you have added the new fields on the BW side, you have to re-initialize the delta.
    Also, I don't think this master data changes very frequently, as the data comes from KNB1; that may be why the delta is not coming for those fields. But you have to re-init again to get the deltas working properly after enhancing KNB1.
    Hope it helps.
    Thanks,
    Krish

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart: what are the prerequisites for configuring it, and what are the steps to do it?
    Also, I need some input on the BAM database. What is the frequency of data upload? Is it a data update or an insert in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below to provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote, called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of the processes load and execution times. And this gives us an idea of how the business has developed since the moment the projects have been put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In the following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as the BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to the standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added, as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access to the BAM tables through the J2EE connection pool instead of through a dedicated JDBC connection. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured, the database and the data structure have to be created. This is performed on the “Manage Database” page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link on the “Process Data Mart” page, and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section - along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the Generation of O3 Cubes must be off if this service is not used, otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select the ‘Generate O3 Cubes’ option, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
    -     For Unix like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements will be a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse business variables. This is achieved by mapping similar Business Variables of different projects to a single real variable (in the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG and a java.additional.n (where n is an integer consecutive to the ones already used) has to be set to -ea to enable asserts, since without this option no DEBUG message is going to be generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
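    As an illustration of caveats a and b, a WarehouseService.conf fragment adjusted for troubleshooting might look like this (a sketch; existing property numbering and values vary by installation, so 'additional.3' assumes additional.1 and additional.2 are already in use):
        wrapper.console.loglevel=DEBUG
        wrapper.logfile.loglevel=DEBUG
        # -ea enables asserts; without it no DEBUG messages are generated
        wrapper.java.additional.3=-ea
        # keep more log files while in DEBUG (the default holds about two days)
        wrapper.logfile.maxfiles=10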
    Hope this helps,
    Dan

  • In Oracle RAC, if a node is evicted while a user's SELECT query is still fetching data, how does failover to another node happen internally?

    In Oracle RAC, if a node is evicted while a user's SELECT query is still fetching data, how does failover to another node happen internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.
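    For reference, a minimal sketch of a tnsnames.ora entry with TAF enabled for SELECT failover (the host and service names here are hypothetical):
        ORCL_TAF =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
              (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
            )
          )
    With TYPE = SELECT, the client keeps track of how many rows have already been fetched and, after reconnecting, discards that many rows before returning new ones to the application.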

  • Custom report - MB51 - EBELN, AUFNR, KDAUF & KUNNR - data issues

    Hi,
    This custom report is related to the material document list (MB51).
    I was trying to retrieve Purchase Order (EBELN), Order Number (AUFNR), Sales Order (KDAUF) and Ship-to Party (KUNNR) from MSEG and MKPF.
    Unfortunately, it looks like for sales orders it picks up the customer number but no order number, and for STOs it picks up the STO number but no customer number.
    How can I overcome this?
    FYI, I also tried adding the fields to the standard report (MB51) and it behaves in the same manner.
    Am I pulling from the wrong tables, or is it related to data issues?
    Regards
    Praveen
    Message was edited by:
            PRAVEEN s

  • Standard Process for Credit Memo & Debit Memos in CRM

    Dear all,
    What is the standard process for Credit Memo and Debit Memo creation within the CRM Complaint transaction?
    We can successfully replicate Return Items and Replacement Items into an ECC Return Order,
    but I don't know how to process Credit Memos and Debit Memos in ECC.
    What accounting implications take place once we create and release the credit memo / debit memo in the CRM Complaint transaction?
    How do we see the entire document flow?
    I can create a Credit Memo for a returned line item in CRM without any errors, but I could not see any follow-up transaction or process either in CRM or ECC.
    Kindly educate me in this regard. Your suggestions are highly appreciated.
    Best regards
    Raghu ram
    Edited by: Raghu Ram on Jun 23, 2010 6:29 PM
    Edited by: Raghu Ram on Jun 25, 2010 8:30 AM

    Hi Suchi,
    The following would be very useful to you.
    To reduce implementation time for print forms development, SAP has created a set of the most commonly used forms - preconfigured Smart Forms:
    SD: Invoice, Quotation, Contract, Delivery Note, Order Confirmation, Scheduling Agreement, Inquiry, Cash Sales, Picking List
    MM: Purchase Order, Request for Quotation, Contract, Delivery Schedule, Goods Issue (3 scenarios), Goods Receipt (3 scenarios)
    FI: Dunning Notice
    All of these forms can easily be adapted to your requirements, saving time and money.
    Hope this information has been useful to you.
    If you would like a Smart Form of your own, i.e. a Z or Y form, there is an option to copy a SAPscript form onto a Smart Form. The SAPscript form in this case is F140_DOCU_EXC_01.
    Go to transaction SMARTFORMS, put the required Smart Form name in the Form field, then go to the menu bar: Utilities -> Migrate SAPscript Form, and type in the script name given above (i.e. F140_DOCU_EXC_01).
    I hope this will be an amicable solution.
    If helpful, please reward.
    Thanks
    Venugopal

  • Last Goods Movement Date wise report

    Hi MM Experts,
    Is there any report through which we can find the last goods movement date (goods receipt / goods issue date)?
    I have tried MC.2, but that report shows the material, plant and storage location descriptions; I want the codes as well, so that we can understand it easily.
    regards
    anubhav

    Hi,
    To get the plant and material codes with descriptions in the MC.2 transaction, please follow the steps below:
    On the output screen, go to Settings -> Characteristics Display -> and select 'Key and Description' or 'Key Only'.
    Regards,
    Shailesh Mackwan

  • Function to return a value of a object based on last record by date

    Post Author: Tned
    CA Forum: Desktop Intelligence Reporting
    Can anyone assist with a method/formula to return the value of an object based on the last record by date, in BO 5 or XI? See the example below. The query structure is not an issue; I only need help with the last-record function or aggregate.
    Data Table

        ID / Serial #   Date       Status   Condition
        Abc1            01/01/08   1        A
        Abc1            01/02/08   1        Z
        Abc1            01/02/08   3        Z
        Abc1            01/04/08   2        D
        Abc1            01/05/08   5        E
        Abc2            01/01/08   1        F
        Abc2            01/02/08   2        Z

    Desired query results (only the values of the latest record per ID are returned):

        ID / Serial #   Status   Condition
        Abc1            5        E
        Abc2            2        Z

    Post Author: Tned
    CA Forum: Desktop Intelligence Reporting
    Thanks Prashant.
    However, when I add either the Status or Condition variables to the report, all lines related to the ID are returned, not just the last entry by date.
    Thanks again, Terry
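    At the database level, the usual pattern for this is to join back to the latest date per ID (a sketch; the table and column names DATA_TABLE, ID, REC_DATE, STATUS, CONDITION are hypothetical stand-ins for your universe objects):
        SELECT d.ID, d.STATUS, d.CONDITION
        FROM DATA_TABLE d
        INNER JOIN (
            SELECT ID, MAX(REC_DATE) AS max_date
            FROM DATA_TABLE
            GROUP BY ID
        ) last ON last.ID = d.ID
              AND last.max_date = d.REC_DATE
    If two rows share the latest date (as Abc1 does on 01/02/08), both are returned, so a tie-breaker may be needed.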

  • Vendor-wise last purchase/GRN date

    Hi Experts,
    Please advise if any standard report is available for the vendor-wise last purchase/GRN date for a particular purchasing org or plant.
    Regards

    Hi,
    There is no standard report for your requirement.
    You have to create a query report.
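    A sketch of the logic such a query report would need (assuming the purchasing tables: EKKO holds the vendor and purchasing organisation per PO, EKBE the PO history with posting dates, and VGABE = '1' marks goods receipts - verify these assumptions in your system):
        SELECT k.LIFNR      AS vendor,
               k.EKORG      AS purch_org,
               MAX(k.AEDAT) AS last_po_date,
               MAX(h.BUDAT) AS last_gr_date
        FROM EKKO AS k
        LEFT JOIN EKBE AS h ON h.EBELN = k.EBELN AND h.VGABE = '1'
        GROUP BY k.LIFNR, k.EKORG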
    Regards,
    Sami
