Documentation on Post Processing Framework (PPF)

Hi Experts,
Can anyone direct me to some good documentation on the Post Processing Framework (PPF) functionality in SAP TM? Information on how the functionality can best be used together with SAP ECC would also help.
Thanks a lot in advance.
Regards,
Hari

Dear Hari,
Post Processing Framework (PPF) functionality can be used to process the printing of documents related to transportation. The main print documents in SAP TM are bills of lading, freight/forwarding order documents, labels, pro forma customer freight invoices, and pro forma supplier freight invoices.
Please find below the URL for the help documentation on this topic.
http://help.sap.com/saphelp_tm70/helpdata/en/1f/52e49dd95644eea0fec10e686f591e/frameset.htm
I hope this will help you.
Regards
Datta

Similar Messages

  • Post Processing Framework

    Hello Everyone,
    I am new to NetWeaver 2004s and am working on an outbound interface through IDoc generation. I don't know much about the new "Post Processing Framework" concept, but I understand that it closely resembles the regular output determination technique.
    If someone could help me with any materials for a better understanding of the concept, as well as a working template for the same, I would be highly obliged.
    Looking forward to hearing from you all.
    Thanks & Rgds,
    Soumya

    Hi Soumya,
    PPF configuration is basically meant for processing the messages on the shipment document in GTS.
    There are a couple of steps that have to be followed:
    1. Define Techn. Medium for Msgs (PPF Actions) f. Cust. Shipm
    In an action profile, we define all of the permitted actions. We also define general conditions for the actions contained in the profile. For example, we can define the way in which the system performs an action (by method call or by Smart Forms).
    In an action profile we define the different actions; they could be for EDI or print, as required.
    Each action definition is associated with a processing type, which is where the action is actually carried out: the form and the processing method are defined here to process the message.
    The action profile is then assigned to the customs document type.
    2. Define Conditions and Output Parameters for Comm. of Cust.Sh
    Here we define the parameters for messages, such as printer and spool settings, as required.
    3. Define Messages for Communication Processes
    Here we assign the defined action definition to the message. Since the action definition is associated with a processing type, the message will be processed according to that processing type.
    Hope this helps in better understanding.
    Kind Regards,
    Sameer

  • Changing user for PPF - Post Processing Framework

    Hello,
    Is there a way to change the user for a Post Processing action once the trigger is activated?
    There are times when we want a user action to kick off a PPF task, but we do not want the user to have the security to complete this extra task themselves. Also, the PPF task may create an activity document where we want a specific user ID to be shown.
    I believe the answer is no, but I was wondering if anyone else out there was able to do this or had other ideas.
    Thanks.
    Mike

    Thank you, PK!
    That helped us look at our system, and we do see a logical RFC connection that we set up six years ago for workflow when we went live. So that did help us some.
    If we set up this logical connection, do you know how we can tie the ABAP RFC that we set up in BRF to use that logical connection?
    Thanks.
    Mike

  • Any documentation on post-processing capabilities of Adobe LiveCycle?

    Hi,
    As part of a product evaluation, can anyone advise whether Adobe LiveCycle supports post processing as a common correspondence management feature? If so, please help by providing a learning document on the same.
    Thanks and regards,
    Mayank

    Yes, post-processes can be used with the CM solution during evaluation.
    Go through my blog at http://blogs.adobe.com/santosh/2012/03/25/how-do-i-create-post-process-for-correspondence-management/ ; it also has a link to Adobe's online help docs.

  • Cl_manager_ppf; PPF(Post Processing Framework)-Manager

    Hi all,
    I selected from table UDM_CCT_REL to get the customer contacts that have promises to pay. The requirement is: using the selected data from table UDM_CCT_REL, I have to call the PPF manager to create a correspondence form. My problem is that I have no idea what the PPF manager is or how to use it. Could I request some sample code, or anything else that could help me? Thanks a lot.

    Hi Fabien,
    Yes, that's right; if you are processing actions via the selection report, then this really is a matter of performance.
    For this, SAP has provided the option of using the field OPTIMIZATION RULE when scheduling jobs through report RSPPFPROCESS.
    This field picks up only those actions which are relevant as of that date. Please go through the note:
    653159 - Using optimization rules in the PPF
    /Hasan
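    To come back to the original question about sample code for the PPF manager: the usual pattern is to obtain the manager singleton via CL_MANAGER_PPF=>GET_INSTANCE( ) and let it determine the actions for a PPF context. The outline below is a sketch under assumptions, not a definitive implementation: the application name and context name are placeholders, and the method names and parameters should be verified in SE24 and in the PPF customizing (transaction SPPFCADM) of your system.

    REPORT zppf_trigger_sketch.

    DATA: lo_manager TYPE REF TO cl_manager_ppf,
          lo_context TYPE REF TO if_context_ppf.

    " Get the PPF manager singleton.
    lo_manager = cl_manager_ppf=>get_instance( ).

    " Obtain a context for your application. The 'Z...' values below are
    " hypothetical - look up the real application and context names in
    " SPPFCADM for your scenario.
    lo_context ?= cl_context_manager_ppf=>get_context(
                    ip_appl         = 'ZAPPL'         " hypothetical
                    ip_context_name = 'ZCONTEXT' ).   " hypothetical

    " Let the manager determine and schedule the actions configured for
    " this context (Smart Form, method call, etc.).
    lo_manager->determine( io_context = lo_context ).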

  • Parallel Processing framework using package BANK_PP_JOBCTRL

    Hi All,
    I am analyzing the different parallel processing techniques available in SAP and need some input on the parallel processing framework in package BANK_PP_JOBCTRL.
    Can someone please let me know if you have any documentation available on this framework?
    I have a couple of questions on this framework, as mentioned below.
    1) This framework was developed as part of the SAP Banking solution. Is it possible to leverage it for other modules in SAP, since it is now part of the SAP_ABA component?
    2) What are its benefits over other techniques such as asynchronous Remote Function Call (aRFC)?
    Any input on this will be of great help, since there is very little documentation available on this topic on the net.
    Regards/Ajay Dhyani

    Hello,
    Apologies, I never saw this thread and your query, and I worked it out myself around the time I posted it. If you are still interested, here are some inputs for you.
    Within package BANK_PP_JOBCTRL you will find these objects; I have mentioned the use of each as well.
    RBANK_PP_DEMO_GENERATE_DATA: creates the business data for parallel processing.
    RBANK_PP_DEMO_CREATE_PACKMAN: creates packages out of the business data.
    RBANK_PP_DEMO_START: processes the data in parallel.
    RBANK_PP_DEMO_RESTART: re-processes records that failed during parallel processing.
    You will need to call the above in your report program in the same sequence as shown, based on your requirements. I used only the first three.
    To generate events, you will need to execute report RBANK_PP_GENERATE_APPL (via SE38) to create the application; this generates the function modules with the numbers shown below.
    Events: the framework automatically triggers various events during the execution of the start program. Each event is associated with a custom function module which contains the business logic.
    To implement this framework, at least the methods mentioned below should be implemented.
    0205 - Create Package Templates: this method contains the logic for creating packages, which in turn decides the data to be processed in parallel. This function module is called in a loop, and the loop ends only when the exporting parameter E_FLG_NO_PACKAGE is returned with the value 'X' to the parallel processing framework.
    1000 - Initialize Package: this method is the first step in processing a package. It fetches all the parameters required for parallel processing to start. All the parameters are passed to this FM as importing parameters, and it is the responsibility of this FM to save them in global parameters so that they can be used by the parallel processing framework.
    1100 - Selection per Range: this method is used to read the data for a package. The objects selected must be buffered in global data areas of the application for later processing. The package information is stored as an interval in global parameters, and this information is used to select the package-specific data.
    1200 - Selection for Known Object List: this method is used instead of method 1100 in a restart run, where the objects to be processed are already known.
    1300 - Edit Objects: the processing logic to be run in parallel for the selected objects is written in this method. This function module implements the business logic.
    Also, you will obviously want to log your messages, and the framework provides macros to do so.
    Let me know if you need further help, as I know there is very little information available on this.
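    As a rough illustration of the calling sequence described above (a sketch under assumptions: the R* demo objects are executable reports, as their names suggest, and the selection-screen variants ZGEN, ZPACK, and ZSTART are hypothetical and would have to be created in your system first):

    REPORT zpp_demo_driver.

    " Sketch only: chains the BANK_PP_JOBCTRL demo reports in the
    " documented order. The variant names are placeholders.
    SUBMIT rbank_pp_demo_generate_data
      USING SELECTION-SET 'ZGEN' AND RETURN.   " 1. create business data

    SUBMIT rbank_pp_demo_create_packman
      USING SELECTION-SET 'ZPACK' AND RETURN.  " 2. build packages

    SUBMIT rbank_pp_demo_start
      USING SELECTION-SET 'ZSTART' AND RETURN. " 3. process in parallel

    In a production setup you would more likely schedule these as background jobs, but the sequence stays the same.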
    Regards/Ajay

  • SAPINST failed step "Install Oracle Database (post processing)"

    I want to install EP 6.0 with Oracle database 9.2.0.4 on Sun Solaris.
    SAPINST failed at the step "Install Oracle Database (post processing)" with the following error.
    The OUI installer (runInstaller) itself finished successfully.
    ERROR 2004-08-31 16:05:17
    CJS-00084  SQL statement or script failed. DIAGNOSIS: Error message:
    ORA-01501: CREATE DATABASE failed
    ORA-01101: database being created currently mounted by some other instance
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
    With the Partitioning option, JServer Release 9.2.0.4.0 - Production. SOLUTION: See ora_sql_results.log and the Oracle documentation for details.
    No user is logged on to the database. The database is not running (it is shut down and unmounted), and no Oracle process is running.
    Please help.
    thanks
    armin hadersbeck

    Hi Armin,
    We're going to install EP 6 Stack 3 with Oracle 9.2.0.4 and Solaris 9. We have exactly the same error as you during sapinst.
    How did you solve it?
    Thanks a lot for your help,
    Regards from Mexico,
    Diego

  • Issue with Bulk Load Post Process Scheduled Task

    Hello,
    I successfully loaded users in OIM using the bulk load utility.  I also have LDAP sync ON.  The documentation says to run the Bulk Load Post Process scheduled task to push the loaded users in OIM into LDAP.
    This works if we run the Bulk Load Post Process scheduled task right away after running the bulk load.
    However, if some time has passed before we run the Bulk Load Post Process scheduled task, some of the users loaded through the bulk load utility are not created in our LDAP system. This creates an out-of-sync situation between OIM and our LDAP.
    I tried to use the usr_key as a parameter to the Bulk Load Post Process scheduled task, without success.
    Is there a way to force the re-evaluation of these users so they would get created in LDAP?
    Thanks
    Khanh

    The scheduled task carries out post-processing activities on the users imported through the bulk load utility.

  • Post Processing Steps after Support Package Stack Application

    I'm curious whether anyone has any guidelines for post-processing (or pre-processing) steps when applying Support Package Stacks to a Development Infrastructure (the Developer Workplace and the central NWDI server). We have just upgraded a couple of developers' local engines and Developer Studio installations from SP stack 14 to SP stack 19 and are experiencing some problems. We also applied the J2EE stack and the appropriate SCA files to the NWDI server.
    After the support packages it looks like our DTR files are gone (when re-importing the configuration via Developer Studio, the SCs are there but there are no DCs inside them). Additionally, it looks like we have to manually re-import the newest versions of SAP_BUILDT, SAP_JTECHS, and SAP-JEE. Another thing is that old local Web Dynpro DCs are now complaining about class path problems and different file versions. We followed the documentation for applying SP19 exactly as described for a Java Development usage type. Does anyone have a checklist of steps to perform after applying the support packages?

    I think I'm missing something. Now I see the code and the DCs inside the DTR. However, when I try to import into NWDS, no DCs come in (the software components exist, but there are no DCs in them). Additionally, the CBS web UI reports that my software components do not contain any DCs, even though I see them in the DTR. What can I look at to determine what I'm missing here?
    Thought I'd add some more info... After applying the support packages, we imported the new SAP_BUILDT, SAP_JTECHS, and SAP-JEE SCAs into our track, as we required some functionality from the newer build SCA. We also re-imported our old archives into the system by manually checking them in, assuming this would fix the problem of not seeing the source in NWDS or the DTR. After the import, the CBS no longer sees our custom DCs, but the DTR does (in both the active and inactive workspaces). When importing the development configuration into NWDS, our custom DCs no longer appear, but SAP's standard SCAs do.
    Message was edited by:
            Eric Vota

  • Pre/post processing

    Hello group!
    Configuration:
    Oracle 8.1.7
    XDK 9.0.0.0.0(beta)
    We use XSQL Servlet scripts (great framework!). We tried to provide a single entry point into our webapp. We want to replace the XSQLServlet class and gain control of our servlet environment (transactions, thread synchronization, logs, ...).
    The problem is that if we replace the XSQLServlet class with our own class (which extends HttpServlet), we can't use XSQLPageProcessor directly, because it is declared private to the XSQLServlet package.
    So we tried to use the XSQLRequest class, constructing an instance with an XSQLServletPageRequest as parameter and then calling XSQLRequest.process().
    It works, but...
    When we attach an object with setRequestObject(), requestProcessed is never called.
    Why? Did we miss something?
    If we use XSQLRequest(URL url), must we handle sessions, request parameters, cookies, ... ourselves?
    Where is the best place to put pre/post-processing of a request?
    Thanks in advance
    Tomi

    We found only XSQLPageRequest.setRequestObject(), which we already tried to use (see the first post in the thread).
    We associate *.xsql with our own HttpServlet because we want to transparently add controller object(s), and we want to be able to turn this feature off in the servlet engine config whenever we want. The same applies if we forward it (getServletContext().getRequestDispatcher().forward()) with the same name and extension ...
    Some code:
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        XSQLServletPageRequest req =
            new XSQLServletPageRequest(request, response, myContext);
        // TransactionController implements the XSQLRequestObjectListener
        req.setRequestObject("transaction", new TransactionController(req));
        XSQLRequest xsqlrequest = new XSQLRequest(request.getRequestURI(), req);
        try {
            xsqlrequest.process(response.getOutputStream(), new PrintWriter(System.err));
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    When we call XSQLRequest.process, the servlet responds just fine (except that we lose the default encoding, windows-1250?).
    TransactionController is properly attached, but requestProcessed is never called.
    We found out why. Maybe?
    XSQLRequest.process calls createNestedRequest on XSQLServletPageRequest, and then setIncludingRequest.
    This seems like a little overhead :)
    We just want to keep our XSQL scripts clean of controller includes via custom actions on every page.
    Can we do that?
    Anybody tried something similar?
    Thanks for your time
    Tomi

  • BPS Post Processing Question

    Dear All,
    We have an EXIT function which resides in one planning level and the planning layout in another planning level.
    The EXIT function calculates the numbers correctly; it creates internal table XTH_DATA and passes it to function module UPF_PARAM_EXECUTE. When we check XTH_DATA, the data records are exactly as we need them to be generated.
    The issue we are having is that, although the data in XTH_DATA looks perfect when the EXIT function has completed, the data saved to the InfoCube is not exactly the same as what was created. For example, the EXIT creates data for 2007/001 - 012, but the resulting data in the InfoCube contains many entries for fiscal periods 2006/001 through 2006/006. The posting period is left blank by the EXIT function, yet the resulting data has posting periods of blank, 1 through 6, etc.
    We use the request ID to display the data and to make sure the data source is correct.
    Planning level for EXIT function data model:
    Characteristics are:
    Assortment Type
    Base Unit
    Climate Zone
    Concept
    Department
    Distribution Channel
    Fiscal year
    Fiscal year/period
    Fiscal Year Variant
    Hierarchy ID
    Lifestyle
    Local currency
    Plng Area
    Posting period
    Sales Organization
    Version
    Volume Group
    Key Figures are:
    Balanced Receipts
    Number of Stores
    Stock Index
    Any helps, comments, and/or hints are greatly appreciated.
    Best regards,
    Sam

    Highly frustrating. I've got the IPlanetDirectoryPro SSO token being set, but the custom code that adds additional domain cookies... those are not re-written by the cdcservlet, and even if they are hard-coded for testing (I assume I can parse the domain/URI being requested from the HttpServletRequest, but am just testing now) to the 'new' domain, they are sent but discarded by the browser.
    This is bad, as those custom cookies are required by several apps. Is there any documentation on writing a custom cdcservlet, sample code, code for the existing one, or any other means to do this?
    To be clear - 'basic' CDSSO seems to be working, if a request is made to a resource on Domain B, it directs to the AM host, which is in Domain A. The IPlanetDirectoryPro cookie is being set for Domains A and B in this case, and accepted by the browser. That setting I finally found in AMAgent.properties, here:
    com.sun.am.policy.agents.config.cookie.domain.list (in case this helps someone else in the future)
    However, I have post-process code implementing AMPostAuthProcessInterface which was setting custom cookies required by some apps, and I am unable to change the domain these are set in. More accurately, I can change the domain, and the data is sent, but the browser then rejects it, presumably because it sees it as a cross-domain cookie and therefore bad, and discards it.
    This seems to leave me only with trying to use a custom cdcservlet, assuming I can find the existing code (or something similar) to start from, as I have no idea how it avoids the cross-domain cookie issue...
    Anyone?

  • Post processing a report

    The report attributes page has a section for External Processing, with online help that reads:
    "Specify the URL to a server for post processing of a report. See documentation for instructions."
    I couldn't find anything in the documentation about this.
    Can someone from Oracle please explain what this is, with an example?
    Thanks

    Oh man, that's really sad... I've been working on SOA for the last 9 months and am just getting back into APEX. Carl helped me heaps and answered a lot of my questions. I'm an Oracle instructor and have been teaching/using the product since it started. Carl's name was synonymous with APEX...
    I've just checked out quite a few blogs, and it seems he helped many other people as well.
    Thanks, Tony
    Paul P : (
    By the way, I eventually found out about FOP; it's all explained at http://www.oracle.com/technology/pub/notes/technote_htmldb_fop.html and in numerous postings.

  • Pre-processing versus Post-processing Event Handlers

    After looking through the documentation, and a lot of forum posts, I'm still a little unclear as to where custom user-modification updates most typically go... If I want to create a custom handler to, say, transform some data on a user after input on the main OIM form, should that go in the pre- or post-processing event handler? Or, a more basic question: which transactions or types of events would go in pre-processing, and which would go in post-processing?

    As far as I understand, there is no hard and fast rule as to what goes into pre and what goes into post. If you are doing a trusted recon, then there is no pre-process stage with 11g and you have only post; but if you are doing it from the OIM profile page, then you can have either one, and it would depend upon your use case and requirements.
    Generally, post-processing should cover most of your scenarios from the UI, unless you have access policies based on event-handler-derived attributes. If that is the case, then you will have to fine-tune the ordering of the event handlers so that the access policy is the last to be triggered.
    -Bikash

  • OPENSSO PRE/POST PROCESSING ATTRIBUTE FETCHING VIA POLICY AGENT

    Is it possible to apply a filter or post-processing when fetching attributes from OpenSSO using a policy agent? If so, do you know whether the process is documented, and where, or what search criteria I should use to start my search?
    Assume the following attribute (keys) can store multiple values:
    Keys:
    A | 1 (key 1)
    B | 4 (key 2)
    A | 7 (key 3)
    Is there a way to extract only the key value of B | 4, instead of all of the key values (keys 1, 2 & 3)?


  • Audio post processing

    Hello everyone!
    I will start with a question as it goes:
    Is it possible to intercept the PCM stream that is routed by the system to the audio sink (i.e. speakers or headphones) and inject some post-processing on a system-wide basis?
    My current investigation has led me to the following status.
    - There is a system-wide audio destination, the CMMFAudioOutput sink. Maybe it is possible to create another sink that performs the post-processing and then redirects the resulting stream to the original output.
    - There is a set of codecs that are instantiated by the DataPath depending on the FourCCs of the data source and data sink. Maybe it is possible to create a system-wide wrapper that is called instead of any other codec, which internally loads the needed codec and then performs the post-processing before returning the stream.
    - There might be some other place for such a sound hook that I haven't found in the documentation.
    I absolutely understand that such a scenario is a very dangerous thing to allow in an OS for phones. This is what makes me wonder whether this problem is solvable at all. Still, there might be some ways, which would surely involve driver signing and other measures to prevent malicious usage.

    Sorry, but I only use this support forum, so I can't tell you which section of the developer forums to use to get an answer to that question.
    At a guess I would say here:
    http://discussion.forum.nokia.com/forum/forumdisplay.php?f=62 
