Deployment Optimization

Hi experts,
We are trying to obtain a solution from the deployment optimizer such that whenever a specific transportation lane is utilized (i.e., a deployment stock transfer is created on that lane), a fixed transportation cost based on the distance traveled is incurred. This logic should be valid for all transportation lanes. The underlying reasoning is that whether a single phc or 1,000 phcs are transported, the same transportation cost has to be incurred.
It would have been ideal if transportation costs could depend on the number of trucks used on a specific lane for a specific deployment stock transfer. However, since we do not know the number of trucks before running TLB, we cannot perform such an optimization within deployment. So we decided to ignore the number of trucks and stay with a fixed cost, based on the distance traveled, for all units transferred.
We tried to use discrete optimization with a transportation cost function and a means-of-transport cost. Yet we cannot see these transportation costs as part of the total costs in the objective function, even though the optimizer's input log includes the defined cost values.
We would appreciate any comments on this issue. Thanks in advance.
~isil
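What is being described is, in optimization terms, a fixed-charge cost per lane: the cost is incurred once if any quantity flows on the lane, regardless of the quantity. A minimal sketch of that objective contribution (plain Python with hypothetical lane names and cost values, not the actual SNP cost model):

```python
def total_lane_cost(flows, fixed_cost):
    """Fixed-charge lane cost: each lane's cost is incurred once if any
    quantity flows on it, independent of whether 1 phc or 1000 phcs move."""
    return sum(cost for lane, cost in fixed_cost.items() if flows.get(lane, 0) > 0)

# Hypothetical lanes with distance-based fixed costs
fixed_cost = {("PLANT", "DC1"): 500.0,    # e.g. a 500 km lane
              ("PLANT", "DC2"): 1200.0}   # e.g. a 1200 km lane

# The cost is the same whether 1 unit or 1000 units move on the lane
print(total_lane_cost({("PLANT", "DC1"): 1}, fixed_cost))     # 500.0
print(total_lane_cost({("PLANT", "DC1"): 1000}, fixed_cost))  # 500.0
```

In a mixed-integer model this is usually expressed with a binary "lane used" variable linked to the flow by a big-M constraint, which is why discrete optimization (rather than a purely linear per-unit cost) is needed for such a cost to show up in the objective.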

We decided to follow a different logic.

Similar Messages

  • Rules for Deployment Optimization

    Hi ,
    While defining a deployment profile, we have to define Fair Share and PD rules. I just wanted to know their significance in Deployment Optimization, since as far as I understand, Deployment Optimization is done only on the basis of costs.
    Reply needed urgently.
    Thanks & regards
    Kunal

    user1073781 wrote:
    What about the following rules for good SQL usage and query optimization imposed by the DBA of my company?
    My, where to begin? I assume this is for the World's Most Hopeless DBA competition?
    >
    1) Don't use referential integrity but implement it applicationally. This for easy tables administration
    Because it is real easy to deal with logically corrupted data I guess.
    >
    2) Don't specify schema name in the queries, but only use table name. The table name must be unique on the entire Oracle instance.
    It doesn't matter if it is unique you won't find it without the schema name, unless public synonyms are used and these should be avoided.
    >
    3) Don't use select *.
    Sometimes valid, sometimes not.
    >
    4) Don't use ROWNUM in the where condition
    Which will exclude being able to execute top-n or pagination queries, unless the intent is to return all, possibly billions of rows and throw away all except ten for example.
    >
    5) Don't create too much indexes because they make worse performance of the insert,update and delete operations
    Define too much.
    >
    In my humble opinion, rule 1) is completely wrong: why use a relational DBMS without relationships between tables?
    Agreed.
    >
    What do you think?
    Most of the others make no sense either.

  • Change in Deployment Optimization Profile

    Currently, in the Deployment Optimization Profile, Deployment Parameter view, the SNP checking horizon is 4 days.
    If this parameter (SNP checking horizon) is changed to 7 days, what will be its effects on the current scenario:
    advantages and disadvantages?
    Kindly highlight your views/ comment on above scenario.
    Thanks & Regards
    Shashikant Salunkhe

    Please go through the F1 help for SNP checking horizon. You will find the answer to your question there.

  • Deployment Optimizer - Modeling question

    I am new to the deployment/optimizer area. I have modeled a scenario as described below,
    and have a few questions on what I would like to happen during deployment.
    Central DC: CDC1, Regional DCs: RDC1 & RDC2
    Product gets deployed from Central CDC1 to RDC1 & RDC2.  I have customer demand and forecasts at all the three locations. 
    I have defined the ATD Receipts category group as just stock on hand (category CC), and ATD Issues includes AY, BH, BI,
    BM, BR, EB, etc. These two category groups are assigned to the location master.
    I have specified Fair Share Rule "A" in the Product master and location dependent penalty costs in SNP1 tab.
    In the Deployment Optimizer profile I have mentioned that the Distribution is based on Lowest Costs for both Supply Shortage
    and Supply Surplus.  I have defined a Pull Depl. Horizon of 10 days, and blank for Push Deployment and SNP Checking Horizon.
    When I run the deployment with all the three locations, what I notice is that the Customer orders and forecast of the Central DC are
    satisfied before deploying anything to the Regional DCs.
    My requirement is that the Central DC should deploy its available quantity equally among the Central and Regional
    DCs on a fair-share basis: sales orders first, then the remainder of the stock deployed, again among the Central and
    Regional DCs on a fair-share basis, for the forecast, with safety stock last.
    How can I achieve this equal deployment among the Central and Regional DCs even though the Central DC is the source location?
    Could somebody let me know how to model this requirement?
    Thanks,
    Venkat

    Hi Venkat,
    I am trying to model the exact scenario, but the ATD quantity is not being equally split; the whole chunk is being sent to one of the destination locations. Can you please let me know how you have set up the cost models?
    Regards

  • Deployment Optimizer - Equal weightage for all the locations in the network

    Hello All,
    I am having difficulty in assigning equal weightage for all the locations in the deployment network.  My scenario is as described below:
    Deployment Source location: LOC1
    Destination locations: LOC2, LOC3
    I am running Heuristics to plan LOC2, LOC3 & LOC1, followed by
    a Deployment Optimizer run at LOC1 to deploy to LOC2 & LOC3.
    The ATD quantity at LOC1 is always calculated after satisfying the requirements at LOC1, and the remainder is deployed to LOC2 & LOC3 based on the cost settings. This seems to be the case even if I don't run the heuristics at LOC1 and start the deployment optimizer after running the heuristics at LOC2 and LOC3.
    The need at this client place is that all three locations are equally important even though LOC1 is the only deploying source location.
    This applies to all demand categories, i.e. sales orders and forecast, at all three locations.
    I would appreciate it if anybody who has done this kind of configuration before could let me know how to model it.
    Thanks,
    Venkat

    Have you followed all of these checks?
    If you think that you have already followed all the steps, but the value is not as per your configuration, you can raise a ticket with SAP about it.
    But I suggest you double-check your computation first.
    To determine a vendor's price level, the system compares the vendor's effective price with the market price for the material.
       1. The system first checks whether the buyer has maintained a market price for the material or the material group.
       2. If not, the system calculates the market price which is equal to the average of the effective prices for all vendors supplying this material. Prices from purchase orders and prices for subcontracting are dealt with separately.
       3. The system then applies the effective price for the vendor from the conditions.
       4. The vendor's effective price is then compared with the market price and the percentage variance determined.
       5. The system then assigns a score to the variance in accordance with the settings made in Customizing.
       6. This score is valid for the material, that is, at info record level. Since the score the vendor receives for a subcriterion is based not on an individual material, but on the total of all the materials he supplies, the following steps are necessary:
       7. The system repeats the comparison between effective price and market price for each of the vendor's materials.
       8. The system calculates an average from the sum of the scores determined. This average represents the vendor's score for the subcriterion Price Level.
    Edited by: w1n on Apr 19, 2010 2:35 PM
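    The steps above amount to a small computation. A rough sketch (plain Python; the vendor prices and score ranges are hypothetical stand-ins for the actual Customizing settings):

```python
def variance_pct(effective_price, market_price):
    """Percentage variance of the vendor's effective price vs. the market price."""
    return (effective_price - market_price) * 100.0 / market_price

def score_for_variance(pct, ranges):
    """Map a variance percentage to a score using Customizing-style ranges.
    `ranges` is a list of (upper_bound_pct, score) checked in ascending order."""
    for upper, score in ranges:
        if pct <= upper:
            return score
    return 0  # worse than all maintained ranges

# Hypothetical data: market price = average of all vendors' effective prices
prices_by_vendor = {"V1": 95.0, "V2": 100.0, "V3": 105.0}
market = sum(prices_by_vendor.values()) / len(prices_by_vendor)  # 100.0

ranges = [(-5.0, 100), (0.0, 80), (5.0, 60), (10.0, 40)]  # hypothetical Customizing
score_v1 = score_for_variance(variance_pct(prices_by_vendor["V1"], market), ranges)
print(market, score_v1)  # 100.0 100
```

    The vendor's overall Price Level score would then be the average of such scores across all of the vendor's materials, as step 8 describes.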

  • Deployment Optimizer - software developer

    Dears,
    could you please help me find any documentation related to the Deployment Optimizer's (or simply the Optimizer's) mathematical engine? If you have anything, please share. The only thing I have heard is that it was developed by ILOG.
    I need everything describing the Optimizer's mathematics: mathematical algorithms, methods and so on.
    Thank you in advance,
    Regards,
    Kirill Nepomnyashchiy
    Edited by: Kirill Nepomnyashchiy on Mar 2, 2009 3:56 PM

    Hi kirill,
    Thanks for your confirmation. You can also make use of the links below:
    http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=/rzajq/rzajqjoinoptalg.htm
    Optimisation algorithm
    http://wiki.services.openoffice.org/wiki/Optimization_Solver
    Optimisation solver
    http://www.mosek.com/fileadmin/products/5_0/tools/doc/html/pyapi/node009.html#182968292776
    Optimisers for problems
    Hope these links will be very useful to you.  Please confirm.
    Regards
    R. Senthil Mareeswaran.

  • Optimization profile for Deployment optimizer. 3.0 -- 5.1

    To APO guru
    We are going to 5.1 directly from 3.0, and I am stuck setting up the optimizer profile.
    In the deployment optimizer, one of the fields is the optimizer profile, which is a customized setting we are using.
    After the upgrade to 5.1, the field value did not come through via transport.
    When I tried to set up this value, I was surprised that it is totally different, even in its attributes.
    So I am struggling to set it up. Could anyone who has a field mapping between 3.0 and 5.1 for the optimization profile help me?
    Regards, Junu

    Are you able to change it manually?
    Or you may have to write a custom program that takes care of your requirement.

  • Difference between SNP Optimizer and Deployment Optimizer

    Hi,
    Can anyone please list down the difference in the planning method for a deployment optimizer and SNP Optimizer?
    Thanks & Regards,
    Sanjog Mishrikotkar

    Hi Sanjog,
    First of all, if we understand the difference between an SNP heuristic planning run, which finds the source of supply with dates and quantities, and a deployment planning run, which CONFIRMs the supply, then it is easy to understand the difference between the SNP and Deployment Optimizers.
    The Optimizer, as you know, optimizes based on costs and objectively tries to MINIMIZE them. While the SNP Optimizer finds the most cost-effective source of supply with dates and quantities, identifying where in the supply chain it is better to store or move the product, the Deployment Optimizer generates the best way to CONFIRM whether the supply can ACTUALLY be made over the next few days. Deployment precedes the TLB run, in which, after confirmation, we put the quantities on a transport load to build orders for execution (shipping).
    Both are cost-based and use the same cost information; however, one plans and the other confirms. During a deployment optimization run, the optimizer may decide to confirm the supply from a different source than what the SNP Optimizer planned, based on the available-to-deploy stock quantities and the cost of confirming the supply. The Deployment Optimizer also applies fair-share and push/pull rules and looks at the push and pull deployment horizons, which the SNP Optimizer cannot. The difference is also in the planning time range: you plan SNP supply for the mid to long term, while deployment looks at confirming the supply in the next few days from TODAY.
    So, in short, first understand the difference between the SNP heuristic and the deployment heuristic, and apply the same principle to cost-based optimization. This should tell you the difference between the two.
    Try read this ...  First para on Deployment Optimizer ...
    http://help.sap.com/saphelp_scm50/helpdata/en/1c/4d7a375f0dbc7fe10000009b38f8cf/frameset.htm
    Read the first paragraph as well as the 'Distribution Based on Lowest Costs' section.
    Hope you find this answer useful. Reward points if it is.
    Regards,
    Ambrish Mathur

  • Does Deployment Optimizer need a separate server?

    We just finished blueprinting and came to the conclusion to use SNP heuristics and the Deployment Optimizer. Since we are not using the SNP Optimizer, my question is: do we need a separate server?
    I did not find any Quick Sizer document for the Deployment Optimizer; however, I could see one for the SNP Optimizer.
    Any help would be appreciated.
    Thanks,
    C.A.

    Any Optimiser (SNP / Deployment / PPDS / TPVS / CTM) will need an Optimiser server connection, as the Optimiser routines are .exe files requiring Windows-based hardware with sufficient main memory (RAM).
    The Deployment Optimiser is part of the SNP Optimiser engine.
    Somnath

  • Deployment Optimizer - Is Category EF recommended to be in ATR?

    Hi All,
    We are using deployment optimizer. Our ATR category group does not include the distribution receipts confirmed (Category EF).
    The deployment receipts planned within the deployment optimization are not considered further for deployment.
    I need your help in finding out whether other companies configure ATR in the same way, or whether this is a mistake from our design team.
    Regards,
    Zeeshan.

    Hi,
    It depends on the business requirement. In your case, I assume you are also using TLB. You cannot actually deploy products from one location to another without some kind of stock, confirmed production, or TLB-confirmed quantity. If you consider 'EF' as a deployable quantity, you may face a problem. Say you have 100 units of a product in category 'EF' at location 'B'. 'B' is supplied from location 'A', and another location 'C' receives goods from 'B'. Can you supply all 100 units from 'B' to 'C'? You may not, because you may find that only 80 units were actually supplied from 'A' to 'B' as TLB-confirmed, so you actually have only 80 units at 'B' (received from 'A').
    So I think ATR is well configured.
    Thanks,
    Satyajit
    Edited by: Satyajit Patra on Jan 19, 2010 9:40 AM

  • Few basic questions on deployment

    Hi Experts,
    I have never worked on deployment. Hence have some few basic/conceptual questions.
    1. Can I run just deployment (heuristics/optimiser) without running SNP planning?
    2. Does a deployment run change/modify the SNP orders created? If yes, then what are the parameters that the deployment run changes (order date, qty, from/to location, etc.)? Does the deployment run also create new/fresh orders?
    3. What is the difference between deployment heuristics run and deployment optimiser run?
    4. What is the disadvantage of having just the SNP planning run and no deployment runs?
    Regards
    Manotosh

    Hi Manotosh,
    Please find my responses below
    1. Can I run just deployment (heuristics/optimiser) without running SNP planning?
    Any distribution-demand-based deployment (pull deployment, or push strategies that deploy based on demand) needs a demand propagation run like the heuristic, or a planning run like the optimizer. If you are using push strategies that do not consider demands (like push by quota), you do not need a preceding heuristic run. The deployment optimizer does not need a heuristic run. Real-time deployment does not need a heuristic run.
    2. Does a deployment run change/modify the SNP orders created? If yes, then what are the parameters that the deployment run changes (order date, qty, from/to location, etc.)? Does the deployment run also create new/fresh orders?
    There are 3 modes available in deployment (/sapapo/SNP02)
    DO NOT CHANGE mode - This is just a simulation of deployment results; orders are not created.
    REDUCE mode - Heuristic orders are reduced to the extent of the orders confirmed by deployment. Unconfirmed heuristic orders are not deleted.
    DELETE mode - Unconfirmed heuristic orders are deleted.
    The deployment heuristic only confirms heuristic orders (some exceptions are push deployment with the push-by-quota strategy, as explained above). Real-time deployment and the deployment optimizer can create fresh orders.
    3. What is the difference between deployment heuristics run and deployment optimiser run?
    The deployment optimizer works based on costs and tries to generate the cost-optimal solution for stock transfers.
    It is similar to the SNP optimizer, but it does not create purchase requisitions or planned orders, only stock transfers. The deployment optimizer does not require a preceding heuristic run.
    The deployment heuristic does not work based on costs but on the actual distribution demand propagated by the heuristic, the target stock requirement, and the pull/push/fair-share strategies.
    4. What is the disadvantage of having just the SNP planning run and no deployment runs?
    The SNP planning run (heuristic) is unconstrained planning: stock transfers are created to the extent of the net requirements without considering stock/capacity availability at the source, so the plan cannot be executed as-is. A process like deployment is required to check how much of the heuristic orders can actually be executed, considering executable receipts.
    The optimizer does not require deployment as such, because it considers available receipts (constrained planning); you can directly execute optimizer results. In some scenarios you may need deployment and optimizer runs together.
    For example, if you take weekly optimizer runs which create stock transfers / planned orders, but you want to move stocks every day based on actually completed production, then you can run deployment every day based on the receipts actually available.
    Regards,
    Ashok
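    The three deployment modes described above can be sketched for a single heuristic order (plain Python; a simplified illustration, not the actual /sapapo/SNP02 logic):

```python
def apply_deployment_mode(mode, heuristic_qty, confirmed_qty):
    """Return (confirmed, remaining_unconfirmed_heuristic_qty) for one order.
    Simplified model of the three deployment run modes."""
    confirmed = min(confirmed_qty, heuristic_qty)
    unconfirmed = heuristic_qty - confirmed
    if mode == "DO_NOT_CHANGE":
        return 0, heuristic_qty        # simulation only: nothing is created
    if mode == "REDUCE":
        return confirmed, unconfirmed  # unconfirmed remainder is kept
    if mode == "DELETE":
        return confirmed, 0            # unconfirmed remainder is deleted
    raise ValueError(mode)

# Heuristic planned 100, deployment can confirm only 80
print(apply_deployment_mode("REDUCE", 100, 80))  # (80, 20)
print(apply_deployment_mode("DELETE", 100, 80))  # (80, 0)
```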

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    Throttling data speeds for the top 5% of new users, and
    Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
    I believe that modifying the contents of the file in this manner should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as add-ons to their browsers which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that these type of privacy problems will occur.
    However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, that the cache will treat them the same, even if they differ later on in the file.
    Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for the fact that they have edits later in the file, that some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored at completely separate servers, as Verizon's explanation states that the cataloguing process caches videos the same based on the 8KB analysis even if they are from different URLs.
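    The mixed-up-video risk follows directly from keying the cache on a prefix of the file. A toy sketch (plain Python; a hypothetical cache design based on the published description, not the actual implementation):

```python
import hashlib

CACHE = {}

def cache_key(video_bytes):
    """Key the cache on the first 8 KB only, per the description."""
    return hashlib.sha256(video_bytes[:8192]).hexdigest()

def fetch(video_bytes):
    """Return the cached copy if the first 8 KB matches; else cache and return."""
    key = cache_key(video_bytes)
    if key not in CACHE:
        CACHE[key] = video_bytes
    return CACHE[key]

# Two different videos that share their first 8 KB
original = b"\x00" * 8192 + b"ORIGINAL ENDING"
edited   = b"\x00" * 8192 + b"EDITED ENDING"

served  = fetch(original)  # caches the original
served2 = fetch(edited)    # collides: the user gets the ORIGINAL, not the edit
print(served2 == original)  # True
```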
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about their future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be impacted by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

    A very well written and thought-out article. And you're absolutely right: this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
    Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, but this was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. ATT has already leapt, Verizon has said they will, too.

  • Penalty costs in the optimizer

    I have a requirement in my project where sales orders should have a higher priority than the forecast. I have defined the delay penalty for sales orders as 1 per unit per day and the delay penalty for the forecast as 0.01 per unit per day. I have defined the maximum delay as 45 days for both the sales orders and the forecast. This is how I have set up all the products.
    Does the above setting ensure that the sales orders are always satisfied before the forecast is
    satisfied by the deployment optimizer? Could there be a scenario where the forecast is satisfied before the sales orders with the above penalty structure?
    Thanks in advance.
    Regards,
    Venkat
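    One thing worth noting: per-unit-per-day delay penalties aggregate over both quantity and days, so a sufficiently large delayed forecast can outweigh a small sales order even at a 100:1 rate ratio. A worked comparison (plain Python, hypothetical quantities):

```python
def delay_penalty(qty, days_late, rate_per_unit_per_day):
    """Total delay penalty as it would be summed into the objective."""
    return qty * days_late * rate_per_unit_per_day

# Rates from the post: sales orders 1, forecast 0.01 (per unit per day)
small_sales_order = delay_penalty(qty=10, days_late=5, rate_per_unit_per_day=1.0)
large_forecast = delay_penalty(qty=20000, days_late=5, rate_per_unit_per_day=0.01)

# The aggregate forecast penalty dominates the small sales order's penalty
print(small_sales_order < large_forecast)  # True
```

    So the rate ratio alone does not, in principle, guarantee strict priority when quantities differ greatly; keeping the customer-demand penalty costs distinctly higher than the forecast penalties matters.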

    Hi venkat,
    You must maintain distinctly higher penalty costs in the SNP 1 tab for customer demand than for demand forecast.
    But I do not understand the statement "sales orders should always be fulfilled before forecast". If you are talking about the same location, what difference does it make?
    If you are saying that there is a forecast for one product and a sales order for another, both manufactured on the same resource, and that the product with the sales order should be produced in preference to the product with the forecast, then the above solution should work.
    Regards,
    Nitin Thatte

  • APO Deployment pushing stock in pull scenario

    Hello experts,
    Right now, I am having an issue with my client regarding the deployment optimizer results. Due to some business reasons, it is necessary to keep the stock in the plant for a certain group of products, instead of deploying it to the distribution centers. The only time the stock can be deployed to the DC is when there is a demand.
    The current network is 1 plant and 3 DCs. Push distribution field is setup as blank (pull rule), the pull horizon is set to 10 days and the fair share rule is setup as B.
    I have already tested these scenarios (several times with different values):
    Non delivery penalty higher in the plant than in the DC:
    Product A – plant NDP 2,000,000
    Product A – DC 1 NDP 1,000,000
    Product A – DC 2 NDP 1,000,000
    Product A – DC 3 NDP 1,000,000
    RESULT: the stock is deployed to the DC even when there is no demand at the DC.
    Storage cost higher in the DC
    Product A – plant SC  1
    Product A – DC 1 SC 100,000,000
    Product A – DC 2 SC 100,000,000
    Product A – DC 3 SC 100,000,000
    RESULT: the stock is deployed to the DC even when there is no demand at the DC.
    If pull horizon is extended, the stock will be kept in the plant only if there is demand in the plant, if there is no demand, the stock is deployed to the DC (tested with previous cost definitions).
    Something that is important to know is that the max stock level in the plant and in the DCs is set to 1. This is because these products normally shouldn't have available stock. Max stock level is a soft restriction (no cost attached to it), so it shouldn't matter if it is exceeded at the plant (yet with the result I am getting, the stock level is being exceeded at the DC).
    What I find confusing about this whole situation is that when I review the deployment optimizer log, the optimizer seems to accept the costs implied in deploying the stock to the DC, even when they are far higher than those of keeping the stock in the plant.
    Has anyone faced this same issue before? Any ideas on how this can be configured in the system using the deployment optimizer based on costs?
    I would really appreciate your help.
    Thanks in advance.
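    When debugging a trade-off like this, it can help to total the objective contributions by hand for the two candidate solutions. A rough sanity-check sketch (plain Python; it assumes a per-unit-per-day storage cost and ignores transport and penalty costs, which the real optimizer also weighs):

```python
def holding_cost(qty, days, storage_cost_per_unit_per_day):
    """Storage contribution to the objective for holding qty units for days."""
    return qty * days * storage_cost_per_unit_per_day

qty, days = 100, 10
keep_at_plant = holding_cost(qty, days, 1)            # plant storage cost 1
deploy_to_dc = holding_cost(qty, days, 100_000_000)   # DC storage cost 100,000,000

# Keeping the stock at the plant should be vastly cheaper on storage alone,
# so if the optimizer still deploys, some other cost or constraint dominates.
print(keep_at_plant < deploy_to_dc)  # True
```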

    Hi Kenichi,
    An STO is not generated by the SNP heuristic. You can only generate STRs (stock transport requisitions) with an SNP heuristic run.
    You can generate STRs either by running the SNP heuristic on the manufacturing plant (location B in your case), or by running the network SNP heuristic on location A.
    Then you can convert the generated STRs to STOs either in APO or in R/3. To do it in APO, you have to run deployment, followed by a TLB run. To do it in ECC, you may have to use transactions like ME59 for the conversion of STRs to STOs.
    Thank you,
    Santosh KB.

  • Explain 'Additional Storage Consumption for a Material' in Optimizer Log

    Can someone please explain what the values in the 'Additional Storage Consumption for a Material' column in the Deployment Optimizer log mean? The values are negative.
    Thanks.
    Sandy

    Hello Sandy,
    I can see that this thread is quite old, but let me try to provide you some help.
    Please go through note 579373. In the attachment of this note there are contact names of OPT experts.
    They may help you with this question.
    Thanks and regards,
    Michel Bohn
    SCM-APO forum moderator.
