COPA vs. BCS design decisions (e.g., profitability by customer in BCS)

We are trying to meet a business goal of identifying gross profit by customer.
We realize that "customer" as a field in BCS is problematic, so we are considering storing only selected customers in BCS, with a catch-all "Others" customer, to keep the BCS data volume reasonable.
Consider this scenario: the US company sells material X (qty 1) to the Spain company for 100 with a cost of 30 (therefore a profit of 70).
Spain sells the same material X (qty 1) to a third-party customer for 120.
From a local perspective Spain profits 20; from an overall perspective, however, the group profits 90 (the US revenue of 100 eliminates against the Spain COGS of 100, so you are left with revenue of 120, COGS of 30, and profit of 90 from the group perspective).
We want to know how to see, on a customer level, the 90 profit from these transactions.
We do not believe COPA can do this; can it be accomplished in BCS?
If you do a "one-sided" elimination of the intercompany revenue (an elimination driven by the revenue side only), the system is not able to reference the customer on the elimination. We are wondering whether this scenario of analyzing overall profit by customer can be accomplished with BCS functionality, and we are particularly interested in which functionality you used to meet this requirement and in what sequence within the BCS close (BCS monitor).
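To make the arithmetic concrete, here is a minimal, purely illustrative sketch in plain Java (this is not BCS or COPA functionality, and the flat record with its intercompany flag is our own simplification) of what we mean by seeing the 90 profit on a customer level: carry the end customer on both legs, eliminate the intercompany pair, and aggregate what remains by customer.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class GroupProfitByCustomer {

        // Hypothetical flat record, not a BCS data model: one line per selling company.
        record Line(String company, String customer, boolean intercompany, double revenue, double cogs) {}

        public static void main(String[] args) {
            List<Line> legs = List.of(
                // US sells material X (qty 1) to Spain: intercompany revenue 100, cost 30
                new Line("US", "CUST_A", true, 100.0, 30.0),
                // Spain sells the same material to the third-party customer: revenue 120, COGS 100
                new Line("Spain", "CUST_A", false, 120.0, 100.0)
            );

            // "Two-sided" view of the elimination in this simplified two-leg chain: the US
            // intercompany revenue of 100 cancels the Spain intercompany COGS of 100, so per
            // customer we keep only the external revenue and the original production cost.
            Map<String, Double> profitByCustomer = new LinkedHashMap<>();
            for (Line leg : legs) {
                double externalRevenue = leg.intercompany() ? 0.0 : leg.revenue(); // eliminate IC revenue
                double groupCogs       = leg.intercompany() ? leg.cogs() : 0.0;    // keep producer cost, drop transfer price
                profitByCustomer.merge(leg.customer(), externalRevenue - groupCogs, Double::sum);
            }

            System.out.println(profitByCustomer); // {CUST_A=90.0} -> revenue 120, COGS 30, profit 90
        }
    }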
Thank you in advance for any input you may have.
Also, we are interested in any opinions/comments anyone may have about design decisions regarding BCS vs. COPA in BW.  BCS business content identifies a sample design for a BCS data model including item, company, movement type, trading partner, functional area, etc.  COPA (as configured in R3/ECC and extracted to BW) commonly features analysis by customer, material, etc.  Considering that BCS features elimination functionality, what design concerns have people faced with respect to the fields they include in both reporting systems?  Obviously a prominent concern is the sizing of the systems, but which common characteristics have people decided to feature in both systems, and what considerations drove the decisions about which common characteristics to feature in both BCS and COPA?

Hi John,
Regarding your last question, there might be useful info in here if you have not seen it yet:
Re: Reports using COPA cube, BCS Cube

Similar Messages

  • JPA Arhitectural Design Decision

    Hi,
    I'm building a 1 tier web shop, using mostly Ajax, Servlets and JPA and I need your advice on a design decision.
    When a user demands to see the products belonging to a particular category of products, my DAO object returns to the servlet a java.util.List<Product>, where Product is a JPA entity. In the servlet class I "manually" create the Ajax XML response, the user gets to see the products, everything is nice and great.
    I am not happy with the fact that the list of products remains detached in the servlet class, so to speak, and when another user demands to see the same products another list gets created. These are objects that have method scope, but still they are on the stack, right? For 100 users who each want to see 100 products, the number of objects created could cause the application to have a slower response time.
    So my question is about the design of the application.
    I obtain the list of products in the servlet class and construct the XML response. Right before sending the response, should I pass the list of products back to the DAO, and ask the EntityManager to merge the products? Will this reduce the no. of objects my application creates? Shouldn't I do this because I'm merging entities that have not been changed and the merge operation is time consuming?
    Should I not pass back the products to the DAO and set each product in the list to reference null and call System.gc() ?
    Keeping in mind, that my main concern is application response time, not reduced development time, are there any other suggestions you can make?

    First of all, a merge is only used to synchronize a changed entity that is not managed by an entity manager with the database. Why did you even come to the conclusion that you might need this?
    No, you don't nullify the entities in the list. You let the entire list go when you are done with it. Manually nullifying can hinder the garbage collector, so don't do it unless you have a very good reason for doing so.
    Your main problem seems to be that you don't like the fact that you are fetching 100 objects for both users, putting duplicate objects in memory on the server. Are you sure this is a problem? You shouldn't be thinking about optimizations while you are still developing, you know. I would wait until you are done, then profile the application to see where the bottlenecks are; if fetching those 100 products turns out to take a lot of system resources, THEN optimize it.
    You may want to look into caching. If, for example, you use Hibernate under the hood as the persistence provider, search for "hibernate cache" using Google.
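    For what it's worth, a minimal DAO sketch along the lines discussed above (the entity and its fields are just placeholders mirroring the thread's example); the point is that a read-only fetch needs no merge() at all:

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import java.util.List;

    // Entity trimmed to the minimum; names are placeholders, not a real model.
    @Entity
    class Product {
        @Id Long id;
        String category;
        String name;
    }

    class ProductDao {
        private final EntityManager em;

        ProductDao(EntityManager em) { this.em = em; }

        // Read-only fetch: the entities simply become detached when the persistence
        // context closes; nothing was changed, so no merge() is needed afterwards.
        List<Product> findByCategory(String category) {
            return em.createQuery(
                    "SELECT p FROM Product p WHERE p.category = :category", Product.class)
                     .setParameter("category", category)
                     .getResultList();
        }
    }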

  • Design decision / purpose / aim of audit trail

    Hi,
    since the audit trail doesn't contain that much data and BAM is great for real-time monitoring, my question is: what is the design decision, or the purpose/aim, of the audit trail?
    What was the main goal of implementing an audit trail? Is it primarily for debugging? To see the flow the process instance has taken?
    Obviously the audit trail isn't the right way to do real-time monitoring, right? So maybe you can tell me why there is an audit trail at all. What was the design decision behind it?
    Greetings
    Mike

    Hi Mike,
    While I am certainly not one of the people who designed it, I think I can answer your question.
    The audit trail is what the name implies - it keeps track of all the steps performed by the process instance. It lets you view the instance history, variable content, etc., and lets you see the current state of an in-flight instance or, to be more exact, the last dehydration point. You can minimize the trail data, or even disable it.
    BAM, however, is real-time monitoring of business or operational data or KPIs. You send data to the BAM engine using sensors, and you only send the data you want to send when you want to send it. If you don't need real-time monitoring with all the fantastic visual features, alerts, etc. of BAM, you can send the same data to a database or JMS server instead and build your own monitoring.
    hth,
    ~ronen
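    To illustrate the do-it-yourself alternative Ronen mentions (send the same data to a JMS destination and build your own monitoring), here is a rough JMS sketch; the JNDI names are invented and would need to match your environment:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class MonitoringPublisher {

        // Publish one piece of monitoring data (for example an XML fragment with the
        // values you would otherwise send to a BAM sensor) to a JMS destination.
        public static void publish(String payload) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MonitoringCF"); // hypothetical JNDI name
            Destination dest = (Destination) ctx.lookup("jms/MonitoringQueue");        // hypothetical JNDI name

            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(dest);
                producer.send(session.createTextMessage(payload));
            } finally {
                con.close();
            }
        }
    }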

  • Questioning design decision in java.lang.Character

    Having a look at the source code for java.lang.Character, I see that the Character class explicitly extends Object.
    I am using jre1.6.0_07 on Windows XP.
    Now the question is: what is the reason for such a decision, given that we all know any class implicitly extends the Object class?
    public final class Character extends Object implements java.io.Serializable, Comparable<Character> {
    }
    Regards,
    Alan Mehio
    London, UK
    Edited by: alan_mehio on 24-Jul-2009 12:31

    I cannot answer with anything but personal intuition, and give non-conclusive details:
    First, this is not a design decision, merely a style decision, since, as you mention, any class implicitly extends java.lang.Object directly if not explicitly extending anything else (and at the bytecode level, the source-level difference is undetectable).
    As far as style is concerned, I would have assumed that the whole JDK team is required to strictly follow consistent rules, but different classes suggest otherwise.
    Sun's public [Code Conventions for the JavaTM Programming Language|http://java.sun.com/docs/codeconv/html/CodeConventions.doc5.html#2991] do not have an explicit rule about this; section +6.4 Class and Interface Declarations+ provides an example with an extends Object clause, but the rule is not explicit in the text, and the previous section 5.2 provides an example without this clause...
    I went on speculating that the developers of the Character class had a special intent in mind, as they override the Object methods equals() and hashCode(), but other classes in the same package do the same without the explicit extends Object clause (Void, System, Number). At that point I gave up trying to find a reason other than the developers' own style...
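    A quick way to convince yourself (a small illustration, not the JDK source): compile these two classes and compare them with javap.

    import java.io.Serializable;

    // Two equivalent declarations: the compiler records java/lang/Object as the
    // superclass in both class files, so "extends Object" is purely a matter of style.
    final class ExplicitSuper extends Object implements Serializable, Comparable<ExplicitSuper> {
        public int compareTo(ExplicitSuper other) { return 0; }
    }

    final class ImplicitSuper implements Serializable, Comparable<ImplicitSuper> {
        public int compareTo(ImplicitSuper other) { return 0; }
    }

    // Running "javap -v ExplicitSuper ImplicitSuper" shows the same super_class entry
    // (java/lang/Object) for both, confirming the source-level difference disappears
    // at the bytecode level.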

  • HELP !!!!!! Design decision...!!!!!

    Hello,
    I am facing a dilemma in making a design decision. We are developing a business-tier component. It is going to talk to web services on the backend. Right now it will integrate with 2 different backend systems through web services; in the future it might support more such backend systems.
    And there are clients (a web app, an XML app) who interface with the component.
    Most of the data elements passed over to the backend systems are similar for both systems, but some are different.
    Now, is it a good design to make 2 different client interfaces for the 2 backend systems, so that clients decide upfront which interface to use? This is a cleaner and easier implementation.
    Or is it better to have a generic interface, with the component then figuring out which data to use and which backend system to talk to?
    Please help,
    Thanks

    There are several patterns that could apply, but the most widely used is probably the MVC (Model View Controller) pattern.
    With that pattern the View layer is the front end (in your case this would be the web app / xml app).
    The Controller would be your middle tier; this layer is responsible for relaying requests from the View layer to the Model layer.
    The Model layer would be your backend webservices.
    As said, the controller is responsible for relaying the requests from the view layer to the correct webservice. This means you need some way of knowing how to do this. You can employ several methods.
    You could have different methods for the different webservices; this is the most straightforward way.
    Or you could look at the provided parameters and decide where you need to go based on that. This is slightly more difficult, but when you have two or more webservices that do almost the same thing, this might be the better way to go.
    If you really wanted to make things fancy, you could employ the second method and have the checks be based on rules you configure through a dynamically loaded file. This way you could (theoretically) build your middle tier in such a way that you can add new front ends / back ends without having to redo the middle tier. This might eventually be the cleanest / best way to go, but it is also the most difficult and takes a lot of planning beforehand.
    Mark
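    If it helps, here is a rough sketch of the second option described above (one generic interface, with the routing decided from the parameters); all names, including the "targetSystem" element, are invented purely for illustration:

    import java.util.Map;

    // One generic client-facing interface; the middle tier decides which backend to call.
    interface BackendGateway {
        String submit(Map<String, String> data);
    }

    class MiddleTierService {
        private final BackendGateway systemA;
        private final BackendGateway systemB;

        MiddleTierService(BackendGateway systemA, BackendGateway systemB) {
            this.systemA = systemA;
            this.systemB = systemB;
        }

        // Clients call one method; the service inspects the payload (here a
        // hypothetical "targetSystem" element) and relays the request to the
        // matching web service adapter.
        String submit(Map<String, String> data) {
            BackendGateway target = "B".equals(data.get("targetSystem")) ? systemB : systemA;
            return target.submit(data);
        }
    }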

  • Landscape Design decision

    hi,
    I have a query regarding a design decision.
    There are 2 applications at the target (each serving a different division) which would receive the master data from ECC via SAP PI.
    The target structure for PI is identical for both applications.
    Now the question is: should I have only 1 application receive the data, or should I connect to both applications using only 1 mapping and 2 different end points?
    regards,
    Anirudh.

    Hi Gaurav,
    Thanks for your reply, but I am aware of this and it is not a PI technical question.
    My question is more from a design perspective.
    If you have 2 applications serving the same business but different divisions, and both can accept similar structures, should we make a separate end point connection to each, or let one become a HUB that shares the data with the other server?
    This way you reduce the end point connections with PI and reduce the development effort. The drawback is that it introduces a point of failure for the other server, which takes its data from the HUB.
    regards,
    Anirudh.

  • Archiving Design Decisions

    Hi Friends,
    I would like to know about the data archiving frequency and the design decisions for data archiving.
    Thanks in advance,
    Chandu.

    Hi Chandu,
    Frequency depends on the nature and residence period of the data. Application data like IDocs, work items and application logs has a short residence time and can be archived very frequently, whereas business data has a long residence period and can probably be archived annually or quarterly.
    The archiving decision depends on the business's legal and technical retention policies.
    Regards,
    Rajnish Pathak

  • I am a sole proprietor business owner (work from home graphic designer) and I have a customer who wants to pay me via Apple pay. Can I accept her transaction? How do I get that rolling?

    Apple Pay: Merchants FAQ - Apple Support

  • Design decision: good to use delete cascade in database?

    Hello,
    Suppose there are two tables: customer and address.
    These two tables are linked as one (customer) to many (address), with delete cascade. Every time the application deletes a customer record, the linked address(es) are deleted as well.
    In my opinion, this is good because of performance and less coding.
    But it is bad for maintenance: since the addresses are implicitly deleted, a developer may not know this in advance if the system is not documented well.
    What is your opinion? What facts will affect your decision to use it or not?

    The design should say that when a customer record is deleted, the linked address records must also be deleted. Otherwise your database loses referential integrity.
    So for me, if the database supported cascading deletes, I would use them. Why would I do extra programming which the database is offering to do for me, especially since my version of the code is likely to be less robust than the database's version? The only exception would be if some other action needed to be taken besides just deleting the dependent records, and that action couldn't be handled within the database.
    As for your comments about maintenance, it's true that if the system is not documented then developers may have difficulty in maintaining it. However I don't exactly foresee designers saying to themselves "Oh, this system isn't well documented so I won't use Feature X".
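    To see the mechanism end to end, here is a small, self-contained sketch (Java/JDBC against an in-memory H2 database, so it assumes the H2 driver is on the classpath; the table layout is invented for the example):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CascadeDemo {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement st = con.createStatement()) {

                st.execute("CREATE TABLE customer (id INT PRIMARY KEY)");
                st.execute("CREATE TABLE address (id INT PRIMARY KEY, customer_id INT, " +
                           "FOREIGN KEY (customer_id) REFERENCES customer(id) ON DELETE CASCADE)");

                st.execute("INSERT INTO customer VALUES (1)");
                st.execute("INSERT INTO address VALUES (10, 1)");
                st.execute("INSERT INTO address VALUES (11, 1)");

                // Deleting the parent removes the dependent addresses automatically.
                st.execute("DELETE FROM customer WHERE id = 1");

                try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM address")) {
                    rs.next();
                    System.out.println(rs.getInt(1)); // prints 0
                }
            }
        }
    }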

  • Design Decision for Workflow

    Hi all,
    When should we go for third-party workflow products, and when should we not? Is choosing third-party workflow products still applicable, given that SP2013 introduced new features for workflow?
    Regards,
    Swati
    SP Page: http://www.facebook.com/SharePointQ SP Blog: http://swatipoint.blogspot.com

    Hi,
    It will depend on the actual requirements.
    “SharePoint Server 2013 brings a major advancement to workflow: enterprise features such as fully declarative authoring, REST and Service Bus messaging, elastic scalability, and managed service reliability”.
    http://technet.microsoft.com/en-us/library/jj219638(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/jj227177(v=office.15).aspx
    If you have some specific requirements, you can try to do the customization with SharePoint Designer 2013 or Visual Studio 2013, or you can check whether there are already suitable products.
    Best regards
    Patrick Liang
    TechNet Community Support

  • Design decision - attributes vs. HashMap

    I want to design a few value objects to hold customer and order information. In a later step these objects should be used by EJBs and their DAOs. I'm facing two design approaches:
    1) The "classical" approach. A serializable class with the corresponding attributes + getters/setters.
    2) A class holding a HashMap. For each "attribute" a key-value pair is added to it.
    Any experiences about usage, performance, maintenance, etc. would be nice.
    Thanks!
    -chris

    Hi.
    You didn't say whether you were still going to have accessors/mutators for the properties in (2). If this is the case then it's just a question of internal representation. If you're not going to have accessors/mutators (instead having put(String name, Object value) and get(String name) methods) then you are almost certainly going to cause yourself a lot of pain in maintenance and it has been my experience that this pain will be felt before you even get a first release out.
    I'm always in favour of a strong object model, even for these little convenience-type things (maybe that's because I consider myself a strong semantic modeller :-). The reason is that Java is strongly typed, and the strong typing makes itself known when you (or worse, the client code) have to cast the thing in the Map; you subvert the type system somewhat when you upcast and then downcast, as you would be doing in (2).
    So we are being held to account by the type system, but we're throwing away the help it would have given us.
    Now, even putting that argument aside for the moment, suppose you refactor your code and the thing in the map is no longer what you're casting it to (this is why it's worse in the client code); that will result in a run-time exception. If instead of (2) you had done (1), a compile-time error would have occurred.
    Clearly the argument above is lessened if you are still using accessors and mutators but then it seems like such a small effort at that point to add fields for the properties.
    I can't say anything about performance as such I'm afraid, other than write clean code, then measure it to see where the bottlenecks are. By 'clean' I would have to include 'semantically strong'.
    Regards,
    Lance
    Lance Walton - [email protected]
    Team In A Box - Software without Tragedy
    http://www.teaminabox.co.uk
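    For what it's worth, a tiny side-by-side sketch of the two options (class and field names are made up); it shows the run-time failure mode Lance describes for the map-based variant versus the compile-time check you get from typed fields:

    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    // Option 1: the "classical" value object -- the compiler checks every access.
    class CustomerVO implements Serializable {
        private String name;
        private int orderCount;

        public String getName()            { return name; }
        public void setName(String name)   { this.name = name; }
        public int getOrderCount()         { return orderCount; }
        public void setOrderCount(int n)   { this.orderCount = n; }
    }

    // Option 2: the map-based variant -- every read needs a cast that can only fail at run time.
    class MapBackedVO implements Serializable {
        private final Map<String, Object> values = new HashMap<>();

        public void put(String name, Object value) { values.put(name, value); }
        public Object get(String name)             { return values.get(name); }
    }

    class Demo {
        public static void main(String[] args) {
            MapBackedVO vo = new MapBackedVO();
            vo.put("orderCount", 3);
            // Compiles, but throws ClassCastException at run time -- exactly the
            // refactoring hazard described above; the typed VO would not compile.
            String broken = (String) vo.get("orderCount");
        }
    }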

  • PetStore doc, domain model, design decisions, etc

    Hello,
    I've downloaded the PetStore from http://java.sun.com/developer/releases/petstore/ . I installed it and it was ok.
    As it is an example project, I would like to get detailed documentation about it. I mean, if a project like this starts, what kinds of decisions do we have, what does the domain model look like, and why? So, a complete description of the project.
    I have searched the web with Google but I have not found anything related to this. Would you help me, please?

    Does anyone know where we can get the Business Model, Domain Model and Object Model for the Pet Store app?
    Thanks.

  • Just-In-Time Advice: Details on the design decisions that led to today's behavior

    This is the update about Just-In-Time Advice design that I promised last week in an
    earlier post.
    The feedback received from this forum will be helpful in modifying the
    Just-In-Time Advice dialogs (JIT dialogs) in future LV versions. JIT
    first appeared in LV7.0, and I waited until now to really poll for
    feedback so that I could hear how it actually affected users... I
    guessed that if I polled when LV7.0 first came out I'd get the "MS
    Clippy must die!" response. :-) Given the feedback, I suspect there
    will be changes to JIT's behavior in the future, though don't expect it
    in the next release. We generally keep two or three versions of LV in
    the pipeline, and feature sets for the next immediate release would
    have been determined long ago.
    JIT dialogs grew out of a consistent problem we face when we change the
    behavior of any aspect of LabVIEW from one version to the next: how do
    we let the experienced users know about the change? There were two
    comments that appeared repeatedly in the feedback:
    include information in the Upgrade Notes
    I can only take in so much information at one time
    These two conflict with each other. I am fairly certain that everything
    for which a JIT dialog exists is mentioned in the Upgrade Notes, along
    with a host of other changes. Someone pointed out that the Notes were
    21 pages long in a recent release. Our tech writers do an amazing job
    cleaning up the sometimes cryptic notes from developers: "I changed XYZ
    to PDQ. You might want to mention that in the UN." Asking them to make
    the Notes a thrilling read that will be memorable from first page to
    last is a bit much. (Though, tech writers, if you're reading this, I'm
    not averse to seeing an attempt!)
    So, like several other programs in the world, LV decided to look to a
    system whereby information can be supplied when it is needed. There's a
    lot of bad perception around features like this, so we walked
    cautiously.
    Here are the basic feature requirements:
    We are targeting upgrade customers, not new users. Everything in LV is new to a new user, and they are the group most likely to actually read documentation about a feature. Upgrade customers generally assume nothing has changed until they get burned. We want to call their attention to the change before that point.
    We want something that takes up very little screen real estate.
    We want something that is easy to get out of the way while the person is working -- Clippy had all those annoying "I'm going away now" animations.
    We wanted to be able to update the user wherever in the LV editor they happened to be working.
    It must be easy to turn off.
    Continued in reply...Message Edited by Aristos Queue on 08-26-2005 07:39 PM

    A large part of JIT was trying to draw the compromise between insisting, "Hey, you've never seen this, pay attention" and staying out of the way. The comparison to "backseat driver" made in one post was very accurate. I've posted all this information to try and give insight into what we were up against when designing JIT. I figure it will help if we get feedback that understands the problem we were trying to solve. And, if nothing else, it lets you know that the programmers in R&D who make your software really aren't secret MS Clippy fans.
    To close, I'm going to give a brief description of each of the JIT dialogs found in LV7.0. In each case, we felt that the change in LabVIEW was important enough that a customer proceeding down the old track would appreciate knowing about the new one. Generally, our reasons were in one of two categories: either the user was about to create a bug for themselves by using the old or because we felt the new way was a significant improvement of productivity/usability.
    First Launch -- Immediate direction to the What's New help page and information about the Tools>>Options settings.
    AutoTool -- We disabled the TAB key and made AutoTool, first introduced in LV6.1, the default behavior. If you wish to argue about the wisdom of this change, that's for a different thread. But the fact that the change was made seemed like something we should tell users.
    Timestamp data type -- When you told a Numeric to format for date/time, as you would have done in LV6.1, we wanted to point out that LV actually had a new datatype, with its own control/indicator, that could save you a lot of headaches and improve precision.
    Custom Probes -- When users first create a probe, we wanted to let them know that they had more debugging power in the new version by using custom probes.
    Flat Sequence -- When you drop a stacked sequence structure, we pop up to tell you about the flat sequence structure. This was an oft-requested feature, and it does make diagrams easier to read.
    Custom Error Codes -- One of the uses of the General Error Handler is to define your own error codes. So when you drop it, we pop up to mention that the error handling of LV has significantly improved to let you define errors that are portable with your application and across your company, instead of being tied to a particular VI.
    Clean Up Wire -- A new feature of LV7.0. Not everyone likes the "route my wires while I'm working" feature, but selective use of "clean up wire" can be very powerful in helping to quickly turn a mess of a diagram into readable code. We pop this up when the user moves wire segments around by hand.
    Automatic Error Handling -- This popup tells you what just happened the first time that an error dialog appears while you're running your VI, even though you didn't drop an error handling subVI into your diagram. This feature did well enough in beta testing that we decided it should be enabled by default. The JIT popup helps clarify what this is.
    Poly VI Selector -- What is that ring control that just appeared under my Poly VI on the diagram? The JIT tells you.
    Front Panel Open -- I mentioned this one in the earlier thread. The old "FP.Open" property was unacceptable for a lot of use cases. It led to ambiguous situations and buggy code. The new Open FP method and Close FP method and the FP.State property (which is an enum, not a boolean) are more up to the task of correctly controlling your VI front panels.
    DAQmx Code Gen Help -- Gives some information about the code that the new DAQmx generates for you.
    Thank you, everyone, for the feedback. We keep LV improving version over version. The JIT was an idea to solve a problem. With the feedback, we have enough information to evaluate that solution and either refine it or try something else.
    Footnotes:
    1) The state panel is that set of buttons where you find the run arrow and execution hilite button.
    2) Some of you will be happy to know that the "New & Changed in version X" page of Tools>>Options is an idea we've decided to continue in the future. It's been very popular. Also, it doesn't appear to be common knowledge that you can carry your config file with you when you upgrade. On Windows and Linux, just copy the config file from your 6.1 install into your LV7.0 directory. Or 7.1. Or 8.0... :-)

  • Program Design decisions and resource usage.

    OK, I have a rather lengthy method, and I am trying to decide whether to make it a static method or an instance method.
    So I have an object called obj of class ObjectA and a method called validate. I expect that at any one time there will be many, many instances of class ObjectA in existence.
    I can either make the method an instance method and refer to it as obj.validate(),
    or I can make the method static, change the code slightly, put it anywhere, and refer to it as validate(obj).
    Given that there are many, many instances of this class, of which obj is one, are there any performance or resource considerations involved in deciding whether the method should be static or instance?
    Thanks in advance.

    BigDaddyLoveHandles wrote:
    Fguy wrote:
    I don't follow. Why is performance irrelevant?
    "Performance" isn't irrelevant. If I have a task that needs to run in 1 second and it's currently taking 47 hours, it's certainly relevant.
    But that's not what you brought up. You brought up static versus non-static methods. The difference is tiny. It's like someone who is trying to get the best mileage out of their car asking whether they should listen to AM or FM radio.
    But we understand. Fixating on micro-efficiencies is a disease that's endemic among newbies. The sooner you get over it and focus on writing clean, simple code, the better.
    Itsy bitsy means it goes along the lines of "no difference".
    It's possible that it diverges as usage increases, but the rate of divergence is also itsy bitsy.
    So I guess you don't need to worry; I recommend just writing cleaner, easier-to-understand code, which will rather increase your productivity in the future.
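    For what it's worth, here is a tiny sketch of the two forms being discussed (the field and the validation rule are made up). Both compile to a single method stored with the class, so neither adds per-instance memory no matter how many ObjectA instances exist; the only difference is the call form:

    // Instance form is called as obj.validate(); static form as ObjectA.validate(obj).
    class ObjectA {
        private String value;

        // Instance method: implicitly receives "this".
        boolean validate() {
            return value != null && !value.isEmpty();
        }

        // Static method: the object to check is passed explicitly.
        static boolean validate(ObjectA obj) {
            return obj.value != null && !obj.value.isEmpty();
        }
    }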

  • Need suggestion on a Design Decision.

    Hi All,
    One of our customers, which has more than 100,000 employees, needs a leave management solution. If every employee applies for around 26 leaves yearly, the Custom List would have at least 2,600,000 items. I am wondering whether I should propose this solution, since SharePoint would not be able to handle so many items in a Custom List.
    What is your suggestion guys?
    Regards Restless Spirit

    Hi Restless Spirit,
    You can indeed handle large amounts of data in SharePoint 2013 lists. Please have a look into external lists and BDC models for handling such kinds of scenarios.
    The following links will be helpful in understanding how lists handle large amounts of data.
    http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
    http://office.microsoft.com/en-us/sharepoint-server-help/manage-lists-and-libraries-with-many-items-HA102771361.aspx
    http://www.layer2solutions.com/en/community/FAQs/BDLC/Pages/SharePoint-Large-Scale-External-Data-Integration.aspx
    http://www.ericgregorich.com/blog/2013/7/10/working-with-list-view-thresholds-in-sharepoint-2013
