Avoid summarization in cj88 to COPA

Hi All,
do you know if there is any way to avoid summarization in CJ88 settlement to COPA?
I need to settle a WBS element to COPA and preserve the original cost-of-sales split from Results Analysis.
Regards
Simone

Similar Messages

  • COPA Configuration for In house producction Material

    Hi all,
    in the normal flow, both revenue and cost post through the value fields ERLOS and Material Costs / Material External Production. But for in-house produced material, in addition to the above, all the labour costs, material costs and overhead costs are getting posted to COPA. How can this be avoided?
    What COPA configuration is required for in-house produced material?
    Thanks in advance for the valuable inputs in this regard.
    Surya

    Hi
    Why do you want to avoid it? If the material is a trading material, the COGS has only one single figure. But in the case of an in-house material, it has a break-up like RM, labor, overhead, etc.
    Why do you want to avoid it? I don't think that's correct.
    Still, if you want to avoid it, remove the value field assignment in KE4R... Just specify the VPRS condition type in KE4I and map it to a value field
    BR
    Ajay M

  • Share instrument in loops

    Hi,
    I'm wondering if there's a way to share a front panel graph across multiple loops/cases in LabVIEW. Basically I'm trying to get an IV chart based on the test the user selects; the only way I can get this to work correctly is to have an individual graph within each case. Obviously this is not ideal, as it would require numerous charts on the front panel. I know there must be an easier way to do this but can't figure it out. Right now I only have a graph for case 1 and have left the other 4 cases unwired for testing purposes.
    Thanks,
    Konrad
    Attachments:
    Control 1.ctl ‏5 KB
    frontpanel.vi ‏716 KB

    niedk wrote:
    So basically the event would push the data out of the loop if the values changed? Would the loop then continue until complete?
    Konrad
    The loop would be your main loop, taking measurements and/or reacting to UI changes until you close the program. Which would land in one of 3 designs (requiring more or less modifications to the current code):
    State machine
    Event driven program
    Producer/consumer
    I've come to like the producer/consumer a lot lately, though it's also the biggest modification. The other two can be quite similar: in the state machine, the state is the main controlling information, and in one state (idle) you can have an event structure handling things. In the other case it's the other way around: the event structure is the main controller, and in the timeout or user event case you'll do your measurements.
    In all designs you should have information going through the program in shift registers in the main loop, so you can access it and avoid unnecessary controls/indicators/data copies and keep a good wired flow.
    In this case I'd try the state machine; with a type-defined enum for states it's easy to expand and keep code modular. The state structure can be direct-, array-, queue- or event-based, but I think the direct form will be good enough; it's rather easy to change to one of the others should it be needed.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV
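    LabVIEW is graphical, so none of these patterns are textual code, but the producer/consumer idea translates to any language. A minimal sketch in Python (purely illustrative; in LabVIEW the queue would be a LabVIEW queue reference and the two loops would sit side by side on the diagram):

```python
import queue
import threading

def producer(q, n):
    # Acquisition loop: push each "measurement" into the queue.
    for i in range(n):
        q.put(i * 0.5)   # stand-in for a real measurement
    q.put(None)          # sentinel: tell the consumer to stop

def consumer(q, results):
    # Processing/UI loop: react to data as it arrives.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

q = queue.Queue()
results = []
t = threading.Thread(target=producer, args=(q, 5))
t.start()
consumer(q, results)
t.join()
print(results)  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

    The key property is the same as in LabVIEW: the producer never blocks on slow processing, the consumer never polls, and the queue decouples the two loops.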

  • Credit Note without Delivery Doc.

    Hi all,
    I can't find a way to avoid posting the quantity in COPA when the document generated in SD is not related to a goods movement. Sometimes the client I am working for at the moment has to post adjustments (credit/debit) with no delivery document (since the goods have already been delivered), and these are only posted to Sales Revenue (debit/credit). We could get rid of the cost determination in COPA from SD, but now we are struggling with the Qty.
    They don't want to see any Qty. adjusting the credit/debit since during month-end, when they are reconciling the difference in Qty. between COPA and Stock the "1s" posted to these credits and debits are the difference in their reconciliation.
    Does anyone have any idea how we can get rid of the Qty. in COPA when posting this kind of SD document?
    Thanks and Regards,
    Fb.

    flower82,
    Make sure the credit notes are a different billing type from the normal ones, because the exception configuration (Tcode KE4W) is differentiated by type. Then enter the type and the respective value fields (the quantity field in your case) and tick the "Reset" indicator.
    Hope the above helps

  • Are there any advantages to using a Data Value Reference for LabVIEW Classes?

    Hi
    I recently came across an example where the developer had used a data value reference for the class cluster in LabVIEW.
    What are the advantages of doing this?
    Doesn't the use of LV objects already avoid the creation of multiple copies of data, thereby reducing memory usage?
    Thanks
    AD

    LabVIEW's OOP is implemented by value. This means, as Tst stated, branches in wires can mean copies of the object. The DVR is a way to make it by reference.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
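    A rough textual analogy of by-value versus by-reference, sketched in Python (illustrative only; LabVIEW wires and DVRs are not Python objects):

```python
import copy

data = [1, 2, 3]

# By-value semantics: a branched wire may cost you an independent copy.
branch = copy.deepcopy(data)
branch.append(4)
print(data)    # [1, 2, 3] -- the original is untouched

# By-reference semantics (roughly what a DVR gives you): both names
# point at the same underlying data, so there is only one copy.
ref = data
ref.append(4)
print(data)    # [1, 2, 3, 4] -- the change is visible through either name
```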

  • Correct export color space for wide gamut monitors.

    Running a photography studio I have 4 typical scenarios of how clients or end users will see my photo work.  I create and edit the photos using LR 3 on a HP 2475w (wide gamut) monitor.  I'm aware that there are color shifts, but trying to figure out which export color space to use to be most consistent.
    A) Wide Gamut monitor using color managed software or browser such as Firefox.
    B) Wide Gamut monitor NOT using color managed software such as IE 8.
    C) Standard monitor using color managed software or browser such as Firefox.
    D) Standard monitor NOT using color managed software such as IE 8.
    A) gives the best results and that's what I run myself.  No matter the color space that I export (sRGB, aRGB, or my custom calibrated ICC) the images appear to be correct 100%
    B) gives mixed results. The hosting site for my photos seems to oversaturate a bit when I view the photos in their preview size, which is what my clients see. When I view the original photo in full resolution (this feature is disabled for my clients, to avoid them downloading full-rez copies of images), the images appear a bit dull (70%). When I try this same scenario using an aRGB export, it looks better (90-95%). When I export using my monitor profile, the photo is spot on (100%); however, my monitor profile shows the photo incorrectly when viewing it in the standard Windows Vista photo viewer: it appears lighter and less saturated, which I guess I expect since it's not color managed.
    C) On a standard monitor the photos all look the same regardless of color space export so long as I use a color managed browser such as Firefox.
    D) This gives pretty much the same breakdown of results as scenario B above.  At the moment, it appears that when I use my custom ICC profile which is the calibration of my monitor...I get the best web results.
    However, my custom ICC profile gives me the worst local results within my Windows viewer, and when my clients load the photos on their machines, no doubt they will look just as bad on theirs, regardless of which monitor they use. So aRGB seems to be the best choice for output. Anyone else do this? It's significantly better when viewing in IE on both wide gamut and standard LCDs when compared to sRGB.
    I would guess that my typical client has a laptop with Windows and they will both view the photos locally and upload them on the web, so it needs to look as close to what it looks like when I'm processing it in LR and Photoshop as possible.  I know that a lot of people ask questions about their photos being off because they don't understand that there's a shift between WG and non-WG monitors, but I get that there's a difference...question is which color space export has worked best for others.

    I am saying that since images on the internet are with extremely few
    exceptions targeted towards sRGB. It is extremely common for those images to
    not contain ICC profiles even if they really are sRGB. If they do not
    contain ICC profiles in the default mode in Firefox, Firefox (as well as
    Safari btw, another color managed browser), will not convert to the monitor
    profile but will send the image straight to the monitor. This means that on
    a wide gamut display, the colors will look oversaturated. You've no doubt
    seen this on your display, but perhaps you've gotten used to it. If you
    enable the "1" color management mode, Firefox will translate every image to
    the monitor profile. This will make the colors on your display more
    realistic and more predictable (since your monitor's very specific
    properties no longer interfere and the image's colors are displayed as they
    really are) for many sites including many photographic ones. This is most
    important on a wide gamut display and not that big of a deal on a standard
    monitor, which usually is closer to sRGB.
    It seems you are suggesting that for a wide-gamut display it is better to
    try using your own monitor's calibration profile on everything out there,
    assuming on images posted with a wider gamut it will get you more color
    range while there would be nothing lost for images posted in sRGB.
    Indeed. The point of color management is to make the specific
    characteristics of your monitor not a factor anymore and to make sure that
    you see the correct color as described in the working space (almost always
    sRGB on the web). This only breaks down when the color to be displayed is
    outside of the monitor's gamut. In that case the color will typically get
    clipped to the monitor's gamut. The other way around, if your original is in
    sRGB and your monitor is closer to adobeRGB, the file's color space is
    limiting. For your monitor, you want to make the system (Firefox in this
    case) assume that untagged files are in sRGB as that is what the entire
    world works in and translate those to the monitor profile. When you
    encounter adobeRGB or wider files (extremely rare but does happen), it will
    do the right thing and translate from that color space to the monitor
    profile.
    Wide gamut displays are great but you have to know what you are doing. For
    almost everybody, even photographers a standard gamut monitor is often a
    better choice. One thing is that you should not use unmanaged browsers on
    wide gamut displays as your colors will be completely out of whack even on
    calibrated monitors. This limits you to Firefox and Safari. Firefox has the
    secret option to enable color management for every image. Safari doesn't
    have this. There is one remaining problem, which is flash content on
    websites. Flash does not color manage by default and a lot of flash content
    will look very garish on your wide gamut display. This includes a lot of
    photographer's websites.
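    The clipping behaviour described above can be sketched numerically. This is a toy model with a hypothetical per-channel gamut limit, not how a real CMS works (real conversions go through ICC profiles in a device-independent space):

```python
def clip_to_gamut(rgb, gamut_max=(0.9, 0.95, 1.0)):
    """Clamp each channel to a (hypothetical) monitor gamut limit."""
    return tuple(min(channel, limit) for channel, limit in zip(rgb, gamut_max))

# A saturated red that exceeds this hypothetical monitor's red primary
# gets clipped; in-gamut channels pass through unchanged.
print(clip_to_gamut((1.0, 0.1, 0.0)))  # (0.9, 0.1, 0.0)
```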

  • Saving a Custom (Manual) Sort

    I searched and searched through Bridge and found NO way to save a custom sort (manual) with a name (or a collection if you prefer). I created a manual custom sort in a folder and ran a PS action to reduce the size of each photo and put them into a new folder. When I opened the new folder the sort was alpha, not like the old folder. I now have to manually sort over 300 photos in new folder.
    Is there a way to do this, and if not, can a script be written for it? I create slide shows and know of hundreds of examples of how I would use this. It is really labor-intensive to put the photos in a special order and then have no way to save it for future use.
    Any suggestions or help would be appreciated. If I have to pay someone to do this, I will. thanks.

    I am using Bridge (Version 1.0.4) and CS2 on WinXP SP2. I am extremely disappointed in a limitation in Bridge that seems to be related to JamesAGrover's challenge to save a manual sort that he has created in a collection.
    A review of topics related to manual sorting in these Adobe forums indicate that there is a general limitation to sorting a collection. This limitation seems to exist in any of the Adobe products for Mac or PC that offer the ability to view "collections" of images, for example, Lightroom and Bridge.
    A "collection" seems to be a set of file names that are handled independently of the real files themselves. The magic resides in the user's experience of handling the "collection" of filenames as if they were the actual files themselves.
    Of course, there is no "magic"; the collection is working with some kind of pointer to the actual files, wherever they may truly reside.
    There are many powerful reasons for offering functionality of this kind. The one that primarily interests me is that by working with pointers to the same file from different collections, I can create different subsets of images and unique sort orders without copying the original files. I avoid filling my hard drive with copies, but I can burn thousands of CDs and DVDs, each with different selections and sorts.
    But unfortunately, inside a "collection", I am unable to execute a manual sort at all. The problem exists if my collection includes files from a single directory or from several different directories (folders). Since this seems to contradict the basic promise of a "collection" of filenames, I keep thinking that I misunderstand the user's interface.
    I have set the "View - Sort" menu with every permutation of checks and unchecks for "Ascending Order" and "Manually" with no luck.
    I have discovered a workaround to my problem: I have been able to manually sort, and save, a direct view of a real file system directory from the Bridge interface. But that has forced me to create copies of my images in one unique new directory for every selection and sort that I need.
    To tie this back to the first post in this topic, I must observe that for me, after creating and manually sorting a collection, I will want to save the collection with its unique and idiosyncratic, manual sort. I don't want to copy the original files umpteen times! It seems that I will encounter the problem described by JamesAGrover.
    I wonder if any forum member, or any one at Adobe, has a comment. Specifically I wonder about two things:
    1) Am I misunderstanding the user's interface? Is there a setting that I have not discovered?
    2) Is there a basic limitation with "collections" that has been removed in CS3?

  • How to build a working executable of the DataSocket writer example VI

    I'm trying to build an executable of the "DS Writer.vi" example.  The first problem was that the "DataSocket Server Control.vi" was not distributed with my build/install script.  I fixed that (thanks Dennis) but now I get the following error message.
    LabVIEW:  (Hex 0x626) Cannot open a VI with separated compiled code in a Run Time Engine that has no access to the VIObjCache.
    There is a solution to that (http://forums.ni.com/t5/LabVIEW/Getting-LabVIEW-load-error-code-59-using-LabVIEW-2010-with-a-Run/td-...), but it involves changing a property of a VI in the DataSocket library (if I understand that correctly).  I'm reluctant to make a change to a standard NI library as it could cause maintenance problems in the future (other developers not having the same change, or updating LabVIEW versions).
    Has anyone successfully built an executable using DataSocket?  If so, can you offer any advice?  I'm using LabVIEW 2011.
    Thanks, Mark
    Solved!
    Go to Solution.

    Thanks Nathan.  I was able to build and deploy an executable with DataSocket. The last hurdle was to "Uncheck the 'Separate compiled code from source file' option in VI Properties" for the "DataSocket Server Control.vi". The only problem with that solution is that I now have a customized DataSocket library. I think one solution might be to copy that VI or the library into my project folder, so that the solution is transferable to others in the future. I would like to hear how other LabVIEW developers deal with customized libraries, whether by strictly avoiding them, maintaining their own copies, or some other solution. But that is probably a topic for a new thread.

  • Two Accounts Merging to One Computer?

    Hi there - hope this is the right forum for this:
    My significant other and I will be "merging households" in a few months, we are both avid Mac users, and we both have purchased a very huge chunk of our music via the iTunes store. Once we move in together, we are going to want a solution where we can have our music within a single iTunes library (on a single computer), so that we can have, say, the AppleTV access this one library. We also want to be able to sync the music from both of our iTunes accounts to both of our iPods. Basically, we're trying to avoid having to buy two copies of an album for our one household.
    So far, I've been able to authorize his iTunes music store account on my computer (while maintaining the authorization of my own account), and he's sent me one of his purchased songs to test, and it plays. So multiple accounts on one Library seems to work at this point. If I were to sync my iPod, would this file download onto it and playback alright?
    iBook G3 800MHz Mac OS X (10.4.8)
    iBook G3 800MHz   Mac OS X (10.4.8)  

    You can merge all the files from one computer to another. All the protected songs purchased via the iTunes Store will need authorization (which it appears you have already tested), but a single iTunes library can be authorized to play songs purchased by multiple users.
    Each iTunes account can be authorized on up to 5 separate computers, and as far as I know there is no apparent limit on the number of authorized accounts on a single computer.
    Any song in your iTunes library that you have authorization to play will (by default) sync with your iPod automatically (assuming your iPod's sync settings grabs your whole library) and play fine on your iPod.
    To merge the two libraries, simply copy the music from one computer to the other, then import all the songs, either via Import in the File menu or, more simply, by dragging and dropping the whole folder into the iTunes window. All the songs will be listed in iTunes and stored inside the library along with all your previous songs.
    Using the same library between the two separate computers is a bit different. You first have to move the iTunes library to an external networked hard drive, then point both copies of iTunes to the library in iTunes->Preferences->Advanced. This is an imperfect solution, as when one person adds a song it will not necessarily show up in the other person's library (despite the fact that they are stored in the same place).

  • Apple TV hard drive vs streaming

    I have an Apple TV, iPhone and iPod nano. When converting a video, I am trying to avoid having two or three copies of movies in iTunes. I have a 160 GB ATV. If I sync the ATV, the movie will be on there, but if I delete it from iTunes, will it then disappear from the ATV? Is there a way to load movies onto the ATV without syncing it with iTunes?

    Deltchiro,
    Although you don't specifically state it, I'm assuming that you're using something like Handbrake to convert your videos. I suggest that you play around with the conversion settings and find one that will play on all three devices.
    I've got an iMac, iPod Touch 2G, iPod Classic, and ATV. Using the Handbrake ATV settings and adding the iPod kernel checkbox with 2500 bitrate, I've gotten a great-looking video conversion that plays on all of my devices. On top of that, the resulting file is quite manageable and it contains both Dolby 5.1 AND 2-channel stereo tracks.
    One movie file -- four devices -- can't beat that!

  • S_ALR_87013148

    Hi
    When I execute the S_ALR_87013148 report following is message.
    No suitable summarization level with data exists
    Message no. KN424
    Please explain what a summarization level is.
    thanks

    Hi
    In COPA, COPA assessments and COPA reports are read from summarized records, not line items.
    Hence you need to set up summarization levels in COPA.
    Config steps:
    1. You can have SAP propose the summarization levels via TCode KEDVP
    2. Or you can manually create the summarization levels via TCode KEDV
    3. Run TCode KEDU to build and fill the summarization levels with data
    Regards,
    Suraj

  • Architecture of firewall, cache, HTTP, and DMZ

    Assume the Web Cache server is located in the DMZ while the HTTP Server and the (web) application server is located behind the firewall (private LAN).
    1) Some resources mention that it is usual to place the web cache server behind the firewall but in front of the application server. If I place the web cache server in the DMZ, is that acceptable?
    2) If the answer to question 1 is YES, will the web cache server also need to have the HTTP server installed in order to communicate with the HTTP server and the application server located behind the firewall? Otherwise, how will the HTTP calls from the internet be handled? (If, in this case, we do need the HTTP server on the web cache server, then we'll have two installations of the HTTP server, in the DMZ and behind the firewall -- is this acceptable?)
    3) Can I install the HTTP server only on the web cache server located in the DMZ? In this way, internet calls will be handled by the HTTP server first, which will then route them to the application server behind the firewall, avoiding the installation of two copies of the HTTP server (in front of and behind the firewall).
    4) Can I use it (ISA Server 2004) to replace the dedicated firewall?
    If so, I'll have the firewall, the web cache, and the HTTP server all together on one machine, while the application server and the database server will be located behind this firewall. How feasible and acceptable is this architecture?
    5) Is it feasible and acceptable to have a web cache server with all of the caching, HTTP forwarding, and firewall functions together on one machine, and to place this server in the DMZ?
    Many thanks for your help.
    Scott

    Yes, to answer one of your questions, you can place the web cache server in the DMZ; that is not an issue. Whether the web cache server needs to be installed with the HTTP server depends on how you intend to use it, and in particular on whether you want to use it with a browser.

  • How do you avoid unneccesary "array copies"?

    Hello - I have been having significant difficulty trying to optimize my code, as I have been finding that LabVIEW tends to make copies of variables almost every time you access them. This may not be a big deal sometimes, but in my case I am using arrays that are 10 MB in size or greater, and making duplicates of them not only wastes memory but also time. I would like to figure out a way to avoid this. From what I can see, using a continuous loop with shift registers and sequence locals can reduce some of the duplication, but generally that becomes a wiring nightmare in the diagram. Also, one thing that I would like to do is keep some of these arrays within a record format, but I think any time you have a wire coming out of a bundle, a new copy of that variable is made. I'd like to be able to use pointers, but someone from NI told me pointers were a "bad word" in LabVIEW, and references obviously don't work the same.
    Any ideas would be appreciated (sorry the text is long).

    > I have read that document, and especially in terms of the
    > bundling/unbundling, it more or less just states to avoid it which for
    > my application may not be the easiest solution. Also, in it a phrase
    > reads "Each level of unbundling/indexing might result in a copy of
    > that data being generated". I'm trying to figure out when "might"
    > becomes "does" or "doesn't". Also, I have seen the consequences of
    > using locals, globals and "value" property nodes. I wonder why LabView
    > requires making a copy when these are used (because programatically
    > within the scope of the diagram they are much easier to use then
    > sequence locals and shift registers). thanks.
    First, I'll comment on the local and value property. If the AE really
    said that pointers are a dirty word in LabVIEW, what they really meant
    was that LabVIEW works on dataflow and really doesn't expose the pointers.
    The upside of this is that it is much less likely that you will have
    uninitialized values, stale pointers, memory leaks, etc. You also have a
    much clearer idea of what is happening to your data, and parallelism is
    much easier to see/use. The downside is that you have less control.
    Also, it is best not to think of the reference as a pointer/reference to
    data, but instead it is a reference to a UI object, the control or
    indicator.
    Locals and reference->value are convenient when you are used to writing
    lines of code, or expressions that move the data from one location to
    another. This code is typically using a sequence to make lines of code
    where each of them loads what it needs, modifies it, then stores the
    results. Again, this feels familiar, but it is not what works well in
    dataflow like LV. It works much better to largely get rid of the
    sequences and simply let the data dependency do the work.
    Doing this should automatically help your memory usage provided you keep
    the items in the performance chapter in the back of your mind. Also,
    remember that subVIs normally don't hurt memory usage provided their
    panel is not open, and they can even help with memory usage as storage
    buffers can be reused.
    For the unbundle issue, hopefully you have an array of rather small
    records, but in the event you have a record with large arrays, you do
    need to avoid repeatedly accessing the elements, indexing, etc. You may
    want to build a data access subVI that can do the indexing, searching,
    etc in a common location and avoid returning the big arrays. This also
    allows for the storage to be moved to a shift register and the whole
    thing becomes a functional global.
    You might want to spend a little time browsing through the examples for
    smart or functional globals as well as some of the analysis ones that
    process arrays to see how they deal with the sequencing and wiring.
    Greg McKaskle
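    Greg's "functional global" (action engine) pattern keeps the large array in one place and exposes only small operations on it; in LabVIEW the storage lives in an uninitialized shift register of a subVI. A loose Python analogy (illustrative only) uses a closure as the storage:

```python
def make_functional_global(size):
    # The big array lives here exactly once; callers never get a full copy.
    store = [0.0] * size

    def action(op, index=None, value=None):
        if op == "set":
            store[index] = value
        elif op == "get":
            return store[index]   # return one element, not the whole array
        elif op == "size":
            return len(store)

    return action

fg = make_functional_global(1_250_000)  # ~10 MB of 8-byte floats
fg("set", index=42, value=3.14)
print(fg("get", index=42))  # 3.14
```

    Every access goes through one entry point that returns only what the caller asked for, which is exactly how the subVI approach avoids handing the big array around the diagram.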

  • COPA Summarization Levels

    Hi All,
    I have created summarization levels in COPA, and when I run the COPA KE30 report with the summarization levels active or inactive, it produces the same numbers in the DEV system, so no issue there. But in the QA system, the same summarization levels do not produce the same numbers: the numbers when summarization levels are activated are less than half of the numbers when summarization levels are deactivated. Same problem in the PRD system. I have tried to resolve the issue by creating the summarization levels directly in the PRD system, but the numbers did not match in that case either. The numbers provided by the report when the summarization levels are deactivated are less than even half of the numbers when the summarization levels are activated.
    I have checked the report parameters in the DEV, QA and PRD systems and they are same.
    Please let me know what may be causing this inconsistency.
    Can it be because of some basis problem? I mean some memory problem etc which may be causing the summarization tables not to have all the data in QA and PRD systems.

    You wouldn't get any error message in the reports.
    Just compare the authorizations in the development system with the others.
    Also, do one thing: execute transaction SU53 immediately after the report execution, to check for any missing authorizations.

  • Avoid copies in .wlnotdelete?

    Hi people
    When you deploy an enterprise application into Weblogic 6.1, the whole application
    is copied into applications\.wlnotdelete\wl_app49002.ear and into applications\.wlnotdelete\wlap25104\
    in exploded format!
    That means in our case, 150 MB are copied twice!
    How can I avoid this copying?
    The developers are working on NT with the enterprise application in exploded format.
    The enterprise application is not under the WL directory tree, but it is registered
    in the config.xml of the domain.
    In production we may use ear format.
    WL 6.1 sp 1.
    Thanks a lot
    Alex

    Hi
    Try using a field counter from structure SAPSCRIPT:
    Copy &SAPSCRIPT-COUNTER_0(+)&
    The system should add 1 to the field counter each time that command runs.
    But I'm not sure it will work, because if the user chooses the number of copies in the print dialog (from FM OPEN_FORM), the system will print the same document that many times.
    I think you should instead call your SAPscript once per copy:
    DO copies TIMES.
      CALL FUNCTION 'WRITE_FORM'.
    ENDDO.
    Max
