SNP aggregation issue

Hi Experts
Here is a scenario for which I need some help. We have multiple locations, and these locations can be supplied by one or more distribution centers. The distribution centers need to be planned in APO, but the plants have to be MRP planned. So the scenario is: demand from multiple plants aggregating to distribution center A, and demand from another set of plants aggregating to DC B. Apart from the demand from the plants, the DCs also have their own demand.
I have maintained the hierarchy at material - DC level and I tried to plan the DCs using the SNP aggregated planning book. But the problem is that the DC's original demand gets overwritten by the demand from the individual plants. How do I overcome this issue? That is, how do I make sure the aggregated demand is the sum of the DC's original demand plus the demand placed by the DCs on the plants? I want to know if there is any straightforward way of achieving this before modifying the macros to achieve the same.
Thanks
Saradha
Edited by: Saradha Ramesh on Sep 3, 2010 11:12 PM

Datta
Yes, the plants do not get planned in APO; only the DCs are APO planned. We run MRP at plant level to create STOs from the DCs to the plants. We forecast the material in DP (forecast for plants and DCs) and release the forecast to SNP. We transfer the supply / demand (SOs) / stock from R/3 to APO for the material (plants' and DCs' transaction data). Now we know the net demand value at each plant. We roll up the net demand from the plants to the DCs by using the aggregation / hierarchies. Up to this point everything is fine. But the issue arises when the net demand from the plants overwrites the DC's demand. That is, the DC has 10 EA of demand from the plants. The DC supplies a customer, and the demand placed by the customer on the DC is, say, 5 EA. When I aggregate the demand, I should see 10 + 5 = 15 EA, but what I see is 10 EA. This is the issue.
Thanks
Saradha
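The additive rollup Saradha is after can be sketched in a few lines of Python (illustrative only, with the hypothetical 10 + 5 = 15 EA numbers from the thread; the real fix would be in the planning book macros or hierarchy setup, not in Python):

```python
# Sketch of the desired behaviour: the aggregated figure at a DC should be
# the DC's own demand PLUS the demand rolled up from its plants, not a
# replacement of it. Names and numbers are hypothetical.

def aggregate_demand(own_demand, child_demands):
    """Additive rollup: the parent keeps its own demand and adds the children's."""
    return own_demand + sum(child_demands)

dc_own = 5               # demand placed by the customer directly on the DC (EA)
plant_demands = [4, 6]   # net demands rolled up from the plants (10 EA total)

# Overwrite behaviour (the reported problem): the rollup replaces own demand.
overwritten = sum(plant_demands)                       # 10 EA
# Additive behaviour (the goal): own demand is preserved.
aggregated = aggregate_demand(dc_own, plant_demands)   # 10 + 5 = 15 EA
print(overwritten, aggregated)
```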

Similar Messages

  • Aggregation issue in BEx report

    Hi all,
    I am facing the following aggregation issue at reporting level. BW system 3.5.
    Cube1
    Material, Company code, Cost center, Month,   Volume KF
    Cube2
    Material, Company code, Cost center, Month,   Price KF
    Multiprovider
    Material, Company code, Cost center, Month,   Volume KF, Price KF
    Report
    - Global calculated key figure 'Value' is based on the basic key figures Volume KF and Price KF.
    - Time of aggregation is set to "Before aggregation" in the properties of the calculated key figure.
    - Only one characteristic, 'Company code', is used in the report.
    When I execute this report, the calculated KF is not working (no values). If I change the time of aggregation to "After aggregation" in the properties of the calculated key figure, it works but gives wrong values: Price gets aggregated (added up) and multiplied with Volume, which is wrong.
    Can you please give me an ideal solution to resolve this?
    Thanks,
    Harry

    Hi all,
    Can I assume that there is no solution for this issue?
    Thanks,
    Harry
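The difference between the two aggregation times in Harry's scenario can be sketched in plain Python (hypothetical numbers, not BW internals): "before aggregation" multiplies per record and then sums; "after aggregation" sums each key figure first and then multiplies, which is why Price gets added up incorrectly.

```python
# Sketch of the two aggregation times for CKF "Value" = Volume * Price.
# Hypothetical data: two materials under one company code.
records = [(100, 2.0), (50, 3.0)]   # (Volume KF, Price KF) per material

# "Before aggregation": Value is computed per record, then summed.
value_before = sum(vol * prc for vol, prc in records)    # 100*2 + 50*3 = 350.0

# "After aggregation": each key figure is summed first, then multiplied.
# Price gets added up across materials, which is meaningless.
total_volume = sum(vol for vol, _ in records)            # 150
total_price = sum(prc for _, prc in records)             # 5.0
value_after = total_volume * total_price                 # 750.0 -- wrong
print(value_before, value_after)
```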

  • Aggregation issue for report with BW structure

    Hi,
    I am facing an aggregation issue while grouping reports in WebI.
    We have a BW query with 16 values, which we bring into BO as a structure. Out of the 16, 8 are percentage values (the aggregation type should be average).
    If we bring the data at site level, the data comes through properly. But if we use the same query and try to sum/group (at region level), the percentages get added up.
    Since it's a dashboard report with lots of filters, we cannot go for a separate query at each level (site, region, zone).
    How can we resolve this? Please give me suggestions.
    Regards
    Baby

    Hi,
    Since we were using a structure, it was not possible to produce the required result in BO.
    We changed the structure to key figures and brought all of them into BO. All the column formulas are now on the BO side.
    Now it is working fine.
    Regards
    Baby
    Edited by: Baby on May 10, 2010 11:39 AM
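The underlying arithmetic problem in this thread can be sketched in Python (hypothetical site figures; the volume weighting is an assumption, since the correct weighting basis depends on what the percentages measure):

```python
# Sketch: why percentage key figures must be averaged (ideally weighted)
# rather than summed when rolling up from site to region level.
sites = [
    # (site, base volume, percentage value)
    ("S1", 1000, 90.0),
    ("S2", 500, 60.0),
]

wrong_sum = sum(pct for _, _, pct in sites)                # 150.0 -- meaningless
simple_avg = sum(pct for _, _, pct in sites) / len(sites)  # 75.0
weighted_avg = (sum(vol * pct for _, vol, pct in sites)
                / sum(vol for _, vol, _ in sites))         # 80.0
print(wrong_sum, simple_avg, weighted_avg)
```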

  • How to aggregate data in SNP aggregated planning?

    Dear Expert,
    Now, I want to aggregate the demand of products (A123, A124 and A224) for location K410 from two locations: 6610 and 6710.
    I have created a location hierarchy with root K410 and two leaves: 6610 and 6710.
    Now how can I aggregate the demand of A123, A124 and A224 in K410 from 6610 and 6710?
    thanks

    Hello,
    If the hierarchy master data is correctly created, activated and assigned to the correct model, you can try aggregated planning in standard SNP aggregated planning book 9ASNPAGGR/SNPAGGR(1). Just load the data, and use 'Location Aggregation' function button.
    If you're new to SNP aggregated planning, please review the below online documents for more detailed information. It is very important that you have the correct master data settings and planning book settings.
    http://help.sap.com/saphelp_scm70/helpdata/EN/2c/c557e9e330cc46b8e440fb3999ca51/frameset.htm
    Best Regards,
    Ada

  • Master Data Setup Steps & Execution for SNP Aggregated Planning

    Hi,
    I need some URGENT help on the master data steps and execution of SNP Aggregated Planning. I've read the SCM 5.0 help, but it left me quite confused as regards the master data setup.
    I want to run SNP for a Group of Products at a Group of Locations (say Customer Group or Group of Distribution Centers) at an Aggregate Level & then later Disaggregate the Generated Supplies from the Aggregate Level to the detailed Product-Locations level.
    I have setup a Location Hierarchy with the Group of Distribution Centers and Product Hierarchy with a Group of Products  & since  SNP_LOCPROD aggregation is assigned to the 9ASNP02 Planning area ... I created a LOCPROD Hierarchy which use the Location & Product Hierarchy.
    The Main intention is to Improve Performance of Planning Runs at a Rough Cut  / Aggregate Level by doing a Rough Cut Capacity Check & Split Distribution Center requirements to Multiple Plants producing the same product.  This would also help increase the Optimizer Performance.
    I am finding there is no place to run the SNP heuristic by specifying a hierarchy in the selection screen of /sapapo/snp01. So how is aggregated planning even run?
    (I know there are aggregation and disaggregation functions available in SNP interactive planning, but what do I put in the selection, as 'Hierarchies' are not available in the selections in the 9ASNPAGGR aggregated planning book?)
    Can someone also Provide me Detailed Steps and point me to a Detailed Training document on this  ?  I could not find anything on the SDN Form as well as BPX Forum as well as on Wiki site from within SDN.
    Please if anybody can Guide me Urgently  ?  I am needing to test this quickly to Demo to someone to demonstrate the APO Capabilities  (else the Customer may lose interest in APO as he is comparing functionality with another tool).  I don't want this to happen.
    Regards,
    Ambrish Mathur

    Harish,
    Read your blog with great interest. I think you captured the features very nicely over there. It was very helpful.
    I still have some very basic questions (sorry if I was not clear earlier), and I will try to state them with an example.
    Are you saying ...
    1. If I have 2 products FG1 & FG2, then I need to FIRST create another product master called FG1_2_GP (say) via Trnx. /sapapo/mat1?
    2. Similarly, for the 5 DCs, say DC1, DC2 ... DC5, I need to first create a location of type 1002, DC1_5_GP (say), via Trnx. /sapapo/loc3?
    I think I have not done these at all.
    3. Finally, are you saying that I need to now create the hierarchies ZCPGPROD (using FG1 & FG2) and ZCPGLOC (using DC1, DC2 ... DC5), and then create a generated hierarchy ZCPGLOCPROD with structure SNP_LOCPROD, using ZCPGPROD and ZCPGLOC as component hierarchies?
    4. I am still not clear how you would link product group FG1_2_GP with ZCPGPROD. Where do we set up that FG1_2_GP is a header of FG1 & FG2?
    5. Similarly, where do you link location group DC1_5_GP with the location hierarchy ZCPGLOC?
    6. Finally, in SNP interactive planning with planning book 9ASNPAGGR, using the Location-Product (Header) as the selection, what will I enter in this field?
    I read your blog; more details on the example would help. I saw in your blog the PH_POWDER product group (containing the PH_FG1,2,3 products) example, but I was not clear whether it was a new product you created via /sapapo/mat1 or a new hierarchy code created via /sapapo/relhshow. Similarly, you did create the location group PH_DC_AGG but do not seem to have used it. I think you assigned the PH_POWDER group to locations 3000, 3100 & 3400 individually via /sapapo/mat1, but I was not clear on how you linked it to the location-product hierarchies. You have also not created a location-product hierarchy using PH_POWDER and PH_DC_AGG at all ... or have you?
    7. Is the untold trick that the product hierarchy name and the product group created in Trnx. MAT1 must be the same, and the location hierarchy name and the location group code created in Trnx. LOC3 should be the same?
    8. I had created a generated hierarchy ZCPGLOCPROD ... is this what I enter in 9ASNPAGGR as the selection for Location-Product (Header)?
    9. I want to first understand standard SNP before I go in the direction of creating my own planning areas and planning books. Is this really needed for me to plan 2 products in these 5 DCs at an aggregate level? I will be aggregating all kinds of demand (STOs, POs, forecast, sales orders, TLB order demand) and want the supply generated at the aggregated level to disaggregate to the 2 FGs at the 5 DC locations based on the proportion of demand.
    I think my question is far more basic with respect to the master data setup; it would help if you could clarify each of my questions separately in your reply. The confusion is due to the lack of clarity on the master data setup in linking the product group product code & location group location code to the respective hierarchies, and also SNP interactive planning not having a place to enter these created hierarchies.
    I do want to praise and appreciate your contribution to the blog ... well done. Full 10 points guaranteed on reply to the above questions.
    Regards,
    Ambrish

  • Aggregation issue on a T5220

    I have an Enterprise T5220 server, running Solaris 10, that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3), and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently such that I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server and no changes were made to the switch that it's connected to (Nortel Passport 8600 Series), and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail (LACP enabled/disabled, different policies, etc.). I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side, but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately, however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    Regards,
    Jim
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) device address speed
    duplex link state
    e1000g2 0:14:4f:ec:bc:1e 1000 Mbps full up attached
    e1000g3 <unknown> 1000 Mbps full up attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device activity timeout aggregatable sync coll dist defaulted expired
    e1000g2 active short yes yes yes yes no no
    e1000g3 active short yes yes yes yes no no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5
    Edited by: JimBuitt on Sep 26, 2008 12:04 PM

    JimBuitt wrote:
    I have an Enterprise T5220 server, running Solaris 10, that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s.
    Is this with multiple backup streams or just one?
    I would not expect to get higher throughput with a single stream. Only with the aggregate throughput of multiple streams.
    Darren
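Darren's point can be sketched in Python (an illustrative hash, not the actual Solaris aggregation code): an L4 policy picks the member link per flow from the TCP/UDP port pair, so a single stream always maps to the same NIC and can never exceed that NIC's line rate; only multiple streams spread across the aggregate.

```python
# Sketch of per-flow link selection under an L4 aggregation policy.
# The hash function and port numbers here are made up for illustration.
links = ["e1000g2", "e1000g3"]

def pick_link(src_port, dst_port):
    """Choose the outbound member link from the flow's port pair."""
    return links[hash((src_port, dst_port)) % len(links)]

# One backup stream -> always the same link, capped at ~1 Gb/s.
stream = pick_link(40001, 10566)
assert all(pick_link(40001, 10566) == stream for _ in range(100))

# Several concurrent streams can land on different links and use the aggregate.
chosen = {pick_link(p, 10566) for p in range(40001, 40021)}
print(stream, chosen)
```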

  • BIGINT aggregation issue in Hana rev 91

    Hi,
    I have a BIGINT value field that isn't aggregating beyond 2,147,483,647 (the maximum INTEGER value).
    I'm seeing results as follows:
    Period   Value
    5        320,272,401
    6        635,021,492
    7        515,993,660
    8        546,668,931
    9        702,138,445
    10       438,782,780
    11       459,387,988
    12       722,479,250
    Result   -2,147,483,648
    We've recently upgraded from rev 83 to 91. I'm pretty sure this is a new issue - has anyone else seen this?
    I'm hoping there is some kind of fix as I don't want to have to convert fields throughout our system to a longer DECIMAL.
    thanks
    Guy
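The arithmetic behind Guy's symptom can be checked in Python (a sketch of 32-bit truncation only; the exact value an engine reports on overflow, whether it wraps, saturates, or errors, is implementation-specific and the -2,147,483,648 above looks like a saturated result, not HANA internals):

```python
# The period values from the post sum to ~4.34 billion, well past the signed
# 32-bit maximum of 2,147,483,647, so any 32-bit intermediate is garbage here.
values = [320_272_401, 635_021_492, 515_993_660, 546_668_931,
          702_138_445, 438_782_780, 459_387_988, 722_479_250]

total = sum(values)                          # 4,340,744,947 -- needs BIGINT
INT32_MAX = 2**31 - 1                        # 2,147,483,647
as_int32 = (total + 2**31) % 2**32 - 2**31   # two's-complement truncation
print(total, total > INT32_MAX, as_int32)
```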

    I've figured out this issue only affects Analytical Views that have calculated attributes.
    Such views generate a CALCULATION SCENARIO in _SYS_BIC, which seems to incorrectly define my field (which is in the data foundation, modelled as a BIGINT) as SQL Type 4, sqlLength 9, as per the following:
    {"__Attribute__": true,"name": "miles","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 4,"sqlLength": 9},"kfAggregationType": 1,"attributeType": 0}
    I also have calculated measures modelled as BIGINT's in the Analytical View. These are correctly defined in the CALCULATION SCENARIO with an SQL length of 18, for example:
    {"__Attribute__": true,"name": "count","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 34,"sqlLength": 18},"kfAggregationType": 1,"attributeType": 0}
    This looks like a bug to me. As a workaround I had to define a calculated measure BIGINT which simply equals my "miles" field, then hide the original field.

  • ESSBASE Aggregation Issue.

    Hi,
    I am facing a serious problem with Essbase. I am implementing Hyperion Planning 11.1.2.2 for one of our clients, and it is the first time I am implementing this version.
    Aggregation is not working in my setup. I have written a rule to aggregate the hierarchy. I have tried AGG, CALC DIM, etc., but still the same issue.
    I have also tried running the Calculate web form rule file, but aggregation is still not happening.
    I have also noticed that in Planning dimension maintenance, even the level 0 members show a consolidation operator.
    Does anybody have a clue?
    Please help me, as I am unable to proceed further.
    Thanks in Advance.
    Regards,
    Sunil.

    It is probably worth testing your script as a calc script, running it directly against the Essbase database using EAS, and then checking the data with Smart View; this process should eliminate any issues in Planning or Calc Manager.
    If you are still having problems, then post your script and I am sure somebody will give you some further advice.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • SNP Optimizer Issue

    Hi Experts,
    We are working on an SNP optimization scenario with transportation, procurement, storage and penalty costs. Our business is the trading industry (we procure and sell).
    We maintained an optimization profile with the linear and primal simplex algorithms. We took a sample scenario where our product has four suppliers with different transportation costs maintained in transportation lanes.
    In SDP94, the optimizer behaves differently:
    Case 1: When we run directly at destination location (product + destination) level, the optimization runs smoothly without picking any transportation lanes or costs. Subsequently, no purchase requisitions are created.
    Case 2: When we select all locations (product + source and destination) and run the optimization, the following errors come up, whose origin we are not able to identify:
    Error 1: Cost function 051MhWG07j6MwgLZXQcXSm not found
    Error 2: Conversion from STD to EA for product WDE-MATL1A not maintained
    Error 3: Error occurred when reading data
    We checked CCR; no errors were identified. Also, there is no UOM called STD. We couldn't understand why the optimizer is proposing an STD to EA conversion error.
    Can you please throw some light.
    Thanks in advance

    Hi Ugameswara Rao
    When you run the optimizer, you need to run it as a whole, including the entire network. So the first observation is standard behaviour.
    Regarding the second issue, it is a master data inconsistency, so please identify the concerned location product and maintain the cost accordingly (check the log to find the concerned location product).
    Regarding the error "Error occurred when reading data", please raise a separate thread, as the reasons for this are multiple. But try this first: run /sapapo/om17, the model consistency check, and correct the data model set. The reason for this could also be a software error, hence if the above doesn't help, please raise a separate thread with all details, such as the step in which this error occurs and the message log details.
    Thanks and Regards
    Suresh

  • Exception aggregation on non-cumulative KF - Aggregation issue in the query

    Hi Gurus,
    Can anyone tell me a solution for the below scenario? I am using the BW 3.5 front end.
    I have a non-cumulative KF coming from my stock cube and a pricing KF coming from my pricing cube. (Both cubes are in a multiprovider, and my query is on top of it.)
    I want to multiply both KFs to get a WSL Value CKF, but my query is not at the material level; it is at the plant level.
    So it is behaving like this, for example (remember my Qty is a non-cumulative KF):
                   QTY  PRC
    P1  M1      10     50
    P1  M2       0     25
    P1  M3      5      20
    My WSL value should be 600, but it is giving me 15 * 95, which is way too high.
    I have tried storing the QTY and PRC in two separate CKFs, setting the aggregation to before aggregation and then multiplying them, but it didn't work.
    I also tried to use exception aggregation, but in the BW 3.5 front end we don't have the 'Total' option that exists in BI 7.0.
    So, any other ideas, guys? Any responses would be appreciated.
    Thanks
    Jay.
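Jay's numbers make the problem concrete; as a plain-Python sketch (not BW code), the WSL value must be the sum of per-material Qty * Price, not total Qty times total Price:

```python
# Rows from the example: (plant, material, non-cumulative Qty, Price).
rows = [("P1", "M1", 10, 50), ("P1", "M2", 0, 25), ("P1", "M3", 5, 20)]

# Correct: multiply per material, then sum over the plant.
correct = sum(qty * prc for _, _, qty, prc in rows)   # 10*50 + 0*25 + 5*20 = 600

# What the query does at plant level: aggregate each KF first, then multiply.
total_qty = sum(r[2] for r in rows)                   # 15
total_prc = sum(r[3] for r in rows)                   # 95
wrong = total_qty * total_prc                         # 1425 -- "way too high"
print(correct, wrong)
```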

    I don't think you are able to solve this issue at the query level.
    This type of calculation should be done before aggregation, and this feature no longer exists in BI 7.0. No kind of exception aggregation will help here.
    It should be done either through a virtual KF (see below) or using the stock snapshot approach.
    The key figure QTY*PRC should be a virtual key figure. In this case you just need one cube (stock quantity) and pick up PRC at query run time.

  • Require Very Urgent Help on Aggregation Issue. Thanks in advance.

    Hi All,
    I am new to essbase.
    I have got an issue with aggregation in Essbase. I load data at level zero, and then when I aggregate using CALC DIM I do not get any values.
    The zero-level load being:
    Budget, Version, Levmbr(Entity,0), Levmbr(Accounts,0), NoRegion, NoLoc, NoMod, Year, Month.
    When I use the default calc or run CALC DIM for the above, no aggregation takes place at the parent level.
    Requirement:
    Values at the Version, Region, Location, Model, Year, Month, Budget level.
    Please advise.
    Thanks in advance.
    Bal
    Edited by: user11091956 on Mar 19, 2010 1:07 AM
    Edited by: user11091956 on Mar 19, 2010 1:10 AM

    Hi Bal,
    If you loaded without an error, and after that your default calc results in values that are not aggregated, then I can imagine only one way it can happen: through your outline consolidations.
    Check if the members at which the data is loaded have IGNORE or ~ as the consolidation operator.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Inspoke aggregation issue

    I created an infospoke with a BADI.
    Basically I am extracting data from a multiprovider to a flat file, and I need the data aggregated. In the BADI I do sum the data. However, I noticed in the monitor that when it runs, the data is written to the file for each data package, so even though I sum the data I will still have records with duplicate keys that are not aggregated. If the number of records extracted is less than the data package size, then I have no issue. But I have a lot of data.
    How do I fix this?

    Hi,
    You need to create a temporary internal table with the same structure as E_T_DATA_OUT, then transfer the raw data there, then remove all data from E_T_DATA_OUT, then loop over the temporary internal table and transfer the records into E_T_DATA_OUT via the COLLECT statement. This will aggregate (sum) the figures.
    Assign points if it helps...
    Thanks and regards,
    Raymond
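Raymond's COLLECT approach amounts to summing by key across all data packages before writing the file; a Python sketch of the same idea (the field names and figures are made up, the real fix is ABAP COLLECT in the infospoke BADI):

```python
# Buffer rows across data packages and sum by key, so the output is written
# once with no duplicate keys (COLLECT-like semantics).
from collections import defaultdict

packages = [
    [("MAT1", "PLANT1", 10), ("MAT2", "PLANT1", 5)],   # data package 1
    [("MAT1", "PLANT1", 7), ("MAT3", "PLANT2", 2)],    # data package 2
]

totals = defaultdict(int)   # key fields -> summed amount
for package in packages:
    for material, plant, amount in package:
        totals[(material, plant)] += amount

aggregated = [(m, p, amt) for (m, p), amt in sorted(totals.items())]
print(aggregated)   # the MAT1 rows from both packages collapse into one record
```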

  • SNP cost issue

    hi
    To shift a material from source location plant X to destination location DC Y, I incur a cost of amount Z. In what screens/fields do I declare the cost Z? To my knowledge it has to be in:
    /SAPAPO/TL (transportation lanes)
    the SNP cost directory
    the SNP1 view of MAT1
    Apart from these, can you think of any other screens?
    In essence, during the SNP optimizer run the cost factor has to be taken into account. How do I map this scenario?
    Experts, please mention just the screens / transaction codes; from that point I will try it out. Is "maintain SNP global parameters profile" in any way relevant to this scenario? It is a bit perplexing ... would you please provide a helping hand?

    Hi All,
    Thanks for your prompt response. I have started using the second option, but I am still facing the above issues. I don't see any issues in OM17 or CCR.
    Since my client belongs to the trading industry, we don't maintain any PPM or PDS. We are running with transportation, storage, procurement and penalty costs.
    I really don't understand why and how the system is throwing the STD to EA conversion error in the optimization log. I don't see any UOM called STD. Also, what is the source of the other errors? Can anyone throw some light?
    Thanks & regards

  • Aggregation Issue when we use Hierarchy InfoObject in the Bex Query.

    Hi All,
    I have created a BEx query having some characteristics and one hierarchy InfoObject in the rows, plus RKFs. I haven't used any exception aggregation objects in the RKF, but when I execute the query, the overall result shows exception aggregation based on the hierarchy object.
    My problem is briefly illustrated here.
    OrgUnitHierarchy     EmpID       RKF
    Root                          1                1
    RootA1                     1                1
    RootA2                     1                1
    Root                          2                1
    RootB1                     2                1
    RootB2                     2                1
    Root                          3                1
    RootC1                     3                1
    RootC2                     3                1
    Over all result                              3
    In the above example the sum of the RKF is 9, but it's showing only 3. When I connect this to a Crystal report, the sum of the RKF shows 9. Please help me understand which is the correct one and why it's not aggregating the child nodes.
    Is there any configuration needed to aggregate all the nodes of the hierarchy? Thanks for your support in advance.
    Regards,
    Shiva
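The two totals in Shiva's example come from two different readings of the same rows; a Python sketch (hypothetical, not BEx internals): each employee appears under several hierarchy nodes, so adding up the displayed rows counts each person three times, while the overall result counts each employee once.

```python
# Rows as displayed in the query: (hierarchy node, employee id, RKF).
rows = [
    ("Root", 1, 1), ("RootA1", 1, 1), ("RootA2", 1, 1),
    ("Root", 2, 1), ("RootB1", 2, 1), ("RootB2", 2, 1),
    ("Root", 3, 1), ("RootC1", 3, 1), ("RootC2", 3, 1),
]

row_sum = sum(rkf for _, _, rkf in rows)       # 9 -- what Crystal shows
distinct = len({emp for _, emp, _ in rows})    # 3 -- what the query's overall result shows
print(row_sum, distinct)
```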

    Hi,
    Is this related to BEx Analyzer or BEx Web Reporting? If so, then I would suggest posting the entry in the BEx Suite forum, as this forum is for the SAP Integration Kit from BusinessObjects.
    Ingo

  • DATA AGGREGATION ISSUE IN REPORT

    Hi,
    When we run the query selecting the version, the data is aggregating, and we don't know where it went wrong.
    Explanation:
    We already loaded data with versions for the year 2010. We started loading from April 2010 to March 2011, as 12 fiscal periods, with 53 versions. Here a version means a week (weeks 1 to 53 of the year), but we started loading from April, i.e. the 14th week of 2010, through March 2011. Up to here there is no problem with the data; everything matches in the report.
    Now the turn comes for loading data from April 2011. Again version 14 comes up, which is already in the system for April 2010.
    Now what is happening: when we load data for week 14 of 2011 (April), the data is aggregating, or somehow mixing up, with the existing data.
    So what we have done is add the calendar year to the filter, so that the user sees only the data for the version with the respective year.
    Even so, the data is aggregating with the previous year (2010) for the same version already in the system.
    Nothing is working out.
    Can you please suggest whether there is any problem in the back end, or whether we need to make any modelling changes?
    Until now, what we have done is delete the data from the cube and DSO for April 2010, version 14, and load the data for April 2011 with version 14, which is a totally new version in the system. So this keeps on repeating every time the data does not match.
    Now, for the month of May with version 20, we have to load, and the same problem should not repeat.
    Please help with your valuable suggestions in detail.
    This is a forecast summary report; we have data for the next 2 years. Planning is done for every 2 years from the current date.
    Any queries, please let me know.
    Edited by: afzal baig on May 11, 2011 4:00 PM

    Hi
    Is your data stored in a cube or a DSO? If it is a DSO, are version and period / calmonth key fields?
    What type of key figure are you using, and what is the aggregation rule for this key figure, in the InfoObject definition and in the query definition?
    Is there anything unusual in the definition of your version characteristic? Any compounding?
    Why did you not use 0CALWEEK?
    regards
    Cornelia
