Aggregate by Date

Hi,
Does anyone know how I can write a SELECT statement
using aggregate functions (MIN, MAX, COUNT, AVG, SUM)
with GROUP BY on a date interval (DAY, WEEK, MONTH, YEAR)?
Thanks...

I think no one answered your question because they'd like you to try to do some reading on your own.
-- Show bunny count born by year.
SELECT  TO_CHAR ( Birth_Date, 'YYYY' )  AS Birth_Year
    ,   SUM ( Number_Of_Bunnies )       AS Number_Of_Bunnies
FROM    someTable
GROUP BY TO_CHAR ( Birth_Date, 'YYYY' )
;
-- Show bunny count born by month/year.
SELECT  TO_CHAR ( Birth_Date, 'YYYY-MM' )  AS Birth_Month_Year
    ,   SUM ( Number_Of_Bunnies )          AS Number_Of_Bunnies
FROM    someTable
GROUP BY TO_CHAR ( Birth_Date, 'YYYY-MM' )
;
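For DAY and WEEK intervals, TRUNC works the same way. A sketch against the same hypothetical someTable (TRUNC with 'IW' truncates to the ISO week, which starts on Monday):
-- Show bunny count born by day.
SELECT  TRUNC ( Birth_Date )               AS Birth_Day
    ,   SUM ( Number_Of_Bunnies )          AS Number_Of_Bunnies
FROM    someTable
GROUP BY TRUNC ( Birth_Date )
;
-- Show bunny count born by ISO week.
SELECT  TRUNC ( Birth_Date, 'IW' )         AS Birth_Week_Start
    ,   SUM ( Number_Of_Bunnies )          AS Number_Of_Bunnies
FROM    someTable
GROUP BY TRUNC ( Birth_Date, 'IW' )
;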
Some places to start reading:
http://technet.oracle.com/docs/products/oracle9i/doc_library/901_doc/server.901/a90125/functions2.htm#81312
http://technet.oracle.com/docs/products/oracle9i/doc_library/901_doc/appdev.901/a89856/06_ora.htm#940
Good Luck,
Eric Kamradt

Similar Messages

  • Aggregate exported data in /CPMB/EXPORT_TD_TO_FILE

Is it possible to aggregate the result data in the chain /CPMB/EXPORT_TD_TO_FILE?
One of my packages uses this chain - it exports several fields from a BPC model. The model has 8 dimensions, but only 4 are used in the export process, so many combinations are repeated in the file.
Of course the sum of the values is correct, but it would be better to have a choice whether to aggregate the data or not.

    Hi Michal,
I just tested a custom process chain approach - first run script logic, then export to file (with the same scope).
The new chain was created as a copy of the standard chain.
The advanced script for the package was:
    PROMPT(MEASURELIST,%MEASURES%,"Please select measures")
    PROMPT(SELECTINPUT,,,,"%DIMS%")
    PROMPT(TRANSFORMATION,%TRANSFORMATION%,"Transformation file:",,,Import.xls)
    PROMPT(OUTFILE,,"Please enter an output file",Data files (*.txt)|*.txt|All files(*.*)|*.*)
    PROMPT(RADIOBUTTON,%ADDITIONINFO%,"Add other information(Environment,Model,User,Time)?",1,{"Yes","No"},{"1","0"})
    INFO(%DIMVALUE%,E=24)
    INFO(%TEMPNO1%,%INCREASENO%)
    INFO(%TEMPNO2%,%INCREASENO%)
    TASK(/CPMB/SCRIPT_LOGIC,SUSER,%USER%)
    TASK(/CPMB/SCRIPT_LOGIC,SAPPSET,%APPSET%)
    TASK(/CPMB/SCRIPT_LOGIC,SAPP,%APP%)
    TASK(/CPMB/SCRIPT_LOGIC,SELECTION,%SELECTION%)
    TASK(/CPMB/SCRIPT_LOGIC,LOGICFILENAME,PREPEXPORT.LGF)
    TASK(/CPMB/APPL_TD_SOURCE,SELECTION,%SELECTION%)
    TASK(/CPMB/APPL_TD_SOURCE,DIMENSIONVALUE,%DIMVALUE%)
    TASK(/CPMB/APPL_TD_SOURCE,MEASURES,%MEASURES%)
    TASK(/CPMB/APPL_TD_SOURCE,OUTPUTNO,%TEMPNO1%)
    TASK(/CPMB/EXPORT_TD_CONVERT,INPUTNO,%TEMPNO1%)
    TASK(/CPMB/EXPORT_TD_CONVERT,TRANSFORMATIONFILEPATH,%TRANSFORMATION%)
    TASK(/CPMB/EXPORT_TD_CONVERT,SUSER,%USER%)
    TASK(/CPMB/EXPORT_TD_CONVERT,SAPPSET,%APPSET%)
    TASK(/CPMB/EXPORT_TD_CONVERT,SAPP,%APP%)
    TASK(/CPMB/EXPORT_TD_CONVERT,OUTPUTNO,%TEMPNO2%)
    TASK(/CPMB/TD_FILE_TARGET,INPUTNO,%TEMPNO2%)
TASK(/CPMB/TD_FILE_TARGET,FULLFILENAME,%FILE%)
TASK(/CPMB/TD_FILE_TARGET,ADDITIONALINFO,%ADDITIONINFO%)
The scope prompt was filled only once.
The script successfully processed the scoped data and the processed data was then exported to the file.
    B.R. Vadim

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the command in a MaxL editor, it gives the following error.
Here's the MaxL I used (which I am pretty sure is correct), and the error it produces:
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
Does anyone have a clue why I am getting this error?
Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.
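For reference, an aggregate storage data export in MaxL generally looks like the sketch below; the application, database, password and file path here are hypothetical stand-ins, not the poster's actual statement:
/* minimal sketch: connect, export the ASO database, disconnect */
login admin identified by 'password' on 'Server2';
export database MyAPP.Finance data to data_file 'C:\exports\finance.txt';
logout;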

  • Best way to aggregate large data

    Hi,
    We load actual numbers and run aggregation monthly.
The data file grew from 400k lines to 1.4 million lines. The aggregation time grew proportionately and now takes 9 hours. It will continue growing.
    We are looking for a better way to aggregate data.
    Can you please help in improving performance significantly?
    Any possible solution will help: ASO cube and partitions, different script of aggregation, be creative.
    Thank you and best regards,
Some information on our environment and process:
    We aggregate using CALC DIM(dim1,dim2,...,dimN).
    Windows server 64bit
We are moving from 11.1.2.1 to 11.1.2.2
    Block size: 70,000 B
Dimensions (bold and underlined dimensions are aggregated):
Dimension        Type     Members   Sparse Members
Account          Dense       2523              676
Period           Dense         19               13
View             Dense          3                1
PnL view         Sparse        79               10
Currency         Sparse        16               14
Site             Sparse        31               31
Company          Sparse       271               78
ICP              Sparse       167              118
Cost center      Sparse       161              161
Product line     Sparse       250              250
Sale channels    Sparse       284              259
Scenario         Sparse        10               10
Version          Sparse        32               30
Year             Sparse         6                6

Yes, I have implemented ASO, though not in relation to Planning data; it has always been in relation to larger actual reporting requirements. In the new releases of Planning they are moving towards integrated ASO reporting cubes, so that where the Planning application has large volumes of data you can push data to an ASO cube to save on aggregation times. For me the problem with this is that in all my historical Planning applications there has always been a need to aggregate data as part of the calculation process, so the aggregations were always required within Planning, and having an ASO cube would not have really taken any time away.
So really the answer is yes, you can go down the ASO route, but having data aggregate in an ASO application would need to fit your functional requirements. The biggest question would be: can you do without aggregated data within your Planning application? Also, it's worth pointing out that even though you don't have to aggregate in an ASO application, it is still recommended to run aggregations on the base-level data; otherwise your users will start complaining about poorly performing reports. They can be quite slow, and if you have many users then this will only be worse. Aggregations in ASO are different, though. You can run aggregations in a number of different ways, but the end goal is to have aggregations that cover the most commonly run reporting combinations - so not aggregating everything, and therefore quicker to run. But more data will result in more time to run an aggregation.
In your post you mentioned that your actuals have grown, that the aggregations have grown with them, and that they will continue to grow. I don't know anything about your application, but is there a need to keep loading and aggregating all of your actuals each month? Why not load just the current year's actuals (or the periods of actuals that are changing) each month and only aggregate those? Are all of your actuals really changing all the time, requiring you to aggregate all of the data each time? Normally I would only load the actuals required to support the planning and forecasting exercise. Any previous years' data (actuals, old forecasts, budgets, etc.) I would archive, keeping an aggregated static copy of the application.
Also, you mentioned that you had CALCPARALLEL set to 3 and then moved to 7. But did you have CALCTASKDIMS set at all? The reason I ask is that if you didn't, your parallel calc likely gave you no improvement at all. If you don't set it to the optimal value, then by default Essbase will try to parallelize using only the last dimension (in your case Year), so it is not really breaking up the calc (this is a very common mistake when CALCPARALLEL is used). Setting this value in older versions of Essbase is a bit of trial and error, but the rule of thumb is that it should be set to at least the last sparse aggregating dimension to get any value. So in your case the minimum value should be CALCTASKDIMS 4, but it's worth trying higher, so try 4, then 5, then 6. As I say, trial and error. But I will say one thing: by getting your parallel calc right you will save much more than 10% on aggregations. You say you are moving to 11.1.2.2, so I assume you haven't run this aggregation on that environment yet? In that release the CALCTASKDIMS setting is not required; Essbase will calculate the best value for you, so you only need to set CALCPARALLEL.
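For illustration, those settings sit at the top of the aggregation calc script, roughly like the sketch below. Which six dimensions actually aggregate was marked by formatting in the original post, so the dimension list here is hypothetical:
/* sketch: enable parallel calculation and set the task dimensions */
SET CALCPARALLEL 7;
SET CALCTASKDIMS 4;   /* try 4, then 5, then 6, as suggested above */
/* aggregate the sparse dimensions (hypothetical list) */
CALC DIM ("PnL view", "Currency", "Site", "Company", "ICP", "Cost center");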
Is it possible for you to post your script? Also, I noticed in your original post that for Company and ICP the member numbers on the right are significantly smaller than the numbers on the left - why is this? Do you have dynamic members in those dimensions?
I will say 6 aggregating dimensions is always challenging, but 9 hours does sound a little long to simply aggregate actuals, even for 1.4 million records.

  • Can we download aggregate's data through open hub?

    Hi,
Can we download cube aggregate data to a file or an InfoSpoke? If yes, please send me the detailed steps.
    Ritika

    Hi Ritika,
There is no straightforward way of extracting data from the aggregate tables, as an OHD can only be built on the following object types: DataSource, InfoSource, DSO, InfoObject and InfoCube.
So you can give it a try by creating a generic DataSource on the desired aggregate table, and use it in an Open Hub Destination.
    Hope this helps,
    Regards,
    Umesh

  • Creating Aggregate--Master data not showing:

In a cube, following this instruction I get the table below:
Go to RSA1 > InfoCube > Manage > select the InfoCube > click Contents > select the tab "Field selection for output" > Select All > Execute > Execute.
CustID   CustID  MatNo   MatNo  SalesRepID  SalesRepID  SalesRegion  SalesRegion  SalesOff  SalesOff
cust001  2       mat001  2      srep01      2           (blank)      0            (blank)   0
cust002  3       mat002  3      srep02      3           (blank)      0            (blank)   0
cust002  3       mat003  4      srep02      3           (blank)      0            (blank)   0
cust003  4       mat003  4      srep03      4           (blank)      0            (blank)   0
cust004  5       mat004  5      srep04      5           (blank)      0            (blank)   0
cust004  5       mat005  6      srep04      5           (blank)      0            (blank)   0
I have already verified that both Sales Region and Sales Office have master data, but browsing the cube gives the figure above, showing Sales Region and Sales Office as blank in the first column and 0 in the second.
The master data for all these characteristics is already active.
a) What can I do to fill the Sales Region and Sales Office columns? That may be the solution to the problem in b) below.
b) I have added three InfoObjects to a new aggregate that I created. For Sales Rep, Sales Region and Sales Office, maintaining master data indicates that they all have master data, and each of these InfoObjects has been activated; yet while creating the aggregate, attempts to assign a Fixed Value to each of these InfoObjects work only for Sales Rep, i.e. only Sales Rep showed its master data. Why are the other two InfoObjects not showing their master data?
What do I need to do to see the master data of these two InfoObjects in the aggregate maintenance screen, just like the Sales Rep InfoObject?

    <u>What I did based on your advice:</u>
Thanks for the hints; it looks like we are on to something.
Under the InfoSource tree, I selected the InfoSource on which the cube is based. Under this InfoSource, I selected "Communication Structure", right-clicked on the blank space in the InfoObject column, selected "Choose Possible Entries", selected Sales Office, and then used the "Choose" option in the context menu to populate the row with the details. The same was repeated for Sales Region.
Under "Transfer Structure/Transfer Rules", the same process was used to insert Sales Office and Sales Region in the Transfer Structure and Transfer Rules tabs, i.e. the field-to-InfoObject mapping was maintained in the transfer rules.
    There was a RED X, but after I selected “Propose Transfer Rules”, the RED X went away.
<u>Result of the above activity:</u>
I thought I was good to go, but on checking the aggregate maintenance options, the values for these two InfoObjects were still not available.
Now back to the cube: under the cube, on the InfoSource beneath it, I right-clicked and chose "Update Transfer Rules" but got a message that
    “due to changes in other objects, this update rule was set to ‘inactive’
    …Do you want to reactivate?”
    I chose Yes but got an error:
    “IC=IC_D  IS=IS_D error when checking the update rules”
It is basically stopping me from reactivating the update rules, so what is the workaround at this point?

  • Aggregate text data of multiple runs

    Our system test data is saved as text.
    Sample text Run file(simplified):
    "Test 1 of 100
    Pass: reading 1 XX
    Pass: reading 2 XX
    Pass: reading 120 XX
    End of Test 1"
I want to read in multiple files and generate histograms for each of the readings. My problem is that there are 120 readings per test, and they have very little common structure from reading to reading.
    Typical readings:
    Pass: V/UHF Reference Downconverter Synth cal tone amplitude is 1.5 dB from nominal @ 35 MHz in narrow band
    Pass: UHF Antenna Sample channel element 1 cal tone amplitude is -42.1 dBm (nominal -40 dBm)"
The runs are generally in order, but I need to parse each line one at a time, pull the value(s) from the line, and assign them to the proper 2D array. I'll end up with an array with 120 rows (readings) and as many columns as tests (hundreds). Once I have the 2D array, the histograms are easy.
I have two approaches:
1. Parse each line through a series of VIs, each specific to a single reading. (120 VIs; that's ugly.)
2. A large VI (attached) that parses a section of the test at a time. It only gets about 15 readings.
Neither of these scales well from the current situation of watching only a few problem readings to generating distributions of them all.
    Can anyone point me in the right direction?
    Norm
    Attachments:
Diag Txt Extract Data v4.vi (42 KB)

    I want to read in multiple files and generate histograms for each of the readings. My problem is there are 120 readings per test and they have very little common structure from reading to reading
Bullseye! There is no cheap solution to poor architecture. The test data should have been consistent by design. Lacking that, either approach you offered will work, and there is not a lot you can do about finding an elegant solution. It is a pig's ear and it will stay a pig's ear - no silk purse for you today.
    Jeff

  • Aggregate inventory data from SCCM and Intune devices

    Hi,
We currently have a large SCCM 2012 installation and are looking at Intune to manage our non-domain Windows devices. I believe the Intune client gathers hardware/software inventory. Is it possible to report, in SCCM, on this data from both sources at once?
e.g. If for licensing purposes I needed a report of all machines with software title X installed, could this cover devices discovered by both technologies?
    Thanks

ConfigMgr can manage non-domain devices just fine -- it truly doesn't care about the domain membership of the managed systems. If instead of domain-joined you actually mean remote, then the Internet Based Client Management (IBCM) feature set of ConfigMgr is the preferred choice.
Ultimately, no: ConfigMgr has no way to gather or collect information from systems it does not manage, but there is no need for it to, because it can directly manage any Windows system without Intune -- MDM does require Intune and the Intune connector, however.
    Jason | http://blog.configmgrftw.com

  • Aggregate within date but repeating codes

    Hi; I'm trying to write something that is more sequential in Oracle SQL- a SQL solution would be preferred, but a PL/SQL block would also be OK.
    I have data like this:
    ID, service_date, procedure_code
    001, 1/1/2009, 90801
    001, 1/2/2009, 90801
    001, 1/3/2009, 90801
    001, 1/4/2009, 90802
    001, 1/5/2009, 90801
    001, 1/5/2009, 90803
    002, 1/1/2009, 90840
    002, 1/2/2009, 90840
    002, 1/3/2009, 90801
What I want is the counts aggregated; but note that the same procedure code can repeat later on, and I want that to be a separate group. So the result wanted is like this:
    ID, first_service_date, last_service_date, procedure_code, count
    001,1/1/2009, 1/3/2009, 90801, 3
001,1/4/2009, 1/4/2009, 90802, 1
    001,1/5/2009, 1/5/2009, 90801, 1
    002,1/1/2009, 1/2/2009, 90840, 2
    002,1/3/2009, 1/3/2009, 90801, 1
We are trying to get at how many of one procedure code people have before they go to another, but we don't want to do a plain GROUP BY, since the same procedure code can appear again later. I'm a statistical programmer (more SAS), and can do base Oracle SQL fairly well, but I am not sure how to handle this with Oracle SQL, or whether I have to go to a block with a cursor to do it. Suggestions would be appreciated!
    -ML

    Hi,
    Using only 1 sub-query:
WITH    got_grp_id      AS
(
    SELECT  id, service_date, procedure_code
    ,       ROW_NUMBER () OVER ( PARTITION BY  id
                                 ORDER BY      service_date
                                 ,             procedure_code     -- See note below
                               )
          - ROW_NUMBER () OVER ( PARTITION BY  id
                                 ,             procedure_code
                                 ORDER BY      service_date
                               )          AS grp_id
    FROM    t
--  WHERE   ...     -- If you need any filtering, put it here
)
SELECT    id
,         MIN (service_date)     AS first_service_date
,         MAX (service_date)     AS last_service_date
,         procedure_code
,         COUNT (*)              AS cnt
FROM      got_grp_id
GROUP BY  id
,         procedure_code
,         grp_id
ORDER BY  id
,         first_service_date
,         procedure_code
;
How do you want to treat multiple rows with the same id and service_date? For example, in the sample data, id='001' has 2 procedure_codes, 90801 and 90803, with exactly the same service_date. The query above treats the lower procedure_code as if it were slightly earlier than the higher procedure_code.
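To try the query, the sample data from the post can be loaded like this (a sketch; the table name t matches the query above):
CREATE TABLE t (
    id              VARCHAR2 (3),
    service_date    DATE,
    procedure_code  NUMBER
);
INSERT INTO t VALUES ('001', DATE '2009-01-01', 90801);
INSERT INTO t VALUES ('001', DATE '2009-01-02', 90801);
INSERT INTO t VALUES ('001', DATE '2009-01-03', 90801);
INSERT INTO t VALUES ('001', DATE '2009-01-04', 90802);
INSERT INTO t VALUES ('001', DATE '2009-01-05', 90801);
INSERT INTO t VALUES ('001', DATE '2009-01-05', 90803);
INSERT INTO t VALUES ('002', DATE '2009-01-01', 90840);
INSERT INTO t VALUES ('002', DATE '2009-01-02', 90840);
INSERT INTO t VALUES ('002', DATE '2009-01-03', 90801);
COMMIT;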
ml23 wrote:
What I want is the counts aggregated; but note that the same procedure code can repeat later on, and I want that to be a separate group. So the result wanted is like this:
ID, first_service_date, last_service_date, procedure_code, count
001,1/1/2009, 1/3/2009, 90801, 3
001,1/4/2009, 1/4/2009, 90802, 1
001,1/5/2009, 1/5/2009, 90801, 1
002,1/1/2009, 1/2/2009, 90840, 2
002,1/3/2009, 1/3/2009, 90801, 1
Like Solomon's solution, this includes the output row:
ID  FIRST_SERVICE_DATE   LAST_SERVICE_DATE    PROCEDURE_CODE        CNT
001 05-Jan-2009 00:00:00 05-Jan-2009 00:00:00          90803          1
which you said you didn't want. Why don't you want that row? Is it because it has the same service_date as another row with the same id, that is
ID  FIRST_SERVICE_DATE   LAST_SERVICE_DATE    PROCEDURE_CODE        CNT
001 05-Jan-2009 00:00:00 05-Jan-2009 00:00:00          90801          1
? How do you decide which one to display, and which one to exclude?

  • Aggregate list data from multiple subsites to parent site

    I am working on implementing a project management site. I simply have a site collection with multiple subsites (each subsite is a unique project) that all have the same list named "Project Status" which includes project health and comments.
    I want to rollup only the most recently added item from the Project Status list from each subsite into the site collection parent site main page.
    For example:
    Project Portfolio Status
    Project 1 - Green - <comments>
    Project 2 - Yellow - <comments>
    Project 3 - Red - <comments>
    Can this be done using OOB tools? I know Bamboo Solutions has a product that does something like this, but it's $1000.

You can use the Content Search Web Part (CSWP) in SharePoint 2013. There are many ways you can use the CSWP and its query filters; you can view them here.
A query such as the one below -
path:"https://YourSiteCollection/SubSite*" ListID:xxxxxx-9511-4746-xxxx-E12BC81ECCA9 ListID:5xxxxC1B4-EE4D-4xxxx-BC5B-032EB7D03E09 ListID:xxxxE18-xxxx-4C3C-xxxx-AC14EFBB2A12 -Filename:AllItems.aspx
- will give a result like the image below.
    Srini Sistla Twitter: @srinisistla Blog: http://blog.srinisistla.com
Thank you. I have been working on my query and have some good results, but I need to add some more parameters to my query in order to block/filter the following two items that show up in the results:
    Item 1 - .../Add Status Report.aspx
    Item 2 - .../AllItems.aspx
*Note: I also need to block/filter all items in the list EXCEPT for the single most recent item.
    My current query is:
    path:"<subsite url>" ListID:<list id> -Filename:<view name>.aspx
    Where can I find/read about other parameters that I can use to block out the other items? Thanks!
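One way to handle the first part is to extend the query with additional negative Filename filters, reusing the -Filename operator from the answer above (an untested sketch; the quoting for a name containing spaces is an assumption):
path:"<subsite url>" ListID:<list id> -Filename:AllItems.aspx -Filename:"Add Status Report.aspx"
The "most recent item only" requirement is not really a query filter; it is usually approached by sorting the CSWP results by date and limiting the number of items shown, though getting exactly one item per subsite may need a per-subsite query or grouping.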

How to delete aggregated data in a cube without deleting the aggregates?

    Hi Experts,
How to delete aggregated data in a cube without deleting the aggregates?
    Regards
    Alok Kashyap

    Hi,
You can deactivate the aggregate. The data will be deleted but the structure will remain.
If you switch off an aggregate, it won't be identified by the OLAP processor; the report will fetch the data directly from the cube. Switching off an aggregate won't delete any data, but temporarily the aggregate will not be available, as if it were not built on the InfoCube. Reporting is not possible on switched-off aggregates. The definition of the aggregate is not deleted.
You can temporarily switch off an aggregate to check whether you need to use it. An aggregate that is switched off is not used when a query is executed. This aggregate will still hold data from the previous loads. If the aggregate is switched off it won't be available for reporting, but data can still be rolled up into it.
If you deactivate an aggregate, the data is deleted from the aggregate but the aggregate structure remains the same.
The system deletes all the data and database tables of the aggregate. The definition of the aggregate is not deleted.
Later, when you need the aggregate once again, you have to build it up again from scratch.
    Hope this helps.
    Thanks,
    JituK

  • How to aggregate data in SQL Query

    Hi,
I have Table1 with field1 and field2. The combination of these fields forms the key of this table.
Next I have Table2 with field3 and field4. field1 is the unique key for this table.
    My query is:
    select T2.field4||','||T1.field2 from T1 inner join T2 on T1.field1 = T2.field3;
In the result I want to aggregate the data by T2.field4.
How do I do that? Please help.
    Thanks in advance,
    Raja

How to aggregate data in SQL Query
By using aggregate functions and GROUP BY:
    SQL> select object_type, count(*), sum(decode(status,'VALID',0,1)) inv_obs
      2  from all_objects
      3  group by object_type;
    OBJECT_TYPE                     COUNT(*)              INV_OBS
    CONSUMER GROUP                         2                    0
    INDEX PARTITION                      970                    0
    TABLE SUBPARTITION                    14                    0
    SEQUENCE                             226                    0
    SCHEDULE                               1                    0
    TABLE PARTITION                      349                    0
    PROCEDURE                             21                    0
    OPERATOR                              57                    0
    WINDOW                                 2                    0
    PACKAGE                              313                    0
    PACKAGE BODY                          13                    0
    LIBRARY                               12                    0
    PROGRAM                                9                    0
    INDEX SUBPARTITION                   406                    0
    LOB                                    1                    0
    JAVA RESOURCE                        771                    0
    XML SCHEMA                            24                    0
    JOB CLASS                              1                    0
    TRIGGER                                1                    0
    TABLE                               2880                    0
    INDEX                               4102                    0
    SYNONYM                            20755                  140
    VIEW                                3807                   72
    FUNCTION                             226                    0
    WINDOW GROUP                           1                    0
    JAVA CLASS                         16393                    0
    INDEXTYPE                             10                    0
    CLUSTER                               10                    0
    TYPE                                1246                    0
    EVALUATION CONTEXT                     1                    0
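Applied to the original join, the same pattern might look like the sketch below (counting rows per field4 is an assumption, since the post doesn't say which aggregate is wanted):
SQL> select T2.field4, count(*) as cnt
  2  from T1 inner join T2 on T1.field1 = T2.field3
  3  group by T2.field4;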

  • Report data is from Cube or Aggregates ??

    Hello Friends,
    Can any one please tell.
The data in the BW report is coming from a cube on which aggregates are built. Data is loaded into the cube daily, but the last roll-up of the aggregates was done one month ago; since then, no roll-up has taken place.
When I run the report now, which data will I see: only the rolled-up data, which is one month old, or the freshly updated data in the cube?
    Please tell me in detail.
    Thanks in advance..
    Tony

Hi Tony,
If a request has not been rolled up, it will not be available for reporting... check the roll-up flag in the Manage screen of the cube.
So the data is coming from the aggregates, if your query can access them (i.e. the drilldown data is available in the aggregates)... otherwise from the cube.
But there will be data sync between the cube and the aggregates...
Also, you can check RSRT ("display aggregates found") to see whether your query is accessing the aggregates or not.
Let me know if you have more doubts.
Gaurav

  • Looking at aggregate data

    Hi Guys,
I created an aggregate from a cube. Is there a way to see the aggregate data, the same way I see cube or DSO data?
    Thanks

    Hi,
    The active aggregate that is filled with data can be used for reporting. If the aggregate contains data that is to be evaluated by a query, the query data is read automatically from the aggregate.
For more information:
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/1a/f1fb411e255f24e10000000a1550b0/frameset.htm
Regards, KP

  • Unable to consolidate data from two DSOs into an InfoCube for reporting.

    Hello Experts
    I am working on BW 7.0 to create BEx Report as below:
This report will have data coming from two different sources: some data from the COPA DSO [such as Customer Number, Product Hierarchy1, Product Hierarchy2, Product Hierarchy3, Product Hierarchy4, Product Hierarchy5, Product Hierarchy6 and a few other key figures], and the rest [such as Product Hierarchy, Reference Document, Condition Type (both active & inactive), Condition Value and a few other key figures] from another DSO (ZSD_DS18), which is a copy of the BCT DSO (0SD_O06). I've chosen this DSO because it is the BCT DSO used to store data from the standard extractor 2LIS_13_VDKON.
    I have successfully extracted the data from 2LIS_13_VDKON (includes PROD_HIER but not Customer Number) and loaded into a DSO (ZSD_D17).
    All the testing is done using only one Sales Document No (VBELN).
The first test I tried: I created an InfoCube, loaded data from ZCOPA_01 and ZSD_DS18, and when LISTCUBE was run on this InfoCube, the data came up in two different lines, which is not very meaningful.
Therefore, I created another DSO (ZSD_DS17) to consolidate the data from ZCOPA_01 & ZSD_DS18, establishing mappings between some of the common characteristics as below:
    ZCOPA_01                    ZSD_DS18
    0REFER_DOC  <->        0BILL_NUM
    0REFER_ITM    <->        0BILL_ITEM
    0ME_ORDER    <->        0DOC_NUMBER
    0ME_ITEM        <->        0S_ORD_ITEM
    51 records are loaded from ZSD_DS18 into ZSD_DS17 and 4 records are loaded from ZCOPA_01 into ZSD_DS17 for a particular Sales Document Number.
When I use a write-optimized DSO, the data comes in just one line, but it shows only 4 lines of aggregated data, which is as expected since the W/O DSO aggregates the data. However, when I use a standard DSO, the data again splits up into many lines.
Is there something I am missing in the data model design, or does this call for some ABAP, and if so, where? Or should I talk to the functional lead and enhance the standard extractor? Even if I do that, I would still have to bring in those key figures from ZCOPA_01 for my reporting.
    Thank you very much in advance and your help is appreciated.
    Thanks,
    Chandu

In your (current) InfoCube setup, you could work with "constant selection" on the key figures:
for the COPA key figures, you'll need to add product hierarchy to the key figures with a selection of # and the constant selection flag set;
for the SD key figures, you'll have to do the same for customer & the product hierarchy levels (instead of product hierarchy).
