Aggregate data

Hi All
I loaded data into an aggregate. Now I want to see the data in that aggregate. How do I see it?
thanks in advance.
reddy

Hi,
1. Right-click the cube -> select <i>Maintain Aggregates</i> -> in the screen that follows, select your aggregate -> Goto -> Aggregate Data (Shift+F9).
2. Execute transaction code LISTCUBE -> enter 'CUBE' and the aggregate name.
Regards,
Vikrant.

Similar Messages

  • Can I store only the aggregate data in OLAP cube

    Hi All,
    I know that OLAP cubes store the leaf data, build the aggregate data on top of it, and store both within the cube. I have huge amounts of data (billions of rows in my fact table and 6-8 dimension tables). Keeping the leaf data along with the aggregate data in the cube would make it too large to build.
    So I am thinking of storing only the aggregate data within the OLAP cube, with the leaf data still read from the relational tables, something like Hybrid OLAP.
    What I mean is:
    1. Create the dimensions and cube in AWM on 11g.
    2. Specify the levels at which I want the aggregate data to be calculated and stored in the OLAP cube.
    What I want is:
    1. Store only the aggregate data in the cube, with the leaf data still in the relational tables.
    2. When I read the cube and drill down to the leaf level, it should fetch the leaf-level data.
    Is this possible in Oracle OLAP? If yes, please provide some pointers.
    Thanks
    S

    First you should try storing and aggregating the data, to at least see whether the cube-loading time, query time, and AW size are within acceptable limits. 11g OLAP, especially 11gR2 OLAP, is very efficient on all three fronts.
    Regarding specifying levels, you can either use cost-based aggregation and pick the percentage that should be pre-aggregated, or use level-based aggregation and pick the levels from each dimension that should be pre-aggregated.
    Try these out before venturing into anything else. You will be surprised by the results. There are other ways to store the data in smaller multiple cubes and join those cubes through formulas. Make sure you don't mistake an attribute for a dimension.
    I am not sure what you mean by storing only aggregated data in OLAP cubes. You can simply do a SUM .. GROUP BY .. of the relational data before loading it into the cubes. For example, if your source data is at DAY level, you can SUM .. GROUP BY .. at MONTH level and then load the month-level data into OLAP cubes whose leaf level is the month level.
    The sql view (used by reporting tools) could then be a join between month-level "olap" data and "day-level" relational data. When you are drilling-down on the data, the sql view will pick up data from appropriate place.
    One more thing. Drill-Thru to detail data is a functionality of reporting tools also. If you are using OBIEE or Discoverer Plus OLAP, then you can design the reports in a way that after you reach the olap leaf-level, then it will take the user to a relational detail-report.
    These are all just quick suggestions (on a Friday evening). If possible, you should get Oracle OLAP Consulting Group help, who can come up with good design for all your reporting needs.
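The SUM .. GROUP BY .. pre-aggregation suggested above can be sketched quickly. This is a minimal illustration using Python's sqlite3 as a stand-in for the relational source; the table and column names are hypothetical, not from the thread:

```python
import sqlite3

# Hypothetical day-level fact table; in practice this would be the
# relational source feeding the OLAP cube load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_day (day TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales_day VALUES (?, ?, ?)",
    [("2011-01-01", "2011-01", 100.0),
     ("2011-01-02", "2011-01", 150.0),
     ("2011-02-01", "2011-02", 200.0)],
)

# Pre-aggregate to month level before loading the cube, so the cube's
# leaf level becomes MONTH rather than DAY.
rows = conn.execute(
    "SELECT month, SUM(amount) FROM sales_day GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2011-01', 250.0), ('2011-02', 200.0)]
```

The reporting view described above would then union or join this month-level cube data with the day-level relational rows for drill-down.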

  • How to aggregate data in SQL Query

    Hi,
    I have Table1 with field1 and field2. The combination of these fields forms the key of this table.
    Next I have Table2 with field3 and field4. field1 is the unique key for this table.
    My query is:
    select T2.field4||','||T1.field2 from T1 inner join T2 on T1.field1 = T2.field3;
    In the result I want to aggregate the data by T2.field4.
    How do I do that? Please help.
    Thanks in advance,
    Raja

    How to aggregate data in SQL Query
    By using aggregate functions and GROUP BY:
    SQL> select object_type, count(*), sum(decode(status,'VALID',0,1)) inv_obs
      2  from all_objects
      3  group by object_type;
    OBJECT_TYPE                     COUNT(*)              INV_OBS
    CONSUMER GROUP                         2                    0
    INDEX PARTITION                      970                    0
    TABLE SUBPARTITION                    14                    0
    SEQUENCE                             226                    0
    SCHEDULE                               1                    0
    TABLE PARTITION                      349                    0
    PROCEDURE                             21                    0
    OPERATOR                              57                    0
    WINDOW                                 2                    0
    PACKAGE                              313                    0
    PACKAGE BODY                          13                    0
    LIBRARY                               12                    0
    PROGRAM                                9                    0
    INDEX SUBPARTITION                   406                    0
    LOB                                    1                    0
    JAVA RESOURCE                        771                    0
    XML SCHEMA                            24                    0
    JOB CLASS                              1                    0
    TRIGGER                                1                    0
    TABLE                               2880                    0
    INDEX                               4102                    0
    SYNONYM                            20755                  140
    VIEW                                3807                   72
    FUNCTION                             226                    0
    WINDOW GROUP                           1                    0
    JAVA CLASS                         16393                    0
    INDEXTYPE                             10                    0
    CLUSTER                               10                    0
    TYPE                                1246                    0
    EVALUATION CONTEXT                     1                    0
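For the original question, where the joined field2 values should be collapsed per field4, Oracle would typically use LISTAGG (available from 11gR2). A minimal sketch of the same idea using Python's sqlite3 and its GROUP_CONCAT equivalent; the table and column names follow the thread, the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (field1 TEXT, field2 TEXT)")
conn.execute("CREATE TABLE t2 (field3 TEXT, field4 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("k1", "a"), ("k2", "b"), ("k3", "c")])
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [("k1", "G1"), ("k2", "G1"), ("k3", "G2")])

# GROUP_CONCAT plays the role of Oracle's LISTAGG: one output row per
# field4, with the matching field2 values collapsed into one string.
rows = conn.execute("""
    SELECT t2.field4, GROUP_CONCAT(t1.field2, ',')
    FROM t1 JOIN t2 ON t1.field1 = t2.field3
    GROUP BY t2.field4
    ORDER BY t2.field4
""").fetchall()
print(rows)
```

Note that GROUP_CONCAT does not guarantee element order; in Oracle, LISTAGG's WITHIN GROUP (ORDER BY ...) clause controls it.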

  • How to aggregate data with BI Publisher using an OBIEE analysis

    Hello,
    I'm new with BI Publisher, and I have a concern about the ability for BIP to aggregate data.
    I am creating a data source from an OBIEE analysis containing the columns "Year", "Month", and "Revenue".
    I want to use this source for 1 report containing several pages. One of the page has a simple table displaying only the Year and the Revenue.
    But I get as many rows as there are months in the year.
    And I cannot find any way to have the data aggregated inside my BIP table.
    Can someone help me finding the solution?
    Many thanks in advance

    Hi,
    Unfortunately, BIP doesn't aggregate anything the way the BI server does. It will always show data at the lowest level.
    If you use a query on the BI server, let the BI server do the aggregation and just remove the "Month" column. If you don't want to remove it from your OBI analysis, copy the logical SQL from the analysis into BIP, select OBIEE as a data source, and then remove Month from your query.
    Regards, Machiel
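The aggregation Machiel describes (dropping Month so that revenue rolls up to year level) can be sketched outside BIP as well. A minimal Python illustration with invented sample rows:

```python
from collections import defaultdict

# Hypothetical rows as returned by the OBIEE analysis:
# (year, month, revenue). BIP itself will not aggregate these, so the
# grouping must happen in the data source or any layer before rendering.
rows = [(2011, "Jan", 100.0), (2011, "Feb", 150.0), (2012, "Jan", 200.0)]

# Dropping the Month column and summing is exactly what removing
# "Month" from the logical SQL makes the BI server do.
revenue_by_year = defaultdict(float)
for year, _month, revenue in rows:
    revenue_by_year[year] += revenue

print(sorted(revenue_by_year.items()))  # [(2011, 250.0), (2012, 200.0)]
```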

  • Problem in displaying detail along with Aggregate data.

    Hi
    I am new to BIP and I am having a problem displaying detail as well as aggregate values at the same time.
    My data is like below
    Security Value
    S1 10     
    S2 20     
    S3 30     
    S3 40     
    S4 50     
    S5 60     
    S5 70     
    I want to display data in report as
    Security Value
    S1 10     
    S2 20     
    S3 30     
    S3 40     
    Total S3 70
    S4 50     
    S5 10     
    S5 70     
    Total S5 80
    I tried using <?for-each-group:G_2;./Security?> but I am getting the following output:
    Security Value
    S1 10     
    S2 20     
    S3 30     
    Total S3 70
    S4 50     
    S5 10     
    Total S5 80
    Template:
    <?for-each-group:G_2;./Security?> <?Security?>:<?Value?>
    <?if:count(current-group()/Security)>1?> Total <?Security?>:<?sum(current-group()/Value)?> <?end if?> <?end for-each-group?>
    The problem is I need to display detail as well as aggregate data. Please suggest.

    Hi Kavipriya
    Thanks for your response.
    I tried the code you provided, but I am getting a blank PDF report from the RTF template.
    Also, I didn't understand <?variable:GRP;G2?>; is this the declaration of a group variable or something?
    Below is my xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <!--Generated by Oracle BI Publisher 11.1.1.3.0-->
    <DATA_DS>
    <G_1>
    <SECURITY>S1</SECURITY>
    <VALUE1>10</VALUE1>
    </G_1>
    <G_1>
    <SECURITY>S2</SECURITY>
    <VALUE1>20</VALUE1>
    </G_1>
    <G_1>
    <SECURITY>S3</SECURITY>
    <VALUE1>30</VALUE1>
    </G_1>
    <G_1>
    <SECURITY>S3</SECURITY>
    <VALUE1>40</VALUE1>
    </G_1>
    <G_1>
    <SECURITY>S4</SECURITY>
    <VALUE1>50</VALUE1>
    </G_1>
    <G_1>
    <SECURITY>S4</SECURITY>
    <VALUE1>30</VALUE1>
    </G_1>
    </DATA_DS>
    And following is the code I used in RTF template
    <?variable:GRP;G1?>
    <?for-each-group:G1;./SECURITY?> <?xdoxslt:set_variable($_XDOCTX,'SEC',./SECURITY)?>
    <?for-each:$GRP[./SECURITY= xdoxslt:get_variable($_XDOCTX,'SEC')]?>
    <?SECURITY?>:<? VALUE1?> <?end for-each?>
    <?if:count(current-group()/SECURITY)>1?> Total <?SECURITY?>:<?sum(current-group()/VALUE1)?> <?end if?>
    <?end for-each-group?>
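The desired layout (every detail row, plus a total line only for groups with more than one row) can be sketched independently of the RTF syntax. A minimal Python illustration using the sample data from the thread (note the posted data and the desired output disagree on the S5 values; the data rows are used here):

```python
from itertools import groupby

# Detail rows from the thread, already sorted by security.
rows = [("S1", 10), ("S2", 20), ("S3", 30), ("S3", 40),
        ("S4", 50), ("S5", 60), ("S5", 70)]

report = []
for security, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    # Emit every detail row, then a total line only for groups
    # with more than one row -- the layout the poster is after.
    for sec, value in grp:
        report.append(f"{sec} {value}")
    if len(grp) > 1:
        report.append(f"Total {security} {sum(v for _, v in grp)}")

print(report)
```

The missing piece in the original template was an inner loop over the current group's rows before the total line, which is what the nested for-each in Kavipriya's suggestion provides.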

  • Use of semantic groups to aggregate data

    In a number of threads, e.g. "Semantic Groups in DTP", it is stated that you cannot use semantic groups to aggregate data. In others, there are statements that this is only for handling the error stack.
    This I find puzzling, as the SAP Help says:
    "Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.
    This setting is only relevant for DataStore objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected."
    The experience of one of my clients is that you can indeed use semantic groups, and you do not need to define an error DTP. (Change the error handling on the Update tab to, e.g., "1 Valid Records Update, No Reporting"; the key fields displayed after pressing the Semantic Groups button then become available.)
    Any comments? Have I misunderstood the point that the others have been making?
    matt

    Hi Matt,
    Semantic group is used to define the grouping of records by data package. If you select 0PLANT as a semantic group, each packet of data will contain all values of 0PLANT until the maximum record count is reached. Example: the package size is set to 50K. If you have 30K records of 0PLANT = 0002 and 22K of 0PLANT = 0003, the first packet will contain 52K records covering plants 0002 and 0003. The next packet will start with plant 0004 and will contain all records with 0004. If this doesn't reach 50K records, then plant 0005 will be included in packet 2.
    Regards,
    Dae Jin
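Dae Jin's packaging behaviour can be sketched as follows: a package closes only at a semantic-key boundary once the size threshold has been crossed, so records sharing a key are never split across packages. This is an illustrative model, not SAP's internal implementation; the function name and sizes are invented:

```python
# Close a package only at a key boundary after the threshold is crossed,
# so all records with the same semantic key stay in one package.
def build_packages(records, key, package_size):
    packages, current = [], []
    for rec in records:
        if current and len(current) >= package_size and key(rec) != key(current[-1]):
            packages.append(current)   # close only at a key boundary
            current = []
        current.append(rec)
    if current:
        packages.append(current)
    return packages

# Scaled-down version of the example: 3 records of plant 0002,
# 2 of 0003, 2 of 0004, with a package-size threshold of 4.
records = ([("0002", i) for i in range(3)]
           + [("0003", i) for i in range(2)]
           + [("0004", i) for i in range(2)])
pkgs = build_packages(records, key=lambda r: r[0], package_size=4)
print([len(p) for p in pkgs])  # [5, 2]
```

As in the 50K example, the first package overshoots the threshold (5 > 4) because plant 0003 could not be split, and plant 0004 starts a fresh package.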

  • Summary Vs. Aggregate Data

    Hi,
    Could anyone tell me the difference between summary and aggregate data?
    Thanks in advance,
    AJ

    yes.
    http://books.google.com/books?id=aIs8drBVdaoC&pg=PA355&dq=%22Before+you+can+query+SSAS+from+your+web+application%22&hl=en&sa=X&ei=z_HUUtD2L4juoASzuYGIBw&ved=0CEgQ6AEwAA#v=onepage&q=%22Before%20you%20can%20query%20SSAS%20from%20your%20web%20application%22&f=false
    Tatyana Yakushev [PredixionSoftware.com]

  • Not sure if this is the right place...but is there any aggregate data repository/dictionary out there?

    And what I mean is...Is there any aggregate data dictionary out there that tells you how long and what type a data column should be?
    For example...Medical Provider Name...what is the normal length for something like this? Provider ID? Tax ID? Address Line 1? City? State?...Etc...
    Is EDI X12 considered the Bible of data dictionaries? But you have to pay for that, don't you? Is there anything else I can reference?
    Thanks for your review and am hopeful for a reply.
    PSULionRP

    Provider ID is well defined:
    "National Provider Identifier Standard (NPI)
    The National Provider Identifier (NPI) is a Health Insurance Portability and Accountability Act (HIPAA) Administrative Simplification Standard. The NPI is a unique identification number for covered health care providers. Covered health care providers and
    all health plans and health care clearinghouses must use the NPIs in the administrative and financial transactions adopted under HIPAA. The NPI is a 10-position, intelligence-free numeric identifier (10-digit number). This means that the numbers do not carry
    other information about healthcare providers, such as the state in which they live or their medical specialty. The NPI must be used in lieu of legacy provider identifiers in the HIPAA standards transactions."
    LINK:
    http://www.cms.gov/Regulations-and-Guidance/HIPAA-Administrative-Simplification/NationalProvIdentStand/index.html?redirect=/NationalProvIdentStand/
    Kalman Toth Database & OLAP Architect

  • Aggregates Data

    Hi,
    I loaded the data a few days back and rolled it up to the aggregates. Now I need to delete the data from the cube and load fresh data. Do I also need to delete my aggregate data? If so, how do I proceed? Please advise.
    Regards
    Arunkumar

    This can be done in two ways.
    1. Delete the request directly from the Manage tab. This automatically deletes the data from the aggregates as well.
    2. Selective deletion. This also automatically deletes the data from the aggregates.
    So you just need to delete the data from the InfoCube, and the data will automatically be deleted from the aggregates.

  • 0CITYP_CODE doesn't aggregate data into report

    Hi gurus!
    I hope all is fine. I have a little problem: 0CITYP_CODE doesn't aggregate data. I think the problem is the relation to 0CITY_CODE. Can somebody help me, please?
    What mistake did I make?
    BR

    Hi!
    The problem was in SAP; the data was wrong. The city code equalled the city part code, so I couldn't aggregate the data, and we had to make a change to the DataSource.
    Thanks a lot for your help

  • Looking at aggregate data

    Hi Guys,
    I created an aggregate from a cube. Is there a way to see the aggregate data the same way I see cube or DSO data?
    Thanks

    Hi,
    The active aggregate that is filled with data can be used for reporting. If the aggregate contains data that is to be evaluated by a query, the query data is read automatically from the aggregate.
    For more information:
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/1a/f1fb411e255f24e10000000a1550b0/frameset.htm
    Regards, KP

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube receives a delta load from InfoSource 1 and a full load from InfoSource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management screen of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so that compression should take place only when the cube data is compressed. (I don't know whether this influences how the roll-up is done via the Adjust process type in a process chain: will the deselected checkbox really prevent compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?)
    Nevertheless, let's assume here, that aggregates will not be compressed until the compression will run on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not going to be compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by request ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during the next process chain run during Delete Overlapping Requests if new Full with RNR 111 will already be loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way as deletion of similar requests (based on the InfoPackage setup) when running on non-compressed requests.
    Since the newest 10 requests are not compressed and the only overlapping request is the full load with RNR 111, I assume it should simply delete the RNR 110 data from the aggregate by request ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates. Am I right? Please confirm or deny. Thanks!
    If Delete Overlapping Requests would still lead to rebuilding the aggregates, the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The Compress After Roll-up option is for the aggregates of the cube, not for the cube itself.
    When it is selected, the aggregates are compressed if and only if roll-up has been done on them. This does not affect compression of the cube, i.e. moving the data from the F to the E fact table.
    If it is deselected, that likewise does not affect compression of the cube, but the roll-up status of the aggregates is not checked before the aggregates are compressed.
    Will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?
    The checkbox has no influence even on a manual start of roll-up: compression of the aggregates will not start automatically after your roll-up; it is done along with the compression of the cube itself.
    As for the second question, I believe the aggregates will be deactivated when deleting an overlapping request only if that particular request has been rolled up. The same happens with manual deletion: if you need to delete a request that has been rolled up and whose aggregates are compressed, you have to deactivate the aggregates and refill them.
    In detail: as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request, and we can delete it without deactivating the aggregates.
    So in your case, I think there is no need to remove the step from the chain.
    Correct me if you find any issue.
    rgds,

  • How to aggregate data in SNP aggregated?

    Dear Expert,
    Now I want to aggregate the demand for products (A123, A124, and A224) at location K410 from two locations: 6610 and 6710.
    I have created a location hierarchy with root K410 and two leaves: 6610 and 6710.
    Now, how can I aggregate the demand for A123, A124, and A224 at K410 from 6610 and 6710?
    thanks

    Hello,
    If the hierarchy master data is correctly created, activated and assigned to the correct model, you can try aggregated planning in standard SNP aggregated planning book 9ASNPAGGR/SNPAGGR(1). Just load the data, and use 'Location Aggregation' function button.
    If you're new to SNP aggregated planning, please review the below online documents for more detailed information. It is very important that you have the correct master data settings and planning book settings.
    http://help.sap.com/saphelp_scm70/helpdata/EN/2c/c557e9e330cc46b8e440fb3999ca51/frameset.htm
    Best Regards,
    Ada

  • Aggregate Data on Existing Query

    Greetings, all. I am missing a concept here. I have the following code, and it works except for one thing. I am running a report against a trouble-ticket database to show what each engineer closed for the month, so I group it by person. I also want a summary of how many tickets each person closed, and I can't get that to work. Here is the code I am using. I know the summary is wrong, as it only shows me the total tickets closed for the month.
    I have the following ColdFusion query set up and can't figure out how to get summary/aggregate data on one part of this. I have it showing the closed tickets grouped by user, but I also want it to show a summary of how many tickets were closed by each user. The total value (1509) gets returned for the time period, but I want the values for the individual users.

    Dan
    Thanks for the reply. I do know how to write an aggregate to add up how many records were closed; I just don't know how to group the output of the count by employee. Could you give me a code snippet?
    Thanks for your initial reply. I have read some Ben Forta books in the past, and they were great.
    Erik
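For the per-employee count Erik is after, the usual answer is a GROUP BY on the employee column rather than a single overall COUNT. A minimal sketch using Python's sqlite3; the table and column names are invented stand-ins for the trouble-ticket database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (engineer TEXT, status TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?)",
                 [("erik", "closed"), ("erik", "closed"),
                  ("dan", "closed"), ("dan", "open")])

# Grouping by engineer yields the per-person closed count instead of
# one overall total for the period.
rows = conn.execute("""
    SELECT engineer, COUNT(*) AS closed_count
    FROM tickets
    WHERE status = 'closed'
    GROUP BY engineer
    ORDER BY engineer
""").fetchall()
print(rows)  # [('dan', 1), ('erik', 2)]
```

In ColdFusion the same shape works either in the database query itself or via a query-of-queries with GROUP BY.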

  • Query to aggregate data

    Hi, I need some help coming up with a query for department analysis. I am providing test cases below. My data currently comes from an Oracle 10g materialized view. What I need to do is give a date interval and find out how many people were in each dept (over all depts that exist in the view) at the beginning and end of the interval. If a person was in more than one dept during that period, only the last one is counted. I also need to know how many new people were added to the company.
    For example, using the data below and 13-AUG-04 through 30-AUG-04, I would get:
    Unit #Beg #End #New
    18 0 0 1
    33 1 1 0
    56 0 0 0
    70 1 1 0
    71 1 1 0
    The last two columns in the test table (First/Last) refer to the person's employment in the company. There will be for sure either Y/Y (if the person has one period of employment only) or both Y/N, N/Y.
    Where I'm having problems is to keep track of both current departments as well as new to the company. Someone can be new to the company (evidenced by First='Y'), but if he 's only moving to a different department then he's not new to the company.
    Thanks,
    Rob
    create table test_dept (
    dept number,
    head varchar2(20),
    wrkid number,
    name varchar2(50),
    unitid number,
    startdate date,
    enddate date,
    firstentry char(1),
    lastentry char(1)
    );
    insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',71,to_date('09-AUG-04','DD-MON-YY'),to_date('11-AUG-04','DD-MON-YY'),'Y','N');
    insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',71,to_date('11-AUG-04','DD-MON-YY'),to_date('04-OCT-04','DD-MON-YY'),'N','N');
    insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',33,to_date('04-OCT-04','DD-MON-YY'),to_date('05-OCT-04','DD-MON-YY'),'N','N');
    insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',33,to_date('05-OCT-04','DD-MON-YY'),to_date('19-APR-05','DD-MON-YY'),'N','Y');
    insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',71,to_date('09-AUG-04','DD-MON-YY'),to_date('11-AUG-04','DD-MON-YY'),'Y','N');
    insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',33,to_date('10-AUG-04','DD-MON-YY'),to_date('05-OCT-04','DD-MON-YY'),'N','N');
    insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',33,to_date('05-OCT-04','DD-MON-YY'),to_date('20-APR-05','DD-MON-YY'),'N','Y');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',70,to_date('11-AUG-04','DD-MON-YY'),to_date('29-OCT-04','DD-MON-YY'),'Y','N');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('19-OCT-04','DD-MON-YY'),to_date('25-OCT-04','DD-MON-YY'),'N','N');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('25-OCT-04','DD-MON-YY'),to_date('23-NOV-04','DD-MON-YY'),'N','N');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',70,to_date('23-NOV-04','DD-MON-YY'),to_date('27-JAN-05','DD-MON-YY'),'N','N');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('08-FEB-05','DD-MON-YY'),to_date('13-JUL-05','DD-MON-YY'),'N','N');
    insert into test_dept values(28022,'BABS',39220,'RUPERT, A',56,to_date('13-JUL-05','DD-MON-YY'),to_date('31-OCT-05','DD-MON-YY'),'N','Y');
    insert into test_dept values(20812,'JONES',10000,'B',18,to_date('15-AUG-04','DD-MON-YY'),to_date('29-AUG-04','DD-MON-YY'),'Y','Y');

    This?
    var d1 varchar2(100)
    var d2 varchar2(100)
    exec :d1 := '13-AUG-04'
    exec :d2 := '30-AUG-04'
    select unitid,
    sum( case when startdate <= to_date(:d1) and enddate >= to_date(:d1)
                and nextstart >= to_date(:d2) then 1 else 0 end ) beg#,
    sum( case when startdate <= to_date(:d2) and enddate >= to_date(:d2) then 1 else 0 end ) end#,
    sum( case when firstentry='Y' and startdate between to_date(:d1) and to_date(:d2) then 1 else 0 end ) new#
    from (
      select unitid, wrkid, name,
       startdate, enddate, firstentry,
       last_value(startdate) over (partition by wrkid order by startdate desc
          rows between unbounded preceding and 1 preceding) nextstart
      from test_dept
    )
    group by unitid
    /
    You should have test data where a dept has different beg and end numbers.
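The counting rules can also be checked procedurally. The sketch below restates them in Python over a few invented rows modelled on the test data (it ignores the "last department wins" tie-break for simplicity): count who is in each unit at the interval boundaries, and who is new within the interval:

```python
from datetime import date

# (unit, worker, start, end, first_entry). A worker is counted in a
# unit on a given day if an interval covers that day; "new" means the
# worker's first-ever interval starts inside the reporting window.
rows = [
    (71, "mary", date(2004, 8, 9),  date(2004, 8, 11), True),
    (33, "mary", date(2004, 8, 11), date(2004, 10, 4), False),
    (18, "bob",  date(2004, 8, 15), date(2004, 8, 29), True),
]
d1, d2 = date(2004, 8, 13), date(2004, 8, 30)

units = sorted({u for u, *_ in rows})

def in_unit(day, unit):
    # Headcount in `unit` on `day`.
    return sum(1 for u, _, s, e, _ in rows if u == unit and s <= day <= e)

def new_in(unit):
    # First-ever intervals starting inside [d1, d2].
    return sum(1 for u, _, s, _, first in rows
               if u == unit and first and d1 <= s <= d2)

report = {u: (in_unit(d1, u), in_unit(d2, u), new_in(u)) for u in units}
print(report)  # {18: (0, 0, 1), 33: (1, 1, 0), 71: (0, 0, 0)}
```

This reproduces the expected pattern from the thread for units 18, 33, and 71 (bob is new to unit 18 but not counted at either boundary; mary's move to 33 covers both dates).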
