Cumulative Query

Hi,
I am writing logic to fetch and display a summary of transactions between two given dates. The required columns are shown below (please paste into Excel to display correctly). I have used an analytic function to compute the fund-wise cumulative units.
I now need to display the Total Cumulative Value, i.e. the sum, across all funds, of the fund-wise cumulative units multiplied by that fund's NAV as on the transaction date.
Need assistance on the same.
Regards
Date of Transaction | Nature of Transaction | Type of Fund | Number of Units | NAV (Rs.) | Cr (Rs.) | Dr (Rs.) | Fund Wise Cumulative Units | Fund Wise Cumulative Value (Rs.) | Cumulative Value (Rs.)
23/01/2008     Contribution     Bond     421894.645     11.8513     5,000,000.00          421894.645     5,000,000.00     5,000,000.01          
23/01/2008     Initial Allocation     Bond     4218.946     11.8513     50,000.00          426113.591     5,050,000.00     5,050,000.00          
10/06/2008     Claims     Bond     168571.928     12.0707          2,034,781.17     257541.663     3,108,708.15     3,108,708.15          
27/06/2008     Contribution     Bond     377919.73     11.9073     4,500,003.60          635461.393     7,566,629.44     7,566,629.44          
27/06/2008     Initial Allocation     Bond     3779.198     11.9073     45,000.04          639240.591     7,611,629.48     7,611,629.49          
27/06/2008     Contribution     Money Market     483774.085     13.0226     6,299,996.40          483774.085     6,299,996.40     13,911,625.89          
27/06/2008     Initial Allocation     Money Market     4837.741     13.0226     62,999.96          488611.826     6,362,996.36     13,974,625.85          
05/07/2008     Claims     Money Market     33949.336     13.0415          442,750.27     454662.489     5,929,480.85     13,534,078.69          11.8963
28/07/2008     Claims     Money Market     23274.276     13.1736          306,606.00     431388.214     5,682,935.77     13,332,344.38          11.9664
25/08/2008     Claims     Money Market     31151.175     13.3189          414,899.39     400237.038     5,330,717.09     13,208,013.36          12.0174
12/09/2008     Contribution     Money Market     119540.367     13.3846     1,600,000.00          519777.405     6,957,012.66     14,683,194.06          12.0865
12/09/2008     Initial Allocation     Money Market     1195.404     13.3846     16,000.00          520972.809     6,973,012.66     14,699,194.06          12.0865
15/09/2008     Claims     Money Market     13437.455     13.3946          179,989.33     507535.355     6,798,233.06     14,529,720.17          12.0948
03/10/2008     Claims     Money Market     34659.451     13.4604          466,530.08     472875.903     6,365,098.81     14,128,867.55          12.1453
12/11/2008     Claims     Money Market     48728.777     13.6749          666,361.15     424147.126     5,800,169.54     13,599,224.36          12.2005
16/12/2008     Claims     Money Market     35102.913     13.9264          488,857.20     389044.214     5,417,985.34     13,885,877.60          13.2468
15/01/2009     Contribution     Bond     369332.377     13.8087     5,100,000.00          1008572.968     13,927,081.54     19,399,610.88     Money Market     14.0666
15/01/2009     Initial Allocation     Bond     3693.324     13.8087     51,000.00          1012266.292     13,978,081.54     19,450,610.89          14.0666
21/01/2009     Claims     Money Market     71451.717     14.2494          1,018,144.10     317592.497     4,525,502.52     16,460,527.02     Bond     11.7904
23/01/2009     Bonus Allocation     Bond     2849.454     13.7493     39,178.00          1015115.746     13,957,130.93     18,497,496.78     Money Market     14.2962
05/03/2009     Claims     Money Market     40448.383     14.5089          586,861.55     277144.113     4,021,056.22     18,201,106.57     Bond     13.9689
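
Reading the table, the last column appears to be the sum, across all funds, of each fund's cumulative units multiplied by that fund's NAV on the transaction date; the trailing figure on some rows looks like the other fund's NAV on that date. As a worked check against the 05/07/2008 row:
Bond: 639,240.591 units * 11.8963 = 7,604,597.84 Rs.
Money Market: 454,662.489 units * 13.0415 = 5,929,480.85 Rs.
Total Cumulative Value = 7,604,597.84 + 5,929,480.85 = 13,534,078.69 Rs.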
Query
SELECT trdt.eff_dt AS DateOfTransaction
      ,trcd.tr_cd  AS trcd
      -- map transaction codes to a readable nature-of-transaction label
      ,' '||TRIM(DECODE(trcd.tr_cd, '1000','Contribution'
                                  , '1001','Contribution'
                                  , '1004','Contribution'
                                  , '1901','Switch From'
                                  , '1902','Switch To'
                                  , '1903','Claims'
                                  , '1904','Admin Charges'
                                  , '1905','Bonus Allocation'
                                  , trcd.tr_cd_desc)) AS Nature_of_Trans
      -- ,' '||SUBSTR(fndd.long_fd_nm, INSTR(fndd.long_fd_nm,'GROUP')+7) AS typ_of_fund
      -- fund name taken from the user-defined labels
      ,(SELECT uddt.user_data
          FROM user_def_labels udlb
              ,user_def_data   uddt
         WHERE UPPER(udlb.user_label_alias) = UPPER('FUNDNAME')
           AND udlb.udlb_key = uddt.udlb_key
           AND uddt.ref_key  = fndd.fndd_key) AS typ_of_fund
      ,ROUND(ABS(trdt.unit_ct), 3) AS trunitct
      -- NAV of this row's fund on the transaction date
      ,(SELECT ROUND(uvda.unit_value, 4)
          FROM uval_detl_act  uvda
              ,unit_values_ii uvls
              ,uval_types     uvtp
           -- ,bat_grp        btgp
         WHERE uvda.uvls_key    = uvls.uvls_key
           AND uvda.rec_stat_cd = '0'
           AND uvda.uvtp_key    = uvtp.uvtp_key
           AND uvtp.typ_cd      = '01'
           AND uvls.case_key    IS NULL
           AND uvls.fd_desc_id  = trdt.fd_desc_id
           -- AND btgp.dflt_cyc_dt_cd = '1'
           AND uvls.calendar_dt = trdt.eff_dt
           AND ROWNUM = 1) AS trn_unit_price
      ,TO_NUMBER(DECODE(SIGN(trdt.amt), -1, '', trdt.amt), '999999999.99')      AS CR
      ,ABS(TO_NUMBER(DECODE(SIGN(trdt.amt), +1, '', trdt.amt), '999999999.99')) AS DB
      -- fund-wise running total of units
      ,SUM(trdt.unit_ct) OVER (PARTITION BY trdt.fd_desc_id
                               ORDER BY trdt.eff_dt, trdt.fd_desc_id, trdt.tdtl_key) AS cum_unit_count
      -- fund-wise cumulative value = fund-wise cumulative units * NAV on the transaction date
      ,ROUND(SUM(trdt.unit_ct) OVER (PARTITION BY trdt.fd_desc_id
                                     ORDER BY trdt.eff_dt, trdt.fd_desc_id, trdt.tdtl_key)
             * (SELECT ROUND(uvda.unit_value, 4)
                  FROM uval_detl_act  uvda
                      ,unit_values_ii uvls
                      ,uval_types     uvtp
                   -- ,bat_grp        btgp
                 WHERE uvda.uvls_key    = uvls.uvls_key
                   AND uvda.rec_stat_cd = '0'
                   AND uvda.uvtp_key    = uvtp.uvtp_key
                   AND uvtp.typ_cd      = '01'
                   AND uvls.case_key    IS NULL
                   AND uvls.fd_desc_id  = trdt.fd_desc_id
                   -- AND btgp.dflt_cyc_dt_cd = '1'
                   AND uvls.calendar_dt = trdt.eff_dt
                   AND ROWNUM = 1), 2) AS cum_fund_value
      -- running total of units across all funds * the current row's fund NAV
      ,ROUND(SUM(trdt.unit_ct) OVER (ORDER BY trdt.eff_dt, trdt.tdtl_key)
             * (SELECT ROUND(uvda.unit_value, 4)
                  FROM uval_detl_act  uvda
                      ,unit_values_ii uvls
                      ,uval_types     uvtp
                   -- ,bat_grp        btgp
                 WHERE uvda.uvls_key    = uvls.uvls_key
                   AND uvda.rec_stat_cd = '0'
                   AND uvda.uvtp_key    = uvtp.uvtp_key
                   AND uvtp.typ_cd      = '01'
                   AND uvls.case_key    IS NULL
                   AND uvls.fd_desc_id  = trdt.fd_desc_id
                   -- AND btgp.dflt_cyc_dt_cd = '1'
                   AND uvls.calendar_dt = trdt.eff_dt
                   AND ROWNUM = 1), 2) AS transact_amt
  FROM transact_details trdt
      ,tr_code_rules    trcd
      ,fund_desc        fndd
 WHERE trdt.case_key     = :cp_case_key
   AND trdt.case_mbr_key = :pi_case_mbr_key
   AND trdt.rvsl_cyc_dt  IS NULL
   AND trdt.tr_cd NOT IN ('1909')
   AND trdt.tr_cd      = trcd.tr_cd
   AND trdt.fd_desc_id = fndd.fd_desc_id
   AND fndd.fd_desc_id NOT IN ('8888','9999')
   AND trdt.eff_dt BETWEEN :cp_from_dt AND :cp_to_dt
 ORDER BY trdt.eff_dt, trcd.tr_cd_desc, shrt_fd_nm

I have used placeholders for each fund to store unit values at the report's global level, and whenever the cumulative total is required I sum up all the placeholder values and print the result. The revised count is overwritten in the respective fund's placeholder.
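
The total cumulative value can also be produced inside the query itself rather than through report-level placeholders. A minimal sketch, assuming simplified stand-in tables txn(txn_key, eff_dt, fund_id, unit_ct) and fund_nav(fund_id, calendar_dt, unit_value) (hypothetical names in place of transact_details and the unit-value tables), with unit_ct signed so that claims reduce the running totals and txn_key ordering rows within a day, like tdtl_key:

SELECT t.eff_dt,
       t.fund_id,
       SUM(t.unit_ct) OVER (PARTITION BY t.fund_id
                            ORDER BY t.eff_dt, t.txn_key) AS cum_units_fund,
       -- total cumulative value: every unit transacted up to this row,
       -- revalued at its own fund's NAV on this row's transaction date
       (SELECT SUM(t2.unit_ct * nv.unit_value)
          FROM txn t2
              ,fund_nav nv
         WHERE nv.fund_id     = t2.fund_id
           AND nv.calendar_dt = (SELECT MAX(nv2.calendar_dt)   -- latest NAV on or before this row's date
                                   FROM fund_nav nv2
                                  WHERE nv2.fund_id     = t2.fund_id
                                    AND nv2.calendar_dt <= t.eff_dt)
           AND (t2.eff_dt < t.eff_dt
                OR (t2.eff_dt = t.eff_dt AND t2.txn_key <= t.txn_key))
       ) AS total_cum_value
  FROM txn t
 ORDER BY t.eff_dt, t.txn_key

The same correlation can be folded into the posted query by comparing on trdt.eff_dt and trdt.tdtl_key; the per-fund NAV lookup is the scalar subquery already used for trn_unit_price.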

Similar Messages

  • Cumulative query and apd extraction

    Hello everyone!
I'm experiencing some trouble, hope you can help!
I have a BEx query with a cumulative result.
I want to extract the query result to a CSV using the APD,
but the extraction only provides the values in a NOT cumulative way.
Is there something I have to set inside the query, or something I can insert when building the APD?
    Thank you in advance for your answers and your time!
    B.r.
    Adrians

When you use APD, the system does not use the OLAP processor that is used when the query is run online, and some of the features are not available. There are several OSS notes that document the exact differences if you search on APD limitations or RSANWB or RSCRM.
I am not sure, but the cumulative display is likely to be one of those settings. You could get what you need by adding an ABAP process to the output of the query in the APD and then coding the desired logic there. I know it is not as clean as you would like, but unfortunately APD and BEx are not the same.

  • Oracle dictionary view 2 find the queries run and it's execution time

    Hi All,
Is there any Oracle dictionary view which captures the queries being run by users on the database and the time taken to execute those queries?
We need to find out the OS user, not the database user, since we have to identify the users who are executing long-running queries.
We basically require this to monitor long-running queries on the database.
    Thanks in Advance

    Hi,
    welcome to the forum!
    Oracle doesn't store information about individual executions of SQL queries (that would've been too expensive), but you can find cumulative query execution stats in V$SQL. If you are interested in queries by a specific OS user, then Active Session History can help you (provided you have the Diagnostic Pack License).
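    For reference, a minimal sketch of the kind of cumulative statistics V$SQL exposes (the figures are cumulative since the statement was loaded into the shared pool; V$SQL does not record the OS user, so V$SESSION.OSUSER or Active Session History is needed for that part):

    SELECT *
      FROM (SELECT sql_id,
                   executions,
                   ROUND(elapsed_time / 1e6, 1)                         AS total_elapsed_sec,
                   ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_sec,
                   parsing_schema_name,
                   module
              FROM v$sql
             ORDER BY elapsed_time DESC)
     WHERE ROWNUM <= 20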
    Best regards,
    Nikolay

  • I have request in the report level but the same is missing in the infocube

    Dear Experts,
I have a request at the report level, but the same is missing at the compressed InfoCube level. What could be the cause? Does compressing the InfoCube delete the request? If so, how am I able to view the other requests at the InfoCube manage level?
    Kindly provide with enough information.
    Thanks.............

    Hi
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
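    Conceptually, compression merges fact rows that are identical in every dimension except the package (request) dimension. A rough SQL analogy, with hypothetical table and column names rather than the actual BW-generated objects:

    -- Hypothetical illustration only: rows that agree on all dimensions except
    -- the request (package) dimension are merged into one row with request ID 0.
    INSERT INTO e_fact_table (dim_time, dim_product, dim_package, amount)
    SELECT dim_time, dim_product, 0 AS dim_package, SUM(amount)
    FROM   f_fact_table
    GROUP  BY dim_time, dim_product;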

  • Frequency at which compression should be done

    Hello Experts,
I have a process chain running in production which is scheduled twice daily. The service report gave us a recommendation to compress the data in the cubes which are part of this process chain.
I would like to know at what frequency/interval I should compress the requests. Should I go for monthly/weekly compression, or should I compress each time the load is run?
The daily data load pattern is:
1st data load: around 50K records are loaded
2nd data load: around 3 lakh (300K) records are loaded
There is a 4-minute difference between the two loads.
Could someone kindly provide some basic guidelines on how I can proceed with this?
I have checked the SDN posts but have not been able to get an answer on the frequency at which compression should be done.
    Regards
    Dipali

    Dear Dipali,
Please have a look at the points below.
You can schedule compression as part of a process chain.
Compressing one request takes approximately 2.5 ms per data record.
With non-cumulative InfoCubes, compression has an additional effect on query performance: the marker for non-cumulatives is also updated. This means that less data has to be read for a non-cumulative query, which reduces the response time. If you perform compression for a non-cumulative InfoCube, the compression time (including the time to update the markers) is about 5 ms per data record.
So, take account of this when compressing the data. Go for a weekly basis, at times when no loads are running; you can also schedule it as a background job.
Also,
for performance reasons, and to save space, compress a request as soon as you have established that it is correct and is not to be removed from the InfoCube.
Hope this helps.
    Best Regards,
    VVenkat..

  • Problem on cube while reporting

    hello SDNs,
I want to know: when we report on a cube, where does the data come from, the E fact table or the F fact table?
And if I compress a cube, what happens?
Where does the data come from then, the E or the F fact table?
If two requests were compressed and the third request is now in the F table and I want to report on this, which requests will be included in the reporting?
    thanks in advance
    sathish

    Hi,
    Compressing InfoCubes
Before compression, reports read the data from the F table.
After compression, they read the data from the E table, as the data is moved from the F table to the E table.
After compression, when a query fetches both compressed and uncompressed data (data from both E and F), the query hits both tables.
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise; each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress requests from the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0), i.e. all the data is stored at the record level and no request is then available. This also removes the SIDs, so there is one less join while fetching data.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation for the compression of InfoCubes with ORACLE as db-platform.
    Compression on other db-platform might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table? even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
First of all, you should check whether the P-index on the E fact table exists. If this index is missing, compression will be practically impossible. If it does not exist, you can recreate it by activating the cube again. Please check the activation log to see whether the creation was successful.
There is one exception to this rule: if only one request is chosen for compression and this is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
1. The compression ratio is completely determined by the data you are loading. Compression only means that data tuples which have an identical 'logical' key in the fact table (the logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
So, for example, if you are loading data on a daily basis but your cube only contains the month as the finest time characteristic, you might get a compression ratio of 1/30.
The other extreme: if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E fact table, because partitioning is only used for compressed data. Please see CSS note 385163 for more details about partitioning.
If you are absolutely sure that there are no duplicates in the records, you can consider the optimization described in CSS note 0375132.
2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But it depends on the phase the compression was in when it was canceled whether the requests (or at least some of them) are compressed or whether the changes are rolled back.
The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request, that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is, that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package-dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of the request x all smaller requests are committed and so only the request x is handled as described above.
3. The only size limitation for the compression is that the complete rollback information of the compression of a single request must fit into the rollback segments. For every record in the request to be compressed, either an update of the corresponding record in the E fact table is executed or the record is newly inserted. As a 'DROP PARTITION' is normally used for the deletion, the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space), this should not be critical.
Performance is heavily dependent on the hardware. As a rule of thumb, you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures, and about 1 million rows per hour if it does.
4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression is running should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
If you encounter the error ORA-4030 during the compression, you should drop the secondary indexes on the E fact table. This can be achieved using transaction SE14. If you use the tab strip in the Administrator Workbench, the secondary indexes on the F fact table will be dropped too. (If there are requests which are smaller than 10 percent of the F fact table, then the indexes on the F fact table should remain active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that, you should start the compression again.
Deleting the secondary indexes on the E fact table of an InfoCube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer while the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason), the compression terminates.
If you do not normally drop the secondary indexes during compression, then these indexes might degenerate after some compression runs, and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3423284
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3214444
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Thanks,
    JituK

  • Compression without partition.

    Hi,
Would it be useful to compress an InfoCube even if there is no fiscal-year partition on the cube?
    Thanks.

    Hi,
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
If you compress the cube, all the duplicate records are summarized.
Otherwise they are summarized at query runtime, affecting query performance.
Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise; each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress requests from the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0), i.e. all the data is stored at the record level and no request is then available. This also removes the SIDs, so there is one less join while fetching data.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation for the compression of InfoCubes with ORACLE as db-platform.
    Compression on other db-platform might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table? even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
First of all, you should check whether the P-index on the E fact table exists. If this index is missing, compression will be practically impossible. If it does not exist, you can recreate it by activating the cube again. Please check the activation log to see whether the creation was successful.
There is one exception to this rule: if only one request is chosen for compression and this is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
1. The compression ratio is completely determined by the data you are loading. Compression only means that data tuples which have an identical 'logical' key in the fact table (the logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
So, for example, if you are loading data on a daily basis but your cube only contains the month as the finest time characteristic, you might get a compression ratio of 1/30.
The other extreme: if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E fact table, because partitioning is only used for compressed data. Please see CSS note 385163 for more details about partitioning.
If you are absolutely sure that there are no duplicates in the records, you can consider the optimization described in CSS note 0375132.
2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But it depends on the phase the compression was in when it was canceled whether the requests (or at least some of them) are compressed or whether the changes are rolled back.
The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request, that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is, that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package-dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of the request x all smaller requests are committed and so only the request x is handled as described above.
3. The only size limitation for the compression is that the complete rollback information of the compression of a single request must fit into the rollback segments. For every record in the request to be compressed, either an update of the corresponding record in the E fact table is executed or the record is newly inserted. As a 'DROP PARTITION' is normally used for the deletion, the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space), this should not be critical.
Performance is heavily dependent on the hardware. As a rule of thumb, you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures, and about 1 million rows per hour if it does.
4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression is running should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
If you encounter the error ORA-4030 during the compression, you should drop the secondary indexes on the E fact table. This can be achieved using transaction SE14. If you use the tab strip in the Administrator Workbench, the secondary indexes on the F fact table will be dropped too. (If there are requests which are smaller than 10 percent of the F fact table, then the indexes on the F fact table should remain active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that, you should start the compression again.
Deleting the secondary indexes on the E fact table of an InfoCube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer while the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason), the compression terminates.
If you do not normally drop the secondary indexes during compression, then these indexes might degenerate after some compression runs, and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Hope this helps.
    Thanks,
    JituK

  • Drawbacks of Infocube compression

    Hi Experts,
Are there any drawbacks of InfoCube compression?
    Thanks
    DV

    Hi DV
During the upload of data, a full request will always be inserted into the F fact table. Each request gets its own request ID and partition (DB dependent), which is contained in the 'package' dimension. This feature enables you, for example, to delete a request from the F fact table after the upload. However, this may result in several entries in the fact table with the same values for all characteristics except the request ID. This increases the size of the fact table and the number of partitions (DB dependent) unnecessarily, and consequently decreases the performance of your queries. During compression, these records are summarized into one entry with the request ID '0'. (From "Best Practice: Periodic Jobs and Tasks in SAP BW".)
Once the data has been compressed, some functions are no longer available for this data (for example, it is not possible to delete the data for a specific request ID).
You should compress your InfoCubes regularly, especially transactional InfoCubes in a BPS environment.
During compression, queries are affected if they hit the cube's aggregates, since the aggregates are rebuilt every time a compression finishes.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced.
    "If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing."
    Hope this may help you
    GTR

  • Error in query due to exc aggregation with non-cumulative KF

    Hi All,
When I try to execute a query I get the following errors:
SAPLRRI2 and form RTIME_FUELLEN_06-01-
Program error in class SAPMSSY1, method UNCAUGHT_EXCEPTION.
I have a calculated key figure based on a non-cumulative key figure. If I delete the calculated key figure, the query is OK.
I am working on BI 7. Did anybody get this error? Any ideas?
I have implemented notes 976550 and 991563.
    Thank you

    Hi,
Anyone with any suggestions?
    Mohammed

  • Difference in Cumulative balance amount of ERP and BW query report

    Hi all,
    We have a report to display the Cumulative balance amount based on G/L account.
The issue we are facing here is: when we check May's cumulative balance on the ERP side (using transaction code FAGLB03, giving the G/L account number, company code and year 2010), we get 1,000 EUR, and May's balance is 300 EUR.
When we run our query report on the BW side for the same conditions (same G/L account, key due date 31.05.2010 and same company code), we get the cumulative balance as only 900 EUR; we are missing some 100 EUR in our report's cumulative balance. But when I check the cube on which our query report runs, I get May's balance as 300 EUR.
    Kindly help me to rectify this issue.
    Thanks.
    Regards,
    Jayaprakash J

    Hi,
Check whether there are any restrictions in the query. The data must be displayed if it is there in the InfoCube.
    Regards
    sivaraju

  • Query for Future period on a non cumulative cube

    Hi Colleagues,
I have a non-cumulative cube. It has data till June 2006. When I query the cube to report the balance with the period as "July 2006", it does not return anything. What I expected is that it would show me the balance value as of June 2006, which is the last period for which there is data.
    Am I missing anything ?
    Regards,
    priyadarshi

If you filter by Jul 2006 and there is no data in the cube, nothing matches the selection. You need to select some data so the balance calculation can be applied; try selecting a range of months, with an offset.
Maybe if you put the month in the free characteristics instead of in the filter section it would work.

  • Cumulative Quantities in a BEx Query

    Hi Friends,
    I have to get the Cumulative Values for a Key Fig Actual Quantity in a BEx Query.
    The scenario is as follows:
    I have the following data in the cube:
    YearWeek       ActQuantity
      200601                     35
      200602                     40
      200651                      30
      200652                      35
      200701                      20
      200702                      35
      200703                      45 
      200747                      30
      200748                      45
      200749                      40
      200750                      25
      200751                      40
      200752                      35
      200801                      40
      200802                      35
      etc.,
    The Report Req is as follows:
       The user will enter any range (Eg: 200748 - 200802)
    and he wants to see the report as follows:
YearWeek   ActQnty   CumQnty (cumulated from the 1st week of the year)
    200748                    45             2500  (2455 + 45)
    200749                    40             2540  (2500 + 40)
    200750                    25             2565
    200751                    40             2605
    200752                    35             2640
    200801                    40                40 (since it is 1st week)
    200802                    35               75
The important thing is that the report should show the quantities only for the period (weeks) the user entered, but the cumulative values must run from the first week of that calendar year.
    Thanks for any help/suggestion.
    Regards,
    Ranjith

    Hi Rakesh,
Actually, the user enters two values (e.g. 200740 - 200748) and wants to see the actual quantities and cumulative quantities only for the weeks entered, i.e. between 200740 and 200748.
But when I execute the query with the user exit variable I defined, I get actual quantities and cumulative quantities from 200701 to 200748. The values are correct, but I want to restrict the report only to the period between 200740 and 200748.
The user entry range may also cross two different years (e.g. 200748 - 200812).
    Thanks for your help.
    Ranjith

  • SQL Query to find cumulative values for a Financial Year

    Dear users,
My requirement is to create a SQL query for a table/view where I have day-wise data. I want to find the cumulative values for a financial year by giving any date. It should add the values from the start of the financial year up to that date in the financial year.
I think creating a view of this type will put a heavy burden on resources, since the accumulation will be done for each day up to that day.
    Thanks

Quoting the original post: "My requirement is to create a SQL query for a table/view where I have day-wise data. I want to find the cumulative values for a financial year by giving any date. It should add the values from the start of the financial year up to that date in the financial year. I think creating a view of this type will put a heavy burden on resources, since the accumulation will be done for each day up to that day."
Kumar's solution will serve your purpose, but I do not agree that "creating a view of this type will put a heavy burden on resources, since the accumulation will be done for each day up to that day."
Khurram
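
    For reference, the usual pattern for this is an analytic running sum partitioned by financial year, computed once per row rather than re-aggregated day by day. A minimal sketch, assuming a table daily_data(txn_dt, amount) and an April-to-March financial year (both assumptions to adapt):

    SELECT txn_dt,
           amount,
           -- the running total restarts at the beginning of each financial year;
           -- TRUNC(ADD_MONTHS(txn_dt, -3), 'YYYY') is constant within an Apr-Mar year
           SUM(amount) OVER (PARTITION BY TRUNC(ADD_MONTHS(txn_dt, -3), 'YYYY')
                             ORDER BY txn_dt) AS fy_cumulative
    FROM   daily_data
    WHERE  txn_dt <= :as_of_date
    ORDER  BY txn_dt;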

  • Cumulative balance (FC) show ****, can perform query?

    Dear Experts,
    Hi,
In the Chart of Accounts -> Account Balance, the cumulative balance (FC) column shows **** when there are multiple foreign currency transactions posted to the G/L account.
I understand that this is standard system behaviour, but is there a way or any workaround to find out the balance for each foreign currency in the account?
Is it possible to write a query that adds up the amounts per foreign currency?
    Thanks.
    Regards,
    Danny

    Dear Danny,
This is not only standard system behavior but also the only way it can be displayed, simply because the numbers stand for different currencies whose values cannot be added together.
    You may create a query based on each individual FC to get separate meaningful totals.
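    A minimal sketch of such a query, assuming the journal entry lines table JDT1 with columns Account, FCCurrency, FCDebit and FCCredit (verify the exact table and column names in your SAP Business One version):

    -- One total per foreign currency for a single G/L account (names assumed).
    SELECT T0.Account,
           T0.FCCurrency,
           SUM(T0.FCDebit - T0.FCCredit) AS FC_Balance
    FROM   JDT1 T0
    WHERE  T0.Account = '100000'   -- replace with the G/L account code
    GROUP  BY T0.Account, T0.FCCurrency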
    Thanks,
    Gordon

  • Subsequent use of cumulated value in query

    Hi All,
We are using BEx Query Designer to create a query having one formula variable "Stock on Hand" as a cumulated value (along the columns). Now we need to use this cumulated value in another formula variable. Is this possible to do? If not, could you please suggest an alternative?
    Stock on hand (for any given week) = (Total Receipts – Total Demand) + Stock on Hand (for previous week).
    Inv. Carrying Cost (for any given week) = Stock on hand (cumulated values as above) * Per unit inv. cost
    Data for Total Receipts, Total Demand and Per unit Inv. Cost is present in the infocube at daily level. A sample report template has been given below:
    Product    Location       Week                      W1   W2    W3
       P1                L1        Total Demand           10     10     10
                                      Total Receipts           20      0       5
                                      Stock on Hand          10      0      -10
                                      Per unit Inv. Cost        2       2       2
                                      Inv. Carrying cost       20      0      -20
    Points assigned in advance for your replies
    Regards,
    Bansi.

    Hi PV,
Thanks for the prompt reply. But we have already created two different formula variables for the above requirement.
The formula variable for Stock on Hand is displaying the desired cumulative values. But the formula for Inv. Carrying Cost is not able to use the displayed value (cumulative value) of Stock on Hand; it just uses the value corresponding to that particular week. For example,
    for w2 if stock on hand =20 (cumulative value displayed on screen)
    and the original value of stock on hand for w2 = 0
    then the formula for Inv. carrying cost is using the original value(i.e. 0) but not the display value(i.e.20), whereas we want the cumulative value (i.e. 20) for our calculation.
    Any help in this regard would be highly appreciated.
    Regards,
    Bansi.
