Non-compressed aggregate data lost after Delete Overlapping Requests?

Hi,
I am going to setup the following scenario:
The cube receives a delta load from InfoSource 1 and a full load from InfoSource 2. Aggregates are created and initially filled for the cube.
Now, the flow in the process chain should be:
Delete indexes
Load delta
Load full
Create indexes
Delete overlapping requests
Roll-up
Compress
In the Manage screen of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so that the aggregates should be compressed only when the cube data is compressed. (I don't know whether this influences how the roll-up is done via the Adjust process type in a process chain: will the deselected checkbox really prevent compression of the aggregates after roll-up, or does the checkbox only influence a manually started roll-up?)
Nevertheless, let's assume here that the aggregates will not be compressed until compression runs on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not compressed.
Therefore, I expect that after the compression it should look like this:
RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
110 |                    |                    | X      | F
109 |                    |                    | X      | D
108 |                    |                    | X      | D
107 |                    |                    | X      | D
106 |                    |                    | X      | D
105 |                    |                    | X      | D
104 |                    |                    | X      | D
103 |                    |                    | X      | D
102 |                    |                    | X      | D
101 |                    |                    | X      | D
100 | X                  | X                  | X      | D
099 | X                  | X                  | X      | D
098 | X                  | X                  | X      | D
If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by Req-ID (yes, I know that 10 is too many...).
My question is:
What will happen in the Delete Overlapping Requests step during the next process chain run, once a new full load with RNR 111 has already been loaded?
Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that, when running on non-compressed requests, Delete Overlapping Requests should work the same way as the deletion of similar requests (based on the InfoPackage setup) does. Since the newest 10 requests are not compressed and the only request overlapping the new full load RNR 111 is RNR 110, I assume it should simply delete the RNR 110 data from the aggregates by Req-ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates. Am I right? (See the expected request list sketched below.) Please CONFIRM or DENY. Thanks! If Delete Overlapping Requests would still lead to rebuilding of the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
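To make my expectation concrete (this is only what I assume, not confirmed behaviour, and assuming the new full load RNR 111 is the only new request): RNR 110 would be deleted from the cube and the aggregates by Req-ID, RNR 111 would be rolled up, and after the next compression run the request list would look like this:
RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
111 |                    |                    | X      | F
109 |                    |                    | X      | D
108 |                    |                    | X      | D
107 |                    |                    | X      | D
106 |                    |                    | X      | D
105 |                    |                    | X      | D
104 |                    |                    | X      | D
103 |                    |                    | X      | D
102 |                    |                    | X      | D
101 |                    |                    | X      | D
100 | X                  | X                  | X      | D
099 | X                  | X                  | X      | D
098 | X                  | X                  | X      | D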
I hope that my question is clear.
Any answer is highly appreciated.
Thanks
Michal

Hi,
If I get your question correctly:
The "Compress After Roll-up" option is for the aggregates of the cube, not for the cube itself.
When it is selected, the aggregates are compressed if and only if roll-up has been done on them; this doesn't affect the compression of the cube itself, i.e. moving the data from the F to the E fact table.
When it is deselected, the compression of the cube is likewise unaffected, but the system won't check the roll-up status of the aggregates before compressing them.
"Will the deselected checkbox really avoid compression of aggregates after roll-up OR does the checkbox influence the manual start of roll-up only?"
This checkbox has no influence even on a manually started roll-up, i.e. compression of the aggregates won't start automatically after your roll-up; it is done together with the compression of the cube itself.
As for the second question: I guess the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up.
The same happens with manual deletion: if you need to delete a request which has been rolled up and whose data is compressed in the aggregates, you have to deactivate the aggregates and refill them.
In detail: as long as a request is compressed neither in the cube nor in the aggregates, it is a normal request, and we can delete it without deactivating the aggregates.
So in your case I guess there is no need to remove the step from the chain.
Correct me if you find any issue.
Regards,

Similar Messages

  • Cube Compression - How it Affects Loading With Delete Overlapping Request

    Hi guys,
    Good day to all !!!
    Our scenario is that we have a process chain that loads data to an InfoCube and includes a delete overlapping requests step. I just want to ask how cube compression affects loading with delete overlapping requests. Will any conflict or error be raised? Kindly advise.
    Marshanlou

    Hi,
    In the scenario you have mentioned:
    First, the InfoCube is loaded.
    Next comes the delete overlapping requests step. In this step, the system checks whether the request is overlapping (same date, or according to the overlapping condition defined in the InfoPackage).
    Only if the request is overlapping is it deleted; otherwise, no action is taken. In this way it ensures that data is not loaded twice, which would result in duplicates.
    It has nothing to do with compression and in no way affects compression or loading.
    Sasi

  • Proc Chain - Delete Overlapping Requests fails with aggregates

    BW Forum,
    Our weekly/daily load process chain loads several full (not delta) transaction infopackages. Those infopackages are intended to replace prior full loads and are then rolled up into aggregates on the cubes.
    The problem is that the process chain fails to delete the overlapping requests. I manually have to remove the aggregates, remove the InfoPackages, then rebuild the aggregates. It seems that the delete overlapping request step fails due to the aggregates or a missing index on the aggregates, but I'm not certain. The lengthy job log contains many references to the aggregate before it fails with the messages below.
    11/06/2004 13:47:53 SQL-END: 11/06/2004 13:47:53 00:00:00                                                 DBMAN        99
    11/06/2004 13:47:53     SQL-ERROR: 1,418 ORA-01418: specified index does not exist                        DBMAN        99
    11/06/2004 13:47:59 ABAP/4 processor: RAISE_EXCEPTION                                                       00        671
    11/06/2004 13:47:59 Job cancelled                                                                           00        518
    The raise_exception is a short dump with Exception condition "OBJECT_NOT_FOUND" raised.
    The termination occurred in the ABAP program "SAPLRRBA " in
    "RRBA_NUMBER_GET_BW".                                    
    The main program was "RSPROCESS ".                        
    I've looked for OSS notes. I've tried to find a process to delete aggregates prior to loading/deletion of overlapping requests. In the end, I've had to manually intervene each time we execute the process chain, so I've got to resolve the issue.
    Do others have this problem? Are the aggregates supposed to be deleted prior to loading full packages which will require deletion of overlapping requests? I presume not since there doesn't seem to be a process for this. Am I missing something?
    We're using BW 3.3 SP 15 on Oracle 9.2.0.3.
    Thanks for your time and consideration!
    Doug Maltby

    Are the aggregates compressed after the rollup? If you compress the aggregate completely, the Request you are trying to delete is no longer identifiable once it is in the compressed E fact table (since compression throws away the Request ID).
    So you need to change the aggregate handling so that the most recent Requests remain in the uncompressed F fact table. Then the Request deletion should work.
    I thought that if the aggregate was fully compressed and you then wanted to delete a Request, the system was supposed to recognize that the Request was unavailable due to compression and automatically refill the aggregate - but I'm not sure where I read that. Maybe it was a Note; maybe it doesn't happen in a Process Chain. I'm just not sure.
    The better solution, when you regularly back out a Request, is simply not to fully compress the aggregate, letting it follow the compression of the base cube, which I'm assuming you have set to compress Requests older than XX days.

  • Delete Overlapping Requests from InfoCube: Before or After the Generate Index

    Hi,
    Should Delete Overlapping Requests from InfoCube come before or after the Generate Index step of the InfoCube? And why?
    I think "after", but the system (transaction RSPC) suggests: 1. Generate Index, 2. Delete Overlapping Requests from InfoCube ...
    Thanks
    Alessandro

    Hi Alessandro,
       Bottom line: indexes speed up the process. While loading data you need to delete the index.
    Indexes degrade performance while updating or modifying DB entries (loading) and improve performance while reading the DB (reporting).
    This is not specific to BW; all RDBMSs behave this way.
    Regards,
    Nagesh.

  • Deletion of data target contents Vs delete overlapping requests

    hi,
         When do we go for delete overlapping requests? It is applicable to full loads as well as delta loads, so let me start with the full-load case: we have the other option called delete data target contents, with which we can delete the daily full load without going for delete overlapping requests.
    Please let me know the exact difference between DELETE OVERLAPPING REQUESTS FROM INFOCUBE and DELETE DATA TARGET CONTENTS.

    hi
    When you have a delta upload twice daily, the date in the previous request and in the second request is the same, so you might use this option to delete the overlap such that data is not loaded twice.
    Don't forget to assign points.
    Regards
    N Ganesh

  • Data Transfer Process and Delete Overlapping Requests

    Hi All,
    We are on BW 7.0 (NetWeaver 2004s) and are using the new data transfer process and transformation. We want to use the ability to delete overlapping requests from a cube in a process chain. So let's say we have a full load from an R/3 system with fiscal year 2007 in the selection, using an InfoPackage. It gets loaded to the PSA. From there we execute the data transfer process and load it to the cube. We then execute the delete overlapping requests functionality. My question is: will the DTP know that the InfoPackage selection was 2007, so that it only deletes requests with selections of 2007 and not 2006 from the cube? Basically, is the DTP aware of the selections that were made in the InfoPackage?
    Thanks,
    Scott

    Hi Everyone,
    Figured it out: on a data transfer process you can filter the selection criteria - go to the extraction tab of the DTP and click on the filter icon. Enter your selection conditions for pulling from the PSA; these selection conditions will be used to delete the overlapping requests from the cube.
    Thanks

  • Oracle: Expanded non LONG bind data supplied after actual LONG or LOB column

    I am getting this error message when I try to insert a CLOB into an Oracle table.
    ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column. This error message is kind of misleading. According to it, I should reorder the list of columns so that the LONG RAW column comes at the end. So I reordered the list to make the LONG RAW column come last, but I was still getting this error. Then I found out that it is the data to be inserted into the CLOB that is causing the error.
    Here is my code for inserting the CLOB:
                    byte[] bytes1 = .....
                    statement.setAsciiStream(index, new ByteArrayInputStream(bytes1), bytes1.length);
    I don't know what is wrong with this code. I have been using it for a while and now it is throwing an exception.
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:213)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:952)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1160)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
         at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3390)
    I am using JDK5 and Oracle 10g driver.
    Please help me.

    I have these columns,
    ROW_DESC - Char
    Table_id - Char
    Blob_desc - Char
    Blob1 - Blob
    LOB_DATE - Date
    CLOB1_DESC - Char
    CLOB1 - Clob
    CHAR_25_Col - Char
    VBIN_400_Col - Long Raw.
    But what I noticed is that the column causing the problem is not actually the LONG RAW column; "CLOB1" is the one causing the problem. The database is configured as Unicode (AL32UTF8). When I tested against another database with a non-Unicode character set, it worked fine with the same table description. So somehow the driver is unable to bind large Unicode CLOB data.
    I ran into this problem while I was inserting data from the source table to the target table.
    Here is how I read from the result set:
                    InputStream inputStream = resultSet.getAsciiStream(index + 1);
                    if (inputStream == null) {
                        return null;
                    }
                    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                    byte[] buffer = new byte[1024];
                    int length;
                    do {
                        length = inputStream.read(buffer);
                        if (length > 0) {
                            outputStream.write(buffer, 0, length);
                        }
                    } while (length > 0);
                    byte[] resultBytes = outputStream.toByteArray();
    Here is how I bind parameters:
                    statement.setAsciiStream(index, new ByteArrayInputStream(resultBytes), resultBytes.length);
    If I use ((OraclePreparedStatement) statement).setStringForClob, then it works, but it impacts performance, because I need to convert the CLOB to a string.
    Is there any way to do it without converting to a string object?
    Thanks.
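    A possible alternative (just a sketch, not something confirmed in this thread): since the failure shows up only on the AL32UTF8 database, the ASCII-stream bind is a plausible culprit - the byte length passed to setAsciiStream no longer matches the expanded multi-byte character data. Binding the CLOB as character data with the standard JDBC calls getCharacterStream/setCharacterStream lets the driver do the character-set conversion itself and avoids the Oracle-specific setStringForClob cast (it still buffers the data once, but as characters; requires java.io.Reader, java.io.StringWriter and java.io.StringReader):
                    // Read the CLOB as characters instead of ASCII bytes, so the
                    // driver performs the Unicode conversion itself.
                    Reader reader = resultSet.getCharacterStream(index + 1);
                    if (reader == null) {
                        return null;
                    }
                    StringWriter writer = new StringWriter();
                    char[] buffer = new char[1024];
                    int length;
                    while ((length = reader.read(buffer)) > 0) {
                        writer.write(buffer, 0, length);
                    }
                    String text = writer.toString();
                    // Bind as a character stream; the length here is in characters,
                    // not bytes, which matters for multi-byte UTF-8 data.
                    statement.setCharacterStream(index, new StringReader(text), text.length());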

  • Data lost after several days running!

    data lost after several days running
    ENV:
    Service Pack 1 for Crystal Reports for Eclipse 2.0
    Tomcat5.5
    Problem:
    There was no problem exporting the report as a PDF file in the first few days after I upgraded CR4E.
    But after several days of running, the problem appeared: the exported PDF file was incomplete!
    It was supposed to have 3 pages, but only 1 page (the first page) was actually exported.
    When I restarted Tomcat, the problem disappeared. After several more days of running,
    the problem appeared again...
    Code:
    reportClientDoc = new ReportClientDocument();
    reportClientDoc.open(Messages.getString("tmpltPath") + tmplt_name, OpenReportOptions._discardSavedData);
    RptHelp rptHelp = new RptHelp();
    rptHelp.setDatabaseCtrl(reportClientDoc);
    rptHelp.addDiscreteParameterValue(reportClientDoc, "", "userName", "admin");
    rptHelp.addDiscreteParameterValue(reportClientDoc, "", "start_date", start_date);
    rptHelp.addDiscreteParameterValue(reportClientDoc, "", "end_date", end_date);
    for (int i = 0; i < 6; i++) {
        if (!p_name[i].equals("")) {
            rptHelp.addDiscreteParameterValue(reportClientDoc, "", p_name[i], p_value[i]);
        }
    }
    String exportPath = Messages.getString("reportExportPath");
    String exportName = "test";
    rptHelp.export(reportClientDoc, exportPath + exportName, file_extention);
    File file = new File(exportPath + exportName + "." + file_extention);
    FileInputStream in = new FileInputStream(file);
    int len = (int) file.length();
    byte[] data = new byte[len];
    int read = 0;
    while (read < len) {
        read += in.read(data, read, len - read);
    }
    in.close();
    response.setContentType("application/x-msdownload");
    response.setHeader("Content-Disposition", "attachment; filename=" + URLEncoder.encode(exportName + "." + file_extention, "ISO-8859-1"));
    OutputStream ops = response.getOutputStream();
    ops.write(data);
    ops.flush();
    ops.close();
    out.clear();
    out = pageContext.pushBody();
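    One observation (an assumption on my part, not something confirmed in this thread): the ReportClientDocument is never closed after the export, so each request may leave report-processing resources open until Tomcat is restarted - which would match the "works for a few days, then degrades" symptom. A minimal sketch of the same flow guarded with an explicit close, assuming CR4E's ReportClientDocument.close():
    reportClientDoc = new ReportClientDocument();
    try {
        reportClientDoc.open(Messages.getString("tmpltPath") + tmplt_name, OpenReportOptions._discardSavedData);
        // ... set parameters, export, and stream the file as above ...
    } finally {
        reportClientDoc.close(); // release report-processing resources after each request
    }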


  • Delete overlapping request in PC - Request from previous month got deleted

    Hi Everyone,
    We are using a 'Delete overlapping request' step in a process chain. Under the Deletion Selections, we have checked the following options:
    1) Only Delete Requests from Same DTP
       |_ Selections are - Overlapping
    2) Request Date - Is in the Current Month
    3) Exceptions:
       |_ New Request will be loaded on - 1st Day of a Month
    The DataSource in this case is used for a full update into a cube every day. We've used the above selections so that the request loaded on the last day of a month is not deleted by the subsequent load; we need this to retain a snapshot of the data for each month.
    This month, when the process chain executed on 1st Feb, the request loaded on 31st Jan was not deleted. However, on 4th Feb, the process chain deleted the request loaded on 3rd Feb and also the one from 31st Jan. There seems to be no reason at all for the 31st Jan request to get deleted; we made sure of that by choosing the option 'Request Date - Is in the Current Month' in the Deletion Selections.
    Is there any explanation why the request was deleted?
    Thanks,
    Ram

    Hi,
        If you don't have any selections, it will delete the previous request; the overlap logic works based on the selections only. For your scenario, it should not delete the previous month's request once the date has moved into the current fiscal period. Please check the selection options once more: are they displayed month-wise, or are the selections empty?
    Regards
    Pcrao.

  • Delete Overlapping Request

    Hi All,
    I am trying to use the process type Delete Overlapping Requests from InfoCube in a process chain after loading the data via a full-load InfoPackage and a delta DTP. With the InfoPackage selection in the process type, the message after execution is "no overlapping request found".
    The deletion selection I used is: Full or Init loads, Same DataSource, and Selections are Same or More Comprehensive.
    And in the InfoPackage I don't have any selections.
    Please help me solve this error.
    Regards
    PV

    OK, I'll explain our scenario:
    Data was loaded from two DataSources to one cube through two DTPs in a process chain.
    Right after the two DTP steps, a delete overlapping request step is defined for each of the two branches of the chain. In its variant:
    Object type: the same DTP (by name) that loads from the PSA to the cube.
    Radio button selected: edit all InfoCubes with following delete selections.
    Selections: delete existing request, under which Only Delete Requests from the Same DTP,
    Same or More Comprehensive.
    The other variant is defined the same way, but with the other DTP name as the object type.
    The output of the cube will then contain only the latest (current day) requests.
    Regards,

  • Process Chain Help - Delete Overlapping requests

    Dear Experts,
    I have a requirement where I want to delete the previous day's request from the cube. This applies only to the data coming from one particular DSO.
    I can use the Delete Overlapping Requests process type, but I want to know how.
    And again, this has to happen only for the current month. I want to delete the overlapping requests of the current month, i.e. since I am in April now, I want to delete only the April requests. When I am in May, I don't want to delete April requests anymore.
    In other words, I want to keep deleting the previous day's request until April 30th. On May 1st, the April 30th request should not be deleted; in fact, on May 1st nothing should be deleted. On May 2nd, the May 1st request should be deleted.
    Can anyone help me with this.
    Thanks,
    KK

    Hi KK,
    If I have understood you correctly, you mean to say your cube is being loaded from various DataSources and you want to delete the requests only for one particular DataSource and not for the others.
    Please correct me if I am wrong.
    If I am right, then in the window "Delete Request from Infocube after update" you can choose "Delete Existing Requests -> Is current month", and at the bottom of that screen you will see a checkbox "Request Selection Through Routine". Check this, and you can simply write a routine that deletes only the requests loaded from that particular DataSource.
    Hope it helps.
    Regards
    Hemant Khemani

  • Duplicate records: Process : Delete Overlapping Requests from InfoCube

    Hi Experts,
    We are loading data into a standard costing cube with the standard full upload option. In our process chain we have included the process type "Delete Overlapping Requests from InfoCube". In our scenario we always load yesterday's and today's data: after loading yesterday's data, we need to check and delete the overlapping requests and then upload today's data.
    This deletion process often fails with the message "Could not lock cube" because the cube is already locked by user ALEREMOTE. This causes the system to duplicate records in the cube.
    How can we avoid this?
    Alok

    I tried running it again and it failed again. I checked SM12 and found this entry:
    800     ALEREMOTE     08/14/2007     E     RSENQ_PROT_ENQ     CREA_INDX      ZCCA_C11                      DATATARGET     CREA_INDX                     ######################################     0     1
    This lock has not been released since the 14th. Is there a way to remove the lock using some process?

  • Process Type - Delete overlapping requests from an Info Cube

    I have read many threads on this topic and still have some questions. To give you some background: I will be doing a delta load ("delta" process chain), but before the delta I want to set up the initialization ("init" process chain). I want to use this process type in my "init" process chain to delete all data in my cube that was loaded from a specific ODS. I cannot use the "delete all contents" process type because many ODSs feed the same cube.
    1. Do you put this process type after the Load Data step or before it? I have seen both cases in SDN forums.
    2. The variant of this process type is an InfoPackage. Does this InfoPackage have to be the same as in the Load Data step, or can it be different?
    3. I want to delete all data in the cube which was loaded from my ODS. The term "overlapping" is confusing to me: will this process type delete all of that data or not?
    Thanks in advance.

    Yes, you can do that. Go to RSPC; under the process types for Load Process and Post Processing, you will see the DELETE OVERLAP REQUESTS FROM INFOCUBE option.
    Select the proper checkboxes, like same source system, same DataSource etc.
    If it is a one-time deletion, why not do it manually?
    Also check: How to delete most recent request in a Cube by using process chain
    Hope it helps..

  • Problems with delete overlapping requests from InfoCube in PC

    Hi guys,
    I'm using delete overlapping requests from InfoCube in process chains, but I'm not able to adjust it to my specific requirement.
    For example:
    I execute a DTP to load InfoCube XPTO with fiscal years 2008 and 2009. After this I have to load again, but only for 2009.
    In this specific example I want my process chain to delete the 2009 data from my first load, because it is overlapped, and to leave the 2008 data.
    Is this possible? If yes, how?
    Thanks in advance
    João Arvanas

    It will not work that way.
    The step checks whether the selections are the same: only then will it delete the request; otherwise it will not.
    The overlapping setting you have chosen deletes based on the same selections.
    So in this case the selections are different, and hence it is not possible.
    Thanks
    Murali

  • Delete Overlapping requests not paying attention to filters

    Hi Experts,
    I have followed all the steps suggested by Chetan in the thread below to configure deletion of overlapping requests:
    Process Chain  Delete a previous request with overlapping values
    In my case I need to delete requests when the data is from the same country and the same day, so I added these fields to the filters in the DTP (I didn't put in any value or variable, just added the filters). However, the process chain is deleting all requests coming from that DTP and is not paying attention to my filter. What could it be?
    Points will be awarded,
    Regards,
    Raimundo Alvarez

    Hi Amruta,
      Good to hear that your overlapping request deletion process is working fine now.
    1) The difference between Same or More Comprehensive and Overlapping (your question: what is the difference between the two?):
    If Overlapping is selected:
    existing requests are also deleted from the InfoCube if the selection criteria of the new request partially or wholly overlap the selection criteria of the request to be deleted.
    If Same or More Comprehensive is selected:
    requests are only deleted from the InfoCube if the selection conditions of the new request are the same as, or more comprehensive than, the selection conditions of the request to be deleted.
    Also check the link below; it will be very useful:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0431c48-5ba4-2c10-eab6-fc91a5fc2719?quicklink=index&overridelayout=true
    2) The index creation process should be placed after the overlapping request deletion: once the overlapping requests have been removed, the create index step takes less time because it builds the index only for the remaining requests; otherwise time is wasted indexing requests that will be deleted anyway.
    If the create index step is placed before the overlapping request deletion, performance will decrease instead of increase.
    Alternatively, you have the option of using BIA indexes if reporting performance needs to be improved.
    Please change the status of the question to answered once all answers have been received...
    Thanks
    Pawan
