Delete similar request option in the infopackage

Hello BW Experts,
I have to load Period 01 several times a month. Each time it loads to the cube, the similar request already in the cube has to be deleted first. Where can I make this setting in the InfoPackage?
Please advise.
Thanks,
BWer

Hi BWer,
You can do this on the Data Targets tab of the InfoPackage, third column from the right (Automatic Deletion of Similar Requests): click it, select Delete Existing Requests, and choose the condition 'Same or more comprehensive'.
Hope this helps...
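If you want to verify afterwards which requests were loaded by the same InfoPackage and with which period selection, you can read the same request tables that are used further down in this thread (RSREQDONE holds the request number and the InfoPackage technical name, RSSELDONE the selection values). A minimal read-only sketch; the InfoPackage name 'ZPAK_PERIOD01' is only a placeholder:

REPORT z_list_similar_requests.

* Hedged sketch: list the requests loaded by a given InfoPackage
* together with the low value of their selection (e.g. the period).
DATA: lt_req TYPE STANDARD TABLE OF rsreqdone WITH HEADER LINE,
      ls_sel TYPE rsseldone.

* 'ZPAK_PERIOD01' is a placeholder InfoPackage technical name
SELECT * FROM rsreqdone INTO TABLE lt_req
  WHERE logdpid = 'ZPAK_PERIOD01'.

LOOP AT lt_req.
  SELECT SINGLE * FROM rsseldone INTO ls_sel
    WHERE rnr = lt_req-rnr.
  IF sy-subrc = 0.
*   Request number and selection low value (the loaded period)
    WRITE: / lt_req-rnr, ls_sel-low.
  ENDIF.
ENDLOOP.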

Similar Messages

  • What is the 'relevant for BW' flag option in the infopackage for loading hierarchies

    Hello Gurus,
    What is the 'relevant for BW' flag option in the InfoPackage for loading hierarchies? What happens when you turn it on?
    Thanks
    Simmi

    As the hierarchies are stored in combination with the OLTP source system, when you load from a flat file the system does not allow you to maintain this flag, as the hierarchy would be relevant to BI.
    Diagnosis
    The relevant hierarchies in BI are stored according to the combination DataSource/source system.
    If you want to change the selection of BI-relevant hierarchies, this change is effective for all InfoPackages that have been created for the same DataSource/source system combination.
    Procedure
    Check all other InfoPackages for DataSource 0COSTCENTER_0101_HIER and source system PRDCLT100 to see whether the change executed is also appropriate for these InfoPackages.
    Procedure for System Administration
    If you want to load a hierarchy in a different InfoPackage (which will be marked as non BI-relevant as a result of your changes), the selections made for this InfoPackage will be lost and the InfoPackage can no longer load.
    Errors will occur when you start the InfoPackages. The monitor produces an error message to say that a valid hierarchy selection has to be made.

  • How can we delete a transport request and task (the released one)?

    Hello
    We have a released transport task (not the request); the transport request itself is not released.
    How can we delete the transport request and the released task? We do not want to transport it to production.

    Hi, never delete directly in the tables.
    There are several tables that depend on each other...
    Here you can find a nice example of how to do it: Help on ABAP: Few Tips for Transport Request Manipulation.
    It uses program RDDIT076 and changes the status.

  • How can I delete all loads for a specific infopackage in a process chain?

    Good day all,
    I've got a BW 3.5 system, and I have a cube which is loaded by two InfoPackages, which I will call "Current" and "History"; each loads every month.  On an ongoing basis I only want one "Current" InfoPackage load and many "History" InfoPackage loads.  At the beginning of a month, before I load a new "Current", I would like to delete any loads in my cube that were loaded by "Current".  It's not an overlapping selection request, so the automated delete won't work - so I'm open to suggestions.
    Thanks,   Ken

    In the InfoPackage settings - Data Targets tab, use the Automatic loading of similar/identical requests option.
    Instead of deleting "overlapping" requests, you can do the request selection through a routine using the following tables:
    rsreqdone contains the request numbers and the InfoPackage technical name.
    rsseldone contains request numbers and their selection criteria.
    Use something like this:
    * l_t_request_to_delete, l_request and p_subrc are supplied by the
    * interface of the InfoPackage deletion routine itself.
    TABLES: rsreqdone, rsseldone.
    DATA:   itab_rsseldone TYPE rsseldone,
            itab_rsreqdone TYPE SORTED TABLE OF rsreqdone
                           WITH NON-UNIQUE KEY rnr WITH HEADER LINE,
            v_date         TYPE sy-datum.
    * Select all requests from RSREQDONE that were loaded by this InfoPackage
    SELECT rnr FROM rsreqdone
      INTO CORRESPONDING FIELDS OF TABLE itab_rsreqdone
      WHERE logdpid = 'Current InfoPackage Technical Name'.
    LOOP AT itab_rsreqdone.
    * Read the selection criteria of each request from RSSELDONE
      SELECT SINGLE * FROM rsseldone INTO itab_rsseldone
        WHERE rnr = itab_rsreqdone-rnr.
      IF sy-subrc EQ 0.
        v_date = sy-datum.
    * If the month in the selection criteria is older than the current
    * system month, mark the request for deletion from the data target.
        IF itab_rsseldone-low(6) < v_date(6).
    * All requests remaining in l_t_request_to_delete are deleted from the data target
          l_t_request_to_delete-rnr = itab_rsreqdone-rnr.
          APPEND l_t_request_to_delete.
        ENDIF.
      ENDIF.
    ENDLOOP.
    * Do not delete the request that has just been loaded
    DELETE l_t_request_to_delete WHERE rnr = l_request.
    CLEAR p_subrc.

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube is receiving a delta load from infosource 1 and a full load from infosource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so that compression should take place only when the cube data is compressed (but I don't know whether this influences how the roll-up is done via the Adjust process type in the process chain - will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of the roll-up only?).
    Nevertheless, let's assume here that the aggregates will not be compressed until compression runs on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by Req-ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during the next process chain run, in the Delete Overlapping Requests step, if a new full load with RNR 111 has already been loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way on non-compressed requests as deletion of similar requests does (based on the InfoPackage setup), shouldn't it? Since the newest 10 requests are not compressed and the only overlapping request is the full load with RNR 111, I assume it should simply delete the RNR 110 data from the aggregates by Req-ID and then roll up RNR 111, rather than rebuilding the aggregates - am I right? Please CONFIRM or DENY. Thanks! If Delete Overlapping Requests would still lead to rebuilding of the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope my question is clear.
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The Compress After Roll-up option applies to the aggregates of the cube, not to the cube itself. When it is selected, the aggregates are compressed only once roll-up has been done on them; this does not affect compression of the cube itself, i.e. moving the data from the F to the E fact table.
    If it is deselected, that also does not affect compression of the cube, but the system then does not check the roll-up status of the aggregates before compressing them.
    Will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?
    This checkbox has no influence even on the manual start of the roll-up - i.e. compression of the aggregates will not start automatically after your roll-up; it happens together with the compression of the cube itself.
    As for the second question: I believe the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up.
    The same happens with manual deletion, i.e. if you need to delete a request which has been rolled up and the aggregates are compressed, you have to deactivate the aggregates and refill them.
    In detail: as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request and can be deleted without deactivating the aggregates.
    So in your case I think there is no need to remove the step from the chain.
    Correct me if you find any issue.
    Regards,

  • Delete overlapping requests runs for 5+ hours ...This is too long

    We are on BW 3.5 and have a process chain that loads 3 years of data from a cube on the APO server, plus AOP data from the BW server, into a cube on the BW server.  Each load is in its own InfoPackage.  These loads happen every week.  We want to delete the loads that occurred the week before using delete overlapping requests.  It is set up as follows:
    1. Drop index of cube.
    2. Load current year data into the cube via InfoPackage #1.  Loads approximately 32 million records.  Selections on calweek: 200701 – 200752.  InfoSource: InfoSource for APO-DP Cube from APO (ZAPODP_I11). DataSource: APO-DP Backup Cube (8ZAPODPC73).  Source system: APP Client 200 (APPCLNT200).  The Data Targets tab is set up with 'Automatic loading of similar/identical requests from the cube', with Delete Existing Requests conditions: InfoSources are the same; selections are overlapping.
    3. Load current year + 1 data into the cube via InfoPackage #2.  Loads approximately 32 million records.  Selections on calweek: 200801 – 200852.  Rest is the same as step 2.
    4. Load current year + 2 data into the cube via InfoPackage #3.  Loads approximately 32 million records.  Selections on calweek: 200901 – 200952.  Rest is the same as step 2.
    5. Load AOP data into the cube via InfoPackage #4.  Loads approximately 135,000 records.  InfoSource: AOP Plan (8ZAOPC01). DataSource: AOP Plan (8ZAOPC01). Source system: BW Production (BWPCLNT100).  The Data Targets tab is set up with 'Automatic loading of similar/identical requests from the cube', with Delete Existing Requests conditions: InfoSources are the same; selections are overlapping.
    6. Create index on the cube.
    7. Delete overlapping requests from the cube.  The process chain step is set up to 'Use Delete Selections for InfoCubes from the InfoPackages'.
    The issue is that step 7 runs for more than 5 hours.  That is way too long.  When we manually delete the requests, it takes 5 minutes.  How can I fine-tune this processing?  Thanks in advance!

    What is your dbms? Does it support partitioning? If it does then deleting requests merely drops the corresponding partitions, which should be very fast.
    If it does not support partitioning then you're running SQL delete statements to delete the data, which could take a long time if you're deleting tens of millions of records. BTW, how long does it take to load the same data?
    P.S. Just saw your last point about manually deleting the requests taking only 5 minutes. Did you delete one request at a time or multiple requests at once? The previous poster wanted to know whether you had background processes available to run the deletion job, i.e. did the deletion really take 5 hours, or did it wait a few hours for the job to start?
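    One quick way to see whether those 5 hours were real deletion work or mostly waiting for a free background process is to compare the actual start and end times of the deletion step's job. Below is a hedged sketch that reads the job overview table TBTCO; the job name pattern 'BI_PROCESS_%' is only a placeholder - take the exact job name of the deletion step from the process chain log or SM37:

    REPORT z_check_delete_job_runtime.

    * Hedged sketch: show actual start and end times of finished
    * background jobs so you can see how long each one really ran.
    DATA: lt_jobs TYPE STANDARD TABLE OF tbtco WITH HEADER LINE.

    * 'BI_PROCESS_%' is a placeholder pattern - use the real job name
    SELECT * FROM tbtco INTO TABLE lt_jobs
      WHERE jobname LIKE 'BI_PROCESS_%'
        AND status  = 'F'.

    LOOP AT lt_jobs.
    * Actual start date/time and end date/time of the job
      WRITE: / lt_jobs-jobname,
               lt_jobs-strtdate, lt_jobs-strttime,
               lt_jobs-enddate,  lt_jobs-endtime.
    ENDLOOP.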

  • Delete Overlap Requests

    Hi Gurus:
    I need your help, so I will try to keep it short for a clearer picture.
    I'd like to know the behaviour of delete overlapping requests.
    I have a basic InfoCube (ZMYCUBE) and I load it with a first request; for simplicity, say it is request 165 and it has 5 rows in the fact table.
    The next request (i.e. request 170) is essentially an identical request except for 1 row (i.e. 4 rows identical, 1 row different, total is still 5 rows).
    Within the InfoPackage with data target ZMYCUBE and InfoSource ZMYSOURCE, I try to use delete overlapping requests, because the ZMYSOURCE source has 20 keys, so an ODS cannot help me.
    Does anybody have experience with how to deal with this issue? Does "delete overlapping requests" really work on the content of the data packet, or will it delete the whole request 165 and upload all 5 records again?
    My goal is to find a solution that does not delete request 165, because ZMYCUBE is a simulation for a single request of 100,000 lines, so deleting that many lines is not cost effective.
    My expectation is for BW to add and transfer only 1 record, not 5.
    Points will be awarded for satisfactory answers ...
    Thank you

    Hi,
    Option "Deletion of over-lapping request'' simply deletes the entire Package from Cube based on the selections and it won't go thro' the content of the data. In your terms this option does the deletion of whole request 165 and upload the whole 5 records again.
    This option looks at the selection conditions specified for request 165 and then look at the selection conditions of request 170 and then look at deletion options specified, based on these options system would delete the Info package 165.
    Hope this helps.
    Bala

  • "Delete Overlapping Requests from InfoCube" in a Process Chain

    Dear all,
    I encountered a problem when building a process chain in BW 3.0.
    In the process chain, I schedule an InfoPackage daily to load data to an InfoCube and then delete the data previously loaded by the same InfoPackage from the InfoCube.
    For example, I have 2 InfoPackages, A and B, scheduled to load data to InfoCube C. In the process chain, I scheduled the following tasks daily, in sequence:
    - Delete index in InfoCube C
    - Load data through InfoPackage A to InfoCube C
    - Delete duplicated request previously loaded through InfoPackage A in InfoCube C
    - Load data through InfoPackage B to InfoCube C
    - Delete duplicated request previously loaded through InfoPackage B in InfoCube C
    - Create index in InfoCube C
    However, when I activate the process chain, a warning message is prompted: No type "Delete Overlapping Requests from InfoCube" process allowed in front of process "Execute InfoPackage" variant ZPAK_4GMFI
    Could anyone tell me why this warning message is prompted?
    Many thanks!
    Best regards,
    Marcus

    Hi all,
    Re Ranjan:
    I previously configured the process chain like this:
    (1)   Delete index in InfoCube C
    (2)   Load data through InfoPackage A to InfoCube C
    (3a) Delete duplicated request previously loaded through InfoPackage A in InfoCube C
    (3b) Load data through InfoPackage B to InfoCube C
    (4)   Delete duplicated request previously loaded through InfoPackage B in InfoCube C
    (5)   Create index in InfoCube C
    where (3a) and (3b) are executed simultaneously.
    However, (3b) often reports an error. I guess it may be due to the parallel run of (3a) and (3b), so that's why I set the process chain to be executed serially.
    Re Jacob:
    I found that the process chain can still be run even though a warning message is prompted when activating it!
    Thank you all for your kind support!!
    Best regards,
    Marcus

  • Delete Overlapping Request

    Hello Experts
    Please read my requirements.
    I have an InfoCube with a full-load DTP.  It gets the records directly from the PSA.  It's all in BI 7.x.  My InfoPackage is on a DataSource that is delta capable.  Due to a design flaw, I get the records directly from the PSA into the InfoCube without a DSO, and hence the duplicate records.
    What I did was: I initially ran the InfoPackage with "Initialize Delta with Data Transfer" and ran my DTP.  All the records are in my InfoCube.  So far so good.  Later I created a process chain, in this order:
    Run Delta InfoPackage
    Run DTP (Full all the time) - this time it gets the full as well as the delta records from the PSA.
    Delete Overlapping Requests
    What I observed was that, after a couple of days, some of the records still get duplicated and some do not.  Any ideas why?
    In the "Delete Overlapping Requests" process I selected the DTP while defining it, not the InfoPackage.  How does this process work with a delta InfoPackage?  Please explain.
    Thank you
    BW User 999

    Hi,
    You can avoid using Delete Overlapping Requests and load the data correctly to the cube via a full DTP.
    This works only if you don't need the previous days' data in the PSA.
    Every time the process chain is executed:
    1) After the start process delete the PSA.
    2) Load the data through the Infopackage.
    3) Load it to the cube with the Full DTP.
    This way you won't need to delete the overlapping request. Hope this helps...
    Regards,
    Mahesh.

  • Combine with similar request - divided values

    Dear All,
    I have one report using 'Combine with similar request' (union).
    TableA
    Name Amount SAR
    TableB
    Name Amt SAR
    Result Column:
    Name Salary
    My requirement: in the Salary result column I should divide by 100.
    When I click the fx button I am not able to see it in the result column.
    How can I divide by 100 and then multiply by another result column, SAR?
    Thanks
    Govind R

    Hi ,
    When you are using 'combine with similar request',
    under the result columns you will not have fx; only formatting options are available.
    If you want to perform any calculation, you have to do it on each criteria; then it will work.
    Thanks,
    Ananth

  • Filter with similar request

    Hi Gurus,
    What is the use of the 'filter with similar request' option, and in which scenarios would we use it?
    Thanks,

    Hi, David
    Yes, this is the same problem. But, as Obiee 1 Kenobi indicates, it is a problem recognized by Oracle (based on further research, it is not possible to use a union table to pass the filter to a target table; bug 6067587 has been raised as a fix for this, and the "fix by" version is 11.1.1.2).
    At the moment I solve the problem using filters with AND-OR.
    Thank you for your help.
    Nora

  • Sum lost after combine with similar request

    Hi experts
    I've used the 'combine with similar request' option to combine 2 reports that have exactly the same columns.
    The only difference between them is that they're using different filters.
    The combine action is done correctly, but the sum over the years is missing.
    When I run the 2 reports separately, the sum is shown for both of them, but when I combine them it's missing.
    Can anyone explain why??

    Hi,
    in the table view, go to the fx of the particular measure;
    below you will find the Aggregation Rule option - set it to Sum and try...
    And it would be nice if you assigned points for a correct answer.. ;)

  • Unable to delete DSO request.

    Hi Everyone,
    We have Real-time Data Acquisition (RDA) implemented in our system. Yesterday RDA failed due to invalid characters. We corrected the data at the PSA level and tried to run the repair process chain, but the request is not deleting, with the
    diagnosis message "Insertion of a data record into the active table failed. The data record already existed. This indicates manual changes to the database table".
    Could somebody help with how we can delete this request and load the corrected one? Any suggestion is highly appreciated.
    Regards,
    Shashidhar.

    Hi Everyone,
    Thanks a lot for your valuable suggestions.
    At last the request was deleted successfully. I have the following request-related information to share with all of you.
    Apart from the standard tables mentioned, viz. RSICCONT, RSMONICDP and RSODSACTREQ, there is another table, RSBKREQUEST. This table contains the details of DTP requests.
    Even after deleting from the other tables mentioned before, we had to set the status field of the table RSBKREQUEST to "DELETED". Ours was in "INCORRECT" status; this was the reason we were unable to run the repair process chain, even after the DTP request was removed from InfoProvider administration.
    So please note the above table, which is very important for DTP requests; the first 3 tables are enough only when no DTP is involved and only an InfoPackage is used to load the data target (3.x flow).
    Regards,
    Shashidhar.
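    If you hit the same situation, it may help to first check, read-only, what status the DTP request has in RSBKREQUEST before changing anything. The sketch below is only illustrative: the field names REQUID and USTATE are assumptions - verify the actual field names in SE11 for your release, and change request administration tables only following SAP notes or support advice.

    REPORT z_check_dtp_request_status.

    * Hedged sketch: display the status of one DTP request.
    * REQUID and USTATE are assumed field names of RSBKREQUEST -
    * check the table definition in SE11 before relying on this.
    PARAMETERS: p_requid TYPE rsbkrequest-requid.

    DATA: ls_req TYPE rsbkrequest.

    SELECT SINGLE * FROM rsbkrequest INTO ls_req
      WHERE requid = p_requid.

    IF sy-subrc = 0.
      WRITE: / 'Request', ls_req-requid, 'status:', ls_req-ustate.
    ELSE.
      WRITE: / 'Request not found in RSBKREQUEST.'.
    ENDIF.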

  • Delete Overlapping Requests - by Filename via ABAP Routine

    Hi SDN Community
    Do you know if it is possible to set the delete overlapping request parameters to recognise the file name, and to remove the request via a derivation of the file name in an ABAP routine?
    I am using an ABAP routine to derive the flat file name upon loading, but I do not know the syntax, or whether it is possible to put this equivalent code into the delete overlapping request routine area.
    (The code basically derives the first day of the calendar week, for previous weeks, in the DO n TIMES loop.)
    Thank you.
    Simon
    DATA: ld_cweek         TYPE scal-week,
          ld_date          TYPE sy-datum,
          ld_date1         TYPE sy-datum,
          lc_directory(30) TYPE c,
          ln_yyyy(4)       TYPE n,
          ln_ww(2)         TYPE n.

    * Derive the week from sy-datum
    ld_date = sy-datum.

    * Determine the calendar week from the entered calendar date
    CALL FUNCTION 'DATE_GET_WEEK'
      EXPORTING
        date         = ld_date
      IMPORTING
        week         = ld_cweek
      EXCEPTIONS
        date_invalid = 1
        OTHERS       = 2.

    * Get the first day of the week
    CALL FUNCTION 'WEEK_GET_FIRST_DAY'
      EXPORTING
        week         = ld_cweek
      IMPORTING
        date         = ld_date1
      EXCEPTIONS
        week_invalid = 1
        OTHERS       = 2.

    * Need to find the previous calendar week and reconvert to the first
    * day in order to accommodate weeks of less than 7 days.
    * Get the last day of the current calendar week - 2.
    DO 2 TIMES.
      ld_date1 = ld_date1 - 1.

    * Determine the calendar week from the last day of the previous week
      CALL FUNCTION 'DATE_GET_WEEK'
        EXPORTING
          date         = ld_date1
        IMPORTING
          week         = ld_cweek
        EXCEPTIONS
          date_invalid = 1
          OTHERS       = 2.

    * Get the first day of that week
      CALL FUNCTION 'WEEK_GET_FIRST_DAY'
        EXPORTING
          week         = ld_cweek
        IMPORTING
          date         = ld_date1
        EXCEPTIONS
          week_invalid = 1
          OTHERS       = 2.
    ENDDO.

    *ln_yyyy = ld_cweek(4).
    ln_yyyy = ld_date1(4).
    ln_ww   = ld_cweek+4(2).

    * lc_directory represents the path where the file is stored
    lc_directory = '/interfaces/EDW/data/CSM/'.
    CONCATENATE lc_directory
      ld_date '_WEEK' ln_ww '_c1_pri_' ln_yyyy '.csv' INTO p_filename.
    * Alternative filename pattern from the original post:
    * CONCATENATE lc_directory 'MIC_NT_' ld_date1 '_' ln_yyyy '.csv' INTO p_filename.
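    For reuse in both the flat-file InfoPackage routine and a request selection routine, the same derivation could be wrapped in a small FORM. This is only a hedged sketch of the logic above, using the same standard function modules; whether the delete overlapping request routine area on BW 3.5 accepts such code would still need to be verified:

    * Sketch: return the first day and the calendar week of the week
    * lying iv_weeks_back weeks before the given date.
    FORM get_prev_week_start USING    iv_date       TYPE sy-datum
                                      iv_weeks_back TYPE i
                             CHANGING cv_first_day  TYPE sy-datum
                                      cv_week       TYPE scal-week.

      CALL FUNCTION 'DATE_GET_WEEK'
        EXPORTING
          date         = iv_date
        IMPORTING
          week         = cv_week
        EXCEPTIONS
          date_invalid = 1
          OTHERS       = 2.

      CALL FUNCTION 'WEEK_GET_FIRST_DAY'
        EXPORTING
          week         = cv_week
        IMPORTING
          date         = cv_first_day
        EXCEPTIONS
          week_invalid = 1
          OTHERS       = 2.

    * Step back one day into the previous week and recalculate
      DO iv_weeks_back TIMES.
        cv_first_day = cv_first_day - 1.
        CALL FUNCTION 'DATE_GET_WEEK'
          EXPORTING
            date         = cv_first_day
          IMPORTING
            week         = cv_week
          EXCEPTIONS
            date_invalid = 1
            OTHERS       = 2.
        CALL FUNCTION 'WEEK_GET_FIRST_DAY'
          EXPORTING
            week         = cv_week
          IMPORTING
            date         = cv_first_day
          EXCEPTIONS
            week_invalid = 1
            OTHERS       = 2.
      ENDDO.
    ENDFORM.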

    Thank you for your response Debanshu
    However, I could not find this process type in the process chain area.
    Is this where you meant? Could you please give me more detailed steps, including the full technical names of the process types?
    We are on BW 3.50.
    I assumed the filename had to be constructed via ABAP, according to some of the SDN replies I've searched through.
    Thank you.
    Simon

  • Unable to delete the last request in full load infopackage

    Hi All,
    I have a full-load InfoPackage with many requests in green status, and the request generated for them is 0.
    Because of the last failed request I am unable to activate the DSO.
    I made the last failed request green and triggered activation, but it was not successful.
    I made it red and tried deleting it - no success. When I delete it, it gives a dump.
    Now I am also unable to delete the first request in green status; it is not deleting.
    Can I delete the whole data in the DSO and do a full load again? How do I check whether there are any other InfoPackages loading this DSO?
    All these issues came up when triggering the process chain... Please let me know your answers.
    Thanks,
    Venkat

    Hi Venkatesh,
    I tried deleting through RSODSACTREQ, but the delete option was disabled in the Table Entry menu.
    The last request is the failed one, with red status, and it did not transfer the full set of records.
    All the other requests, which are in green status, do not have the symbol generated for reporting purposes, and
    the request ID generated upon activation is zero.
    I tried deleting the first request; when I change its status to red, I get:
    QM action on PSA Z[DSONAME]: checked to see if automatic activation of the M version should be started;
    the M version is then activated if necessary.
    Req. 0001439786 in DataStore Z[DSONAME] must have QM
    status green before it is activated.
    Request 0001427251 is not completely activated.
    Please activate it again.
    But I am unable to find either of these requests under the PSA or in the administration data target tab... There are some other requests with red status in the PSA.
    When I try to delete the failed request it gives a dump.
    Thanks,
    Venkat.
