Cube index deletion

Hi experts,
I am facing an error (ABAP/4 processor: DBIF_RSQL_SQL_ERROR) in the Delete Data Target Contents step. People on SDN say it is a tablespace problem, but I contacted the Basis team and they say it is not a tablespace error.
My doubt is how this can happen in the Delete Data Target Contents step: the Delete Index step runs right before it, and when I go to Cube > Manage > Check/Delete DB Indexes it shows green. Is that correct, and is there any other way to check whether the indexes were really deleted successfully? Please give exact guidance.
Thanks,
Bharati.

Hi experts, my short dump message is:
FUNCTION DOCU_EXIST_CHECK_MASS.
*"----------------------------------------------------------------------
*"*"Local interface:
*"  IMPORTING
*"     VALUE(ID) LIKE DOKHL-ID
*"     VALUE(LANGU) LIKE DOKHL-LANGU DEFAULT SY-LANGU
*"     VALUE(TYP) LIKE DOKHL-TYP DEFAULT 'E'
*"     VALUE(SEC_LANGU) LIKE DOKHL-LANGU DEFAULT 'EN'
*"  CHANGING
*"     VALUE(DOCU_OBJECTS) TYPE DOCU_EXIST_T
*"----------------------------------------------------------------------

  DATA: WA_DOCU_OBJECTS TYPE DOCU_EXIST.

  LOOP AT DOCU_OBJECTS INTO WA_DOCU_OBJECTS.

>>> SELECT SINGLE * FROM DOKIL            " <<< statement where the dump occurred
      WHERE ID     = ID
        AND OBJECT = WA_DOCU_OBJECTS-OBJECT
        AND TYP    = TYP
        AND LANGU  = LANGU.

    IF SY-SUBRC = 0 AND DOKIL-DOKSTATE <> 'N'.
      WA_DOCU_OBJECTS-DOCU_EXIST = C_YES.
      MODIFY DOCU_OBJECTS FROM WA_DOCU_OBJECTS.
    ELSE.
      IF NOT SEC_LANGU IS INITIAL.
        SELECT SINGLE * FROM DOKIL
          WHERE ID     = ID
            AND OBJECT = WA_DOCU_OBJECTS-OBJECT
            AND TYP    = TYP
            AND LANGU  = SEC_LANGU.
        IF SY-SUBRC = 0 AND DOKIL-DOKSTATE <> 'N'.
          WA_DOCU_OBJECTS-DOCU_EXIST = C_YES.
          MODIFY DOCU_OBJECTS FROM WA_DOCU_OBJECTS.
        ENDIF.
Thanks,
Bharati.

Similar Messages

  • Got hung in the cube index deletion

    Hi Gurus,
    I have an issue where my cube indexes got hung.
    In the daily process chain monitoring, the chain failed at the index deletion step of a cube. I repeated the process and it failed again.
    So I went into the cube manage screen and analysed it: there was a red request in the cube from four days before. It was a file upload that had gone red, so I deleted the red request, which sat between green requests.
    Then I went back to the process chain and tried to repeat the cube's index deletion. Monitoring it in SM37, the drop-index job ran for a very long time. I killed the job and tried to delete the indexes manually from the Performance tab of the cube manage screen.
    Even that ran for quite a long time in dialog mode, so I stopped the transaction and scheduled the delta. Now the request has been yellow for days, for 3000 records.
    --> Checked with Basis: no locks or deadlocks were found at the SAP or DB level.
    --> No activity is being traced by the system on this cube; for any action, the jobs in SM37 show only two lines, so there is nothing to analyse.
    Please help me in this regard.
    I need the delta in the cube; the users are on my back.
    Regards,
    Vishwa.

    Hi,
    Try running RSRV and check whether there is any error in the cube.
    In BI 7.0, RSRV has many new options; run the combined test for that cube, which will tell you what the error is.
    Otherwise ask the Basis folks to try to create the indexes at the database level if required.
    Also check whether there is any open hub or InfoSpoke with a dependency on that cube.
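    If you prefer to rebuild the indexes from the BW side rather than directly on the database, a small ABAP sketch like the one below can be used. It assumes the standard BW function module RSDU_INFOCUBE_INDEXES_REPAIR exists in your release and takes the InfoCube name in I_INFOCUBE; ZIC_SALES is only a placeholder.

      REPORT zrepair_cube_indexes.

      " Sketch only: recreate the secondary indexes of one InfoCube.
      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
        EXPORTING
          i_infocube = 'ZIC_SALES'    " placeholder InfoCube name
        EXCEPTIONS
          OTHERS     = 1.
      IF sy-subrc <> 0.
        " If this fails as well, the problem is most likely on the database side.
        MESSAGE 'Index rebuild failed, check ST22 and the DB logs' TYPE 'I'.
      ENDIF.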
    Hope this helps for you.
    Thanks,
    Arun.

  • Index Deletion taking more time

    Dear BWers,
    I am facing a problem with the index deletion of some of the cubes (10 in total). Until two weeks ago, index deletion for all the cubes completed quickly. For the last two weeks, however, the index deletion process has been taking much longer, almost triple the time it used to take.
    Any idea why this could have happened, or any pointers to documents on improving the runtime of index deletion processes, will be highly appreciated and rewarded.
    Thanks in advance.
    Madhav

    Madhav,
       Make sure you have the required background processes available.
    One more thing... you can ask your DBA/Basis folks to look into it; they can tell you what is happening at the DB level.
    You can also try to analyze it from SM51.
    Nagesh Ganisetti.
    Assign points if it helps.

  • Cube content deletion is taking more time than usual.

    Hi Experts,
    We have a process chain which should ideally run every two hours. This chain has a "delete data target contents" step before the new data is loaded into the cube. The chain runs fine in one instance while another instance takes much longer, so the problem is quite intermittent.
    In this process chain we also delete the contents of the dimension tables (in the delete content step). We need your inputs to improve the performance of this step.
    Thanks & Regards
    Mayank Tyagi.

    Hi Mayank,
    You can delete the indexes of the cube before deleting its contents. The concept is the same as for data loading: writes run faster when the indexes are dropped (see the sketch below).
    If you have aggregates on this cube, the aggregates will also be adjusted.
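    If you want to drop the indexes from your own ABAP step in the chain instead of the standard "Delete Index" process type, a minimal sketch could look like the following. It assumes the standard BW function module RSDU_INFOCUBE_INDEXES_DROP is available in your release with an I_INFOCUBE import parameter; ZIC_SALES is only a placeholder cube name.

      REPORT zdrop_cube_indexes.

      " Sketch only: drop the cube's secondary indexes before deleting/loading data.
      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
        EXPORTING
          i_infocube = 'ZIC_SALES'    " placeholder InfoCube name
        EXCEPTIONS
          OTHERS     = 1.
      IF sy-subrc <> 0.
        " Stop the chain step if the drop did not work.
        MESSAGE 'Index drop failed, check the job log' TYPE 'E'.
      ENDIF.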
    Kind Regards,
    Ashutosh Singh

  • Cube Request Deletion - Other loads possible or LOCK problems

    Hi Folks,
    Can somebody tell me whether there could be a lock situation if, in one process chain, the cube request deletion step is running (cleaning out some requests) while at the same time another load is running into the cube? We would not drop/re-create the index in this load situation, so there is no lock from the index operation.
    Any experience?
    From what I have seen there shouldn't be a problem: the request deletion will be "scheduled" and simply executed once the lock is released (if there is one), and the deletion itself only places a quick lock to determine the request ID, so it does not impact other loads much.
    Is this all correct? Or any other views?
    Thanks,
    Axel

    Hi,
    While a load is running, it is not possible to:
    - Delete data
    - Archive data
    - Delete indexes and reconstruct them
    - Reconstruct statistics
    For more information:
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/bb/bdd69f856a67418962d74bfd7bd8af/frameset.htm
    Regards,
    Anil Kumar Sharma .P

  • Scheduling Cube Indexes

    On a cube can I schedule an index deletion and repair/creation?

    Hi Niten,
    You can do that by setting up a job from the Cube > Manage > Performance tab.
    It can also be done in process chains using the processes Delete Index and Generate Index under Data Target Administration.
    Hope this helps...

  • IP- Planning CUBE data deletion.

    Dear Experts,
    We have a planning-enabled cube; the data is loaded, distributed by reference using a planning function, and then the planning sequence is run on the cube.
    Unfortunately, due to an issue in CRM, we have to delete the data of the last two months from the cube and reload it.
    My doubt is: if I just delete the data requests and the planning requests, then reload and run my planning sequence, will it reprocess the data of the last two months again, or will there be an issue?
    With a normal cube it is quite simple: just delete all the requests from the cube and the DTP will take care of the rest.
    So please suggest if there is anything I need to take care of.
    Thanks and Regards
    Neel

    Hi,
    If you know what data you want to delete (the selection criteria are clear to you), as you specified the last two months, then I suggest you go with selective deletion.
    You can follow these steps for a correct deletion (a small program sketch follows the list):
    1) Check the data in the cube using LISTCUBE with the same selection criteria (in your case the last two months). This confirms that you know and have checked what you want to delete.
    2) Select the cube --> right click --> Change Real-Time Load Behavior.
    3) Select "Planning not allowed".
    4) Select the cube, right click, and choose Manage.
    5) Select the Contents tab and click the Delete Selection button.
    6) Specify the same selection you checked in LISTCUBE and delete the contents.
    7) Change the load behavior back to "Load not allowed".
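    The same selective deletion can also be scripted in a small ABAP program, which helps if it has to be repeated. The sketch below is only an illustration and assumes the standard function module RSDRD_SEL_DELETION (with I_DATATARGET, I_THX_SEL and C_T_MSG) and the selection types RSDRD_THX_SEL / RSDRD_SX_SEL / RSDRD_S_RANGE are available in your release; ZPLANCUBE and the 0CALMONTH interval are placeholders.

      REPORT zsel_delete_plancube.

      " Sketch only: selectively delete the last two months from the cube.
      DATA: l_thx_sel TYPE rsdrd_thx_sel,   " selections per InfoObject
            l_sx_sel  TYPE rsdrd_sx_sel,
            l_s_range TYPE rsdrd_s_range,
            l_t_msg   TYPE rs_t_msg.

      " Restrict 0CALMONTH to the two months that have to be reloaded.
      l_s_range-sign   = 'I'.
      l_s_range-option = 'BT'.
      l_s_range-low    = '201101'.          " placeholder from-month
      l_s_range-high   = '201102'.          " placeholder to-month
      l_s_range-keyfl  = 'X'.
      APPEND l_s_range TO l_sx_sel-t_range.
      l_sx_sel-iobjnm = '0CALMONTH'.
      INSERT l_sx_sel INTO TABLE l_thx_sel.

      CALL FUNCTION 'RSDRD_SEL_DELETION'
        EXPORTING
          i_datatarget      = 'ZPLANCUBE'   " placeholder cube name
          i_thx_sel         = l_thx_sel
          i_authority_check = ' '
          i_mode            = 'C'           " assumed: delete with commit
        CHANGING
          c_t_msg           = l_t_msg
        EXCEPTIONS
          OTHERS            = 1.

    Afterwards you can check with LISTCUBE, using the same selection, that the records are really gone.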

  • How to delete aggregated data in a cube without deleting the Aggregates?

    Hi Experts,
    How to delete aggregated data in a cube without deleting the Aggregates?
    Regards
    Alok Kashyap

    Hi,
    You can deactivate the aggregate: the data will be deleted but the structure (the definition) will remain.
    If you merely switch off an aggregate, it will not be used by the OLAP processor and reports will fetch the data directly from the cube. Switching off does not delete any data; the aggregate is only temporarily unavailable, as if it had not been built on the InfoCube, and it still contains the data from previous loads. You can temporarily switch off an aggregate to check whether you really need it; a switched-off aggregate is not used when a query is executed, but data can still be rolled up into it.
    If you deactivate an aggregate, the system deletes all of its data and database tables; only the definition of the aggregate remains.
    When you need that aggregate again later, it has to be activated and filled from scratch.
    Hope this helps.
    Thanks,
    JituK

  • Difference between cube indexes and DSO indexes

    Hi,
    1. Can anyone tell me the difference between cube indexes and DSO indexes?
    2. If we have aggregates on the cube they improve the performance of the queries created on it, so why create indexes on the cube?
    3. To create indexes on a DSO I right click the DSO and choose "Create Indexes"; it asks for two options, 1. unique key and 2. non-unique. What is the functionality of these two?
    I will assign points if your answers clear my questions.

    The BW automatically defines multiple indexes on your cubes, based on the dimensions you have defined. You do not need to create any additional indexes on your fact tables. It can sometimes be helpful to create a secondary index on dimension tables or master data tables, depending on their size and the queries. There is no BW workbench tool to do this; it usually requires a DBA in most shops.
    Secondary indexes on a DSO/ODS can help some queries substantially, again depending on the data and the queries. You can define secondary indexes on a DSO/ODS from the BW workbench.
    Aggregates are another tool for query performance: by summarizing the data, they can reduce the number of rows that must be read by a query. Again, how much an aggregate helps depends on the query and the data.

  • Creating cube index in process chain

    Hi,
    From my previous post I realized that we build the index of the cube first and then delete the overlapping request from the cube.
    (http://help.sap.com/saphelp_nw04/helpdata/en/d5/e80d3dbd82f72ce10000000a114084/frameset.htm)
    If I design a process chain in which I delete the overlapping request first and then build the index, the checking view does not give me any error.
    The process chain also works fine.
    How does it hurt to have it this way, and what is the concept behind the sequence recommended by SAP?
    Thanks,
    sam

    Hi Sam,
      Write performance is better when there is no index, so we delete the index before updating data.
    Read performance is better when there is an index, so we create the index afterwards so that queries execute quickly when reading the data in the InfoCube.
    BI best practice suggests that old requests be deleted before loading the new one, so we delete the overlapping requests.
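    As a rough illustration of that sequence in ABAP terms, here is a sketch. It assumes the standard index function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR with an I_INFOCUBE parameter; ZIC_SALES is a placeholder cube name.

      REPORT zindex_sequence_sketch.

      " 1. Drop the secondary indexes so that the write goes faster.
      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
        EXPORTING
          i_infocube = 'ZIC_SALES'
        EXCEPTIONS
          OTHERS     = 1.

      " 2. Delete the overlapping request and run the InfoPackage/DTP load here.

      " 3. Recreate the indexes so that queries read the new data quickly.
      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
        EXPORTING
          i_infocube = 'ZIC_SALES'
        EXCEPTIONS
          OTHERS     = 1.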
    Hope it helps.

  • Cube indexation

    Hello,
    I have noticed that the parts of my process chain that take the longest are the deletion of the index (first sub-process) and the generation of a new index (last sub-process).
    Sometimes the deletion or generation of the index takes so long that the next scheduled load collides with the previous one and causes a load error.
    What can I do to speed up the process chain (mainly the indexing parts)?
    Thanks,
    Fredrik

    Hello again,
    Since the number of records per data load is very small compared to the total amount of data in the cube, I would like to skip the indexing.
    However, when modifying the process chain I am forced to keep the 'create index' sub-process; I could, however, delete the 'delete index' sub-process.
    According to the system the chain is OK like this:
    1. starter
    2. load InfoPackage 1
    3. load InfoPackage 2
    4. create index
    Why is it not possible to delete step 4?
    Best regards,
    Fredrik

  • Non Cumulative Cube selective deletion and Reload

    Hi,
    We have a typical scenario where the company code displays # when we execute the inventory reports at plant level.
    Upon analysis we found that the plant-to-company-code mapping was not maintained in R/3. This has now been fixed, so we are planning to do a selective reload for that particular plant alone.
    Keeping in mind the loading sequence/scenario for inventory, can anyone advise whether we need to do the stock initialisation again for this plant and reload the data after the selective deletion?
    Or can we load directly from the material movement DataSource? Will there be any impact, e.g. on the marker update etc.?
    Note: We already have two years of data in this inventory cube.
    The cube has aggregates and compression.
    Thanks
    Ramesh

    Any inputs please
    Ramesh

  • Info cube data deletion fact only

    When we want to delete the data in the info cube, we get an option dialog box to choose
    Do you want to delete only the contents of the fact table
    in InfoCube IC_SDGR1 or do you
    want to delete the dimension tables as well?
    1)fact table only
    2)fact table and dimension table.
    Could anyone tell me when we would use the "fact table only" option? Has anyone encountered this scenario?
    Because generally we use the "fact table and dimension table" option for deletion.
    Thanks for the replies

    As long as you haven't changed anything in the dimensions, you can leave the data in the dimension tables when deleting; your reload will go a lot faster, as fewer (or no) dimension entries have to be created. On the other hand, if you have made a change to your cube and transport it to production and need to reload, then you need to delete the entries in the dimension tables as well, since you have new keys in the structures and all the old entries are useless...
    M.

  • How to update a Cube with the deletion of a record in ECC

    Hi
      I am extracting data from ECC using a ZZ DataSource based on a view.
    Data flow in BW:
    ECC (ZZ extractor) --> DSO1 (Full) --> DSO2 (Delta) --> Cube (Delta)
    Scenario:
    Let's say today I loaded records A, B, C and D into the cube.
    In ECC, records C and D are then deleted.
    How can I replicate those deletions in the cube?
    Thanks

    If you do a complete full load from R/3 to your DSO1 (assuming it is not write-optimized), then with the activation of the DSO request the generated delta will contain the deletion of the two records (i.e. all key figures set to 0), so your next delta will be correct. If you actually want to delete the lines from the cube, I think you have no option other than to delete the cube data completely and do only full loads from R/3 to the cube.
    M.

  • Dimension tables not deleted when cube data deleted

    We are using a process chain to delete the entire contents of the cube prior to the next load. Over the course of time, repeating this every day, we see that the RSRV check tells us that "MAX % ENTRIES IN DIMS COMPARED TO F-TABLE" = 4718%!! How can this be if the data is deleted from all related tables each day? The maximum could then only be 100%, correct?
    Please advise. I could not find an OSS note saying that the dimensions are not deleted as well by that step in a process chain.
    Thanks, Peggy

    Hi,
    We fixed it by creating our own ABAP code with the following call:
      DATA: tp_repair_possible TYPE rs_bool,   " set if a repair is possible
            it_msg             TYPE rs_t_msg.  " message log of the check/repair

      CALL FUNCTION 'RSDRD_DIM_REMOVE_UNUSED'
        EXPORTING
          i_infocube        = 'YOUR_ICUBE'     " placeholder: your InfoCube name
          " optional: i_t_dime (restrict to given dimensions), i_check_only ('X' = check only)
        IMPORTING
          e_repair_possible = tp_repair_possible
        CHANGING
          c_t_msg           = it_msg
        EXCEPTIONS
          x_message         = 1
          OTHERS            = 2.
    (inserting code in the new SDN frontend is a bit of a nightmare...)
    and included this ABAP in the chain.
    You can also go via RSRV and do that manually.
    I don't recommend deleting the whole dimension tables, since the loading time will be drastically higher if your DIMs are empty...
    The above works for aggregates as well...
    hope this helps
    Olivier.
    Edited by: Olivier Cora on Jan 15, 2008 5:41 PM
