Cube Compression - How it Affects Loading With Delete Overlapping Request

Hi guys,
Good day to all !!!
Our scenario is that we have a process chain that loads data to an InfoCube and includes a delete overlapping requests step. I just want to ask how cube compression affects loading with delete overlapping requests. Is there any conflict or error that could be raised? Kindly advise.
Marshanlou

Hi,
In the scenario you have mentioned:
First the InfoCube is loaded.
Next, in the delete overlapping requests step, the system checks whether the new request overlaps an existing one (with the same date, or according to the overlapping condition defined in the InfoPackage, if the data has already been loaded).
Only if the request is overlapping does it delete the older request; otherwise no action is taken. In this way it ensures that data is not loaded twice, which would result in duplication.
It has nothing to do with compression and in no way affects compression or loading.
Sasi

Similar Messages

  • Problems with delete overlapping requests from InfoCube in PC

    Hi guys,
    I'm using delete overlapping requests from InfoCube in process chains, but I'm not able to adjust it to my specific requirement.
    For example:
    I execute DTP to load InfoCube XPTO with Fiscal Year 2008 and 2009. After this I have to load again, but only for 2009.
    In this specific example I want my process chain to delete the 2009 data from my first load, because it is overlapped, and leave 2008 data.
    Is this possible? If yes how?
    Thanks in advance
    Jão Arvanas

    It will not work that way.
    It checks whether the selections are the same; only then does it delete the request, otherwise it does nothing.
    The overlapping settings you may have chosen are based on deleting overlapping requests with the same selections.
    So in this case the selections are different, and hence it is not possible.
    Thanks
    Murali

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube is receiving the delta load from InfoSource 1 and the full load from InfoSource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management screen of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so the aggregates should be compressed only when the cube data is compressed (but I don't know whether this influences how the roll-up is done via the Adjust process type in the process chain - will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?).
    Nevertheless, let's assume here that the aggregates will not be compressed until compression runs on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    In case you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by request ID (yes, I know that 10 is too many...).
    My question is:
    What will happen in the Delete Overlapping Requests step of the next process chain run, once a new full request with RNR 111 has already been loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way as deletion of similar requests (based on the InfoPackage setup) does when running on non-compressed requests, shouldn't it? Since the newest 10 requests are not compressed and the only overlapping request is the full load with RNR 111, I assume it should simply delete the RNR 110 data from the aggregate by request ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates - am I right? Please CONFIRM or DENY. Thanks! If Delete Overlapping Requests would still lead to rebuilding of the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear.
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The "Compress After Roll-up" option applies to the aggregates of the cube, not to the cube itself.
    When it is selected, the aggregates are compressed only once roll-up has been done on them; this does not affect compression of the cube itself, i.e. moving the data from the F to the E fact table.
    If it is deselected, that also does not affect compression of the cube, but then the system does not check the roll-up status of the aggregates before compressing them.
    Will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?
    The checkbox has no influence even on the manual start of roll-up, i.e. compression of the aggregates will not start automatically after roll-up; it is done together with the compression of the cube itself.
    As for the second question: I guess the aggregates will be deactivated when deleting an overlapping request if that particular request has already been rolled up.
    The same happens with manual deletion: if you need to delete a request that has been rolled up and the aggregates are compressed, you have to deactivate the aggregates and refill them.
    In other words, as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request and can be deleted without deactivating the aggregates.
    So in your case I guess there is no need to remove the step from the chain.
    Correct me if you find any issue.
    Regards,

  • Proc Chain - Delete Overlapping Requests fails with aggregates

    BW Forum,
    Our weekly/daily load process chain loads several full (not delta) transaction infopackages. Those infopackages are intended to replace prior full loads and are then rolled up into aggregates on the cubes.
    The problem is the process chains fail to delete the overlapping requests. I manually have to remove the aggregates, remove the infopackages, then rebuild the aggregates. It seems that the delete overlapping request fails due to the aggregates or a missing index on the aggregates, but I'm not certain. The lengthy job log contains many references to the aggregate prior to it failing with the below messages.
    11/06/2004 13:47:53 SQL-END: 11/06/2004 13:47:53 00:00:00                                                 DBMAN        99
    11/06/2004 13:47:53     SQL-ERROR: 1,418 ORA-01418: specified index does not exist                        DBMAN        99
    11/06/2004 13:47:59 ABAP/4 processor: RAISE_EXCEPTION                                                       00        671
    11/06/2004 13:47:59 Job cancelled                                                                           00        518
    The raise_exception is a short dump with Exception condition "OBJECT_NOT_FOUND" raised.
    The termination occurred in the ABAP program "SAPLRRBA " in
    "RRBA_NUMBER_GET_BW".                                    
    The main program was "RSPROCESS ".                        
    I've looked for OSS notes. I've tried to find a process to delete aggregates prior to loading/deletion of overlapping requests. In the end, I've had to manually intervene each time we execute the process chain, so I've got to resolve the issue.
    Do others have this problem? Are the aggregates supposed to be deleted prior to loading full packages which will require deletion of overlapping requests? I presume not since there doesn't seem to be a process for this. Am I missing something?
    We're using BW 3.3 SP 15 on Oracle 9.2.0.3.
    Thanks for your time and consideration!
    Doug Maltby

    Are the aggregates compressed after the roll-up? If you compress the aggregate completely, the request you are trying to delete is no longer identifiable once it is in the compressed E fact table (since compression throws away the request ID).
    So you need to change the aggregate so that the most recent requests remain in the uncompressed F fact table. Then the request deletion should work.
    I thought that if the aggregate was fully compressed and you then wanted to delete a request, the system was supposed to recognize that the request was unavailable due to compression and automatically refill the aggregate - but I'm not sure where I read that. Maybe it was a note, maybe it doesn't happen in a process chain; I'm just not sure.
    The better solution, when you regularly back out a request, is simply not to fully compress the aggregate, letting it follow the compression of the base cube, which I assume you have set to compress requests older than XX days.
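    To make the F/E fact table mechanics a bit more concrete, here is a minimal sketch. The cube name ZSALES (and therefore the generated table and field names) is only a placeholder, and aggregates use their own generated technical names, but the pattern is the same: uncompressed rows sit in the F fact table with a package-dimension key that still resolves to a request, while compression moves them to the E fact table with package-dimension key 0, so there is nothing left that can be deleted by request ID.

    REPORT z_show_compression_effect.          " program name is a placeholder

    DATA: l_ftab  TYPE string,
          l_etab  TYPE string,
          l_where TYPE string,
          l_count TYPE i.

    l_ftab  = '/BIC/FZSALES'.                  " uncompressed F fact table
    l_etab  = '/BIC/EZSALES'.                  " compressed E fact table
    l_where = 'KEY_ZSALESP <> 0'.              " package dimension still points to a request

    " Rows that still reference a package/request - deletable by request ID
    SELECT COUNT( * ) FROM (l_ftab) INTO l_count WHERE (l_where).
    WRITE: / 'F fact table rows still tied to a request:', l_count.

    " After compression the package dimension key is 0 - the request reference is gone
    SELECT COUNT( * ) FROM (l_etab) INTO l_count.
    WRITE: / 'E fact table rows (request reference discarded):', l_count.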

  • "Full Load" and "Full Load with Repair Full Request"

    Hello Experts,
    Can anybody share with me the difference between a "Full Load" and a "Full Load with Repair Full Request"?
    Regards.

    Hi,
    What is the function of a full repair? What does it do?
    How do I delete the init from the scheduler? I don't see any option like that in the InfoPackage.
    For both of your questions there is OSS Note 739863 - Repairing data in BW. Read the following:
    Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    Other terms
    Restore data, repair data
    Reason and Prerequisites
    There may be a number of reasons for this problem: Errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customer's posting procedure (for example, a change in the extract structure during production operation if the DeltaQueue was not yet empty; postings before the Delta Init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first test it by loading a Probe-Full request into the PSA table and checking whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill it from the ODS object again after the repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look first at the selections in the InfoPackage exactly before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets.
    Also search the forum; you will find discussions on this:
    Full repair loads
    Regarding Repair Full Request
    Instead of doing all these steps, can't I just reload that failed request again?
    If something goes wrong with delta loads, it is usually better to re-init - that is, delete the init flag, do a full repair, and so on. If the target is an InfoCube, you can also go for a full update instead of a full repair.
    Full Upload:
    In a full upload all the data records are fetched. It is similar to a full repair. In the case of an InfoCube you can run a full upload to recover missed delta records, but an ODS object does not support full upload and delta upload in parallel, so in that case you have to go for a full repair; otherwise the delta mechanism will get corrupted.
    Suppose your ODS activation is failing because there is a full upload request in the target; then you can convert the full upload to a full repair using the program RSSM_SET_REPAIR_FULL_FLAG.
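    If you would rather trigger that conversion from a small program of your own than start it manually in SE38, a minimal sketch could look like this (the wrapper name Z_CONVERT_TO_REPAIR_FULL is a placeholder; the standard program's own selection screen prompts for the request to be flagged):

    REPORT z_convert_to_repair_full.
    " Hand over to the standard conversion program and come back afterwards;
    " its selection screen asks for the request to be flagged as repair full.
    SUBMIT rssm_set_repair_full_flag VIA SELECTION-SCREEN AND RETURN.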
    Hope this helps.......
    Thanks==points as per SDN.
    Regards,
    Debjani.....

  • Process Type - Delete overlapping requests from an Info Cube

    I have read many threads on this topic, but I still have some questions. To give you some background: I will be doing a delta load ("delta" process chain), but before the delta I want to do a setup initialization ("init" process chain). I want to use this process type in my "init" process chain to delete all data in my cube that was loaded from a specific ODS. I cannot use the delete-all-contents process type because many ODS objects feed the same cube.
    1. Do you put this process type after the Load Data step or before it? I have seen both cases in SDN forums.
    2. The variant of this process type is an InfoPackage. Does this InfoPackage have to be the same as in the Load Data step, or can it be different?
    3. I want to delete all data in the cube which was loaded from my ODS. The term "overlapping" is confusing to me. Will this process type delete all the data or not?
    Thanks in advance.

    Yes, you can do that. Go to RSPC; under the process types Load Process and Post-Processing you will see the option Delete Overlapping Requests from InfoCube.
    Select the proper checkboxes, such as same source system, same DataSource, etc.
    If it is a one-time deletion, why not do it manually?
    Also check: How to delete the most recent request in a cube by using a process chain
    Hope it helps.

  • How does Delete overlapping requests work?

    We have an extract that does a full load into a cube for the current year, and we run this daily. In the process chain we do a delete overlapping requests step, which I understand will delete the previous requests with the same condition. What happens when the year changes? On Dec 31 we will load 2008 data, while the next time it runs, on Jan 2, it will load only 2009 data (the variant only looks at the current year). Will the deletion of overlapping requests delete all the 2008 data it loaded on Dec 31? Thanks.

    What is removed depends on the settings of your delete request.
    You can use only the DTP name, or the DTP name with the same selection parameters, or with the same selection parameters within a certain amount of time.
    Take a look at the settings; I found them quite self-explanatory.

  • Delete overlapping requests of same DTP from cube

    Dear All,
    I want to delete overlapping requests of the same DTP in a cube, but while I am searching for the DTP in the process variant, the system is not showing the DTP (it is active).
    Do you have any suggestions?
    Br,
    Vamshi.

    Hey Vamshi,
    Is your DTP a full one or a delta one? Delete Overlapping Requests is not possible for delta loads.
    Under Load Process and Post-Processing you will find Delete Overlapping Requests from InfoCube; drag and drop it and make the following settings:
    Object Type = DTP
    Object Name = DTP name
    Check "Edit All InfoCubes with the Following Delete Selections" and click on Deletion Selection.
    Check "Delete Existing Requests" and "Only Delete Requests from the Same DTP".
    Activate and execute the process chain.
    Hope this helps!
    Sheen

  • Delete overlapping requests from cube not working in process chain

    In a process chain, the 'delete overlapping requests from the cube' step is used.
    Before this step, a DTP step runs with a full update to load the cube. This process chain is scheduled every day.
    The issue is that the process chain failed at the DTP step, and after correcting and repeating, the step got executed.
    However, the next step after the DTP, 'delete overlapping requests from the cube', gets executed but without deleting the previous day's request.
    In the step details, the message 'No request for deletion were found' can be seen.
    Then the next day, when the DTP step executes without any problem, the 'delete overlapping requests from cube' step is successful and the previous requests are deleted from the cube.
    The deletion selections in the 'delete overlapping request from InfoCube' step are:
    Delete existing requests
    Conditions:
    Only delete requests from the same DTP
    Selections:
    Same or more comprehensive
    Because of this issue, on that particular day the presence of two days' requests causes the data to be aggregated and shown as double in the reports.
    Please help.

    Hi Archana,
    When you delete the bad request from the target, and before repeating your DTP in the process chain, make sure the bad request is also deleted from table RSBKREQUEST.
    If you find the same request in the table, first delete it from the table and then repeat the DTP in the process chain.
    Now the delete overlapping step should work.
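    A minimal way to check this (a sketch: the request number is a placeholder, and the key field name REQUID of RSBKREQUEST is an assumption):

    REPORT z_check_rsbkrequest.                " program name is a placeholder

    DATA: l_requid TYPE rsbkrequest-requid,
          l_check  TYPE rsbkrequest-requid.

    l_check = '123456'.                        " placeholder: ID of the deleted DTP request

    SELECT SINGLE requid FROM rsbkrequest
      INTO l_requid
      WHERE requid = l_check.
    IF sy-subrc = 0.
      WRITE: / 'Request still listed in RSBKREQUEST - clean it up before repeating the DTP.'.
    ELSE.
      WRITE: / 'Request is gone from RSBKREQUEST - repeat the DTP in the process chain.'.
    ENDIF.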
    As this is not a permanent solution, please also raise an OSS message with SAP.
    Regards,
    Venkatesh

  • Delete overlapping requests from info cube

    Hello
    I am sometimes getting an issue where the process chain fails to delete a request from the cube based on the conditions I have given in this process type. I have been digging into this but cannot find much on why this would happen.
    Can someone please tell me if this is a known issue? Also, can someone give me some details on which program gets generated when we pull this process type into a process chain? What I mean is: what program runs for this process type, as I want to debug it and see how it deletes the request.
    thanks

    Check the class CL_RSSM_REQUDEL and the method IF_RSPC_EXECUTE~EXECUTE.

  • Delete overlapping requests in Cube

    Dear all,
    I need to delete the requests uploaded on the previous day when transferring a new request into the cube. The system is BW 7.0, but I use a 3.5 DataSource and InfoSource.
    The issue is that when I

    Tianli,
    You can delete the overlapping request in the InfoCube.
    In the InfoPackage -> Data Targets -> Automatic Loading of Similar/Identical Requests, you can set the parameter for deleting the overlapping request.
    If you are using process chains, there is a process type available: Load Process and Post-Processing -> Delete Overlapping Requests from InfoCube.
    Can you post your question again, as I see it is incomplete?
    Sasi

  • Delete overlapping requests doesn't work with delta

    Hello
    I need to delete the previous request delivered by a delta DTP. The process type "Delete overlapping requests from cube" does not allow me to select a delta DTP.
    Is it only for full-load DTPs?
    Alex

    Hi ,
    Delete Overlapping Requests from cube works only for full loads.
    If you have any further queries regarding DTP functionality, please refer to this article:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/10339304-7ce5-2c10-f692-fbcd25915c36
    -Vikram

  • Process type "Delete overlapping request in cube" in process chain

    Hi
    Can somebody suggest where I need to place the process type "Delete overlapping requests" for an InfoCube?
    Regards,

    Hi,
    When you use the corresponding process in the process chain maintenance screens to specify the conditions for automatic deletion, these conditions are applied to all the InfoCubes for the selected InfoPackages.
    Overlapping: If you set this indicator, existing requests are also deleted from the InfoCube if the selection criteria of the new request partially or wholly overlap the selection criteria of the request to be deleted.
    Same or Comprehensive: If you set this indicator, requests are only deleted from the InfoCube if the selection conditions of the new request are the same as or more comprehensive than the selection conditions of the request to be deleted.
    For example, if yesterday's request was loaded for 01.2009-06.2009 and today's request covers only 03.2009-06.2009, "Overlapping" would delete yesterday's request, while "Same or Comprehensive" would keep it, because today's selection is narrower.
    Thanks
    Bhagesh

  • Delete Overlapping requests question

    Hi ,
    How would a Delete Overlapping Requests (DOR) process type work in a process chain for a cube that gets a full load every day with selections for a rolling 12 months?
    Say there is a scenario to load data into an AR line items cube to update the open-to-closed items history, and the loads are configured for a rolling 12 months, e.g. yesterday's full load would be for 03/23/09 to 03/23/10,
    and today's for 03/24/09 to 03/24/10.
    Now how can we utilize DOR in a process chain so that it deletes the doubled data for the dates 03/24/09 to 03/23/10?
    All thoughts appreciated.
    Thanks

    Hi,
    In the selection criteria of the InfoPackage you can use the following routine, which sets the selection to the first through last day of the month based on the previous day's date (SY-DATUM - 1):
    " InfoPackage selection routine (body). The internal table l_t_range and its
    " header line are provided by the generated routine frame. ZFNC_FIRSTDAY_MONTH
    " and ZFNC_LASTDAY_MONTH are custom function modules that return the first and
    " last day of the month for a given date.
    DATA: l_idx  LIKE sy-tabix,
          ddate1 LIKE sy-datum,
          ddate  LIKE sy-datum,
          l_low  LIKE rssdlrange-low,
          l_high LIKE rssdlrange-high.

    " Locate the selection row for the date field
    READ TABLE l_t_range WITH KEY fieldname = 'END_SHIFTDATE'.
    CHECK sy-subrc = 0.
    l_idx = sy-tabix.

    " Base date = yesterday
    ddate1 = sy-datum - 1.

    " Lower limit: first day of that month
    CALL FUNCTION 'ZFNC_FIRSTDAY_MONTH'
      EXPORTING
        ddate      = ddate1
      IMPORTING
        dlast_date = ddate.
    l_low            = ddate.
    l_t_range-low    = l_low.
    l_t_range-sign   = 'I'.
    l_t_range-option = 'BT'.

    " Upper limit: last day of that month
    CLEAR ddate.
    CALL FUNCTION 'ZFNC_LASTDAY_MONTH'
      EXPORTING
        ddate      = ddate1
      IMPORTING
        dlast_date = ddate.
    l_high         = ddate.
    l_t_range-high = l_high.

    " Write the adjusted selection back to the range table
    MODIFY l_t_range INDEX l_idx.
    The DOR step then deletes the previous day's request, since the selection criteria are the same. On the first day of the next month there is a request whose selection covers the first to last day of the previous month; this request is not deleted, since on the 2nd of the month the selection criteria are different (the start and end day of the new month).
    Hope it's clear.
    Thanks,
    Sandeep

  • Process Chain Help - Delete Overlapping requests

    Dear Experts,
    I have a requirement where I want to delete the previous day's request from the cube. This applies only to the data coming from one particular DSO.
    I can use the Delete Overlapping Requests process type, but I want to know how.
    Also, this has to be done only for the current month. I want to delete the overlapping requests of the current month, i.e. since I am in April I want to delete only the April requests. When I am in May, I don't want to delete April requests anymore.
    In other words, I want to keep deleting the previous April request until April 30th. On May 1st I don't want to delete the April 30th request; on May 1st it shouldn't delete anything. On May 2nd it should delete the May 1st request.
    Can anyone help me with this?
    Thanks,
    KK

    Hi KK,
    If I have understood you correctly, your cube is being loaded from various DataSources and you want to delete the requests only for one particular DataSource and not for the others.
    Please correct me if I am wrong.
    If I am right: on the window "Delete Request from InfoCube after Update" you can choose "Delete Existing Requests -> Is Current Month", and at the bottom of that screen you can see a checkbox for Request Selection Through Routine. Check this and you can write a routine that restricts the deletion to requests loaded from a particular DataSource, as sketched below.
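    A rough sketch of the idea behind such a routine is shown below. The form name, the table parameter and the component name RNR are hypothetical placeholders (the real frame is generated for you when you tick the checkbox), and '8ZMY_DSO' stands for the export DataSource of the DSO in question; the point is simply to drop from the deletion list every request that was not loaded from that DataSource, using the request-to-DataSource assignment in table RSSELDONE:

    " Hypothetical routine frame - the real one is generated by the InfoPackage.
    FORM restrict_deletion_to_one_dso
      CHANGING p_t_request TYPE STANDARD TABLE " requests proposed for deletion (assumed)
               p_subrc     LIKE sy-subrc.

      FIELD-SYMBOLS: <ls_request> TYPE any,
                     <l_rnr>      TYPE any.
      DATA l_source TYPE rsseldone-oltpsource.

      LOOP AT p_t_request ASSIGNING <ls_request>.
        " Request number component of the deletion-list row (assumed name RNR)
        ASSIGN COMPONENT 'RNR' OF STRUCTURE <ls_request> TO <l_rnr>.
        IF sy-subrc <> 0.
          CONTINUE.
        ENDIF.
        " Which DataSource loaded this request?
        SELECT SINGLE oltpsource FROM rsseldone
          INTO l_source
          WHERE rnr = <l_rnr>.
        " Keep only requests that came from the DSO's export DataSource
        IF sy-subrc <> 0 OR l_source <> '8ZMY_DSO'.
          DELETE p_t_request.
        ENDIF.
      ENDLOOP.

      p_subrc = 0.
    ENDFORM.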
    Hope it helps.
    Regards
    Hemant Khemani
