Authorization performance with a million values

Hi,
we have a performance problem - query run time > 1 hour!
The problem arises because some of our customers have a status 'Protected'. This means that nobody may display this customer in a query.
As Authorizations can only be defined as inclusive, this results in a list of about a million customers which can be displayed.
Reading the customer master data table is actually quite quick thanks to an additional index. But it seems that handling about a million authorized single values slows SAP BI down quite considerably.
Has anybody got any ideas?

Hi A R,
Go to RSRT --> enter your query name --> Execute + Debug
--> a pop-up will come with many checkboxes; select the "Display Aggregates Found" option --> now enter your
selections on the variable screen --> first it will give the names of the already existing aggregates --> continue --> after displaying all the aggregates it will display the list of objects involved, cube by cube --> copy these objects into Notepad --> now go through your drill-downs; you will again get the existing aggregates for each drill-down --> it will display the list of objects --> copy them to Notepad too --> now sort all the objects belonging to one cube, deleting duplicate objects in Notepad --> go to that InfoCube --> context menu --> Maintain Aggregates --> create an aggregate on the objects you copied into Notepad.
Now try to execute the report... it should work properly without delays for those selections.
I hope it helps you...
Regards,
Ramki.

Similar Messages

  • Authorization Issue with Custom Pending Value Object and Anonymous Users

    Hi,
    I am just converting my demo from version 7.1 to 7.2; I am not doing an upgrade. The demo uses a custom pending value object USER_REQUEST. The idea is that a new employee goes to the Java AS as an anonymous user and enters her details and the store where she will work. After submitting the request there is an approval process using the custom entry type USER_REQUEST. If the request is approved, IdM converts USER_REQUEST into an MX_PERSON entry. This works nicely in 7.1, but I am having problems replicating it in 7.2. I created a new UI task accessible by anonymous users that creates a new USER_REQUEST entry. I also assigned the role idm.anonymous, with UME action idm_anonymous, to the UME built-in group Anonymous Users.
    My problem is with the field STORE. This field is a reference field to another custom entry type, STORE (this entry type will be used in context-based assignment). Every new employee must select a store where she will work. The problem comes when the user clicks the "Select" button: Web Dynpro terminates and returns an authorization error. I also tested this with entry type MX_ROLE; I added the attribute MXREF_MX_ROLE and got the same issue. So it seems that just assigning the UME action idm_anonymous is not enough to list objects from the identity store. I found a workaround for this issue: when I also assign the UME action idm_authenticated to Anonymous Users, it does not dump and I get a pop-up window where I can search for the store. But it does not seem right to assign idm_authenticated to anonymous users.
    Another issue is with the display task for entry type USER_REQUEST. I assigned a display task to entry STORE and set that Anonymous has access to this task on the Access Control tab. I assigned a default value to the field STORE. So when a user opens the page she can see a hyperlink to display the already assigned store. When the user clicks this hyperlink it opens a new pop-up window and the user must authenticate against the Java AS. After successful authentication the display task for entry STORE is displayed. I would assume that an anonymous user could display it without authentication.
    So to me it seems that the authorization checks have changed in version 7.2 and are stricter for anonymous tasks. Hence my question: how can I implement my scenario? Am I missing some configuration, or what is the proper solution to my two issues? I don't count assigning idm_authenticated to Anonymous Users as a solution; that workaround does not solve my second issue.
    Thanks

    Some of the folks from the Trondheim labs check in here, but rather infrequently. There's another person, who I guess is in consulting, that also checks from time to time.
    Sorry I can't help you with your main question...
    Matt

  • SCD 2 load performance with 60 million records

    Hey guys!
    I'm wondering what the load performance would be for a type 2 SCD mapping based on the framework presented in the transformation guide (pages A1-A20). The dimension has the following characteristics:
    60 million records
    50 columns (including 17 to be tracked for changes)
    Has anyone come across a similar case?
    Mark or Igor - is there any benchmark available on SCD 2 for large dimensions?
    Any help would be greatly appreciated.
    Thanks,
    Rene

    Rene,
    It's really very difficult to guesstimate the loading time for a similar configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
    On the other hand, 50-60 million records is not that many these days... so I personally would consider anything more than several hours (on half-decent hardware) as too long.
    Regards:
    Igor

  • Two authorizations objects with OR function instead of AND

    Hi,
    We have created two authorization (RSECADMIN) objects for a CRM InfoProvider:
    Organizational responsible
    Delivery unit.
    Both authorization-relevant InfoObjects are used in the query.
    In the query we have used two authorization variables.
    Now records are only authorized where Organizational responsible is true AND Delivery unit is true.
    Is it possible to check the authorization where:
    Organizational responsible is true OR Delivery unit is true??
    Please help!
    Regards,
    Jos.

    Hi,
    hmmm Andreas, I must comment on that:
    what is required is to show any record having Object1 = True OR Object2 = TRUE.
    Logically it is the same as asking:
    don't show records having (Object1 NOT true) AND (Object2 NOT true) - correct me if I am wrong there (this is pure Boolean math...).
    Just because BW doesn't support this doesn't mean that no system can do it.
    Simply put, in SQL
    SELECT * FROM TABLE
    WHERE OBJ1 = TRUE OR OBJ2 = TRUE
    works perfectly in any RDBMS.
    Equally,
    SELECT * FROM TABLE
    WHERE NOT (OBJ1 <> TRUE AND OBJ2 <> TRUE)
    would work as well.
    It is just that BW always performs an AND when you filter on two different objects.
    Jos could achieve what he wants by setting up some restricted key figures and working it out with conditions, but definitely not with standard authorizations.
    Alternatively, as I already mentioned, compounding the objects would work, but not without modeling effort. Finally, I believe it would also be possible with user exits... I don't have the time, but I would also investigate bringing both objects along with the provider into a MultiProvider and verify whether it couldn't be done by semi-standard means...
    hope this sheds some light on the issue....
    regards,
    Olivier.

  • Oracle Database Performance With Semantic

    Hello,
    Is there a Developer's Guide for Semantic that specifically talks about database performance with the Semantic network/tables/indexes? We are having issues with performance the larger the semantic network becomes.
    Any help or pointers would be appreciated.
    Thanks
    -MichaelB

    Matt,
    Thanks for your response. Here are the answers to the questions about our setup/environment.
    1) Are you querying multiple models and/or a model + entailment? If so, are you using a virtual model and using the ALLOW_DUP=T query option?
    A single model, no entailments. We attempted to use multiple models, and a virtual model (with ALLOW_DUP=T), however the UNION ALL in the explain plan made the query duration unacceptable.
    2) Are you using named graphs?
    No named graphs.
    3) How many triples are you querying?
    Approximately 85 million.
    4) What semantic network and/or datatype indexes have been created?
    We have PCSGM, PSCGM, PSCM, PCSM, CPSM, and SCM.
    5) What is your hardware setup (number and type of disks, RAM, processor, etc.)?
    We are running the 11.2.0.3 database on a Sun Solaris T2000, we have ASM managing our disks from RAID5, I believe currently we have two Disk Groups with the indexes in one and the data tables in the other. We have 32 GB of memory, and 32 CPUs. However, it is not the only thing running on the machine.
    6) How much memory have you allocated to the database (pga, sga, memory_target, etc.)?
    We have the memory_target set to 9GB, the db_cache_size set to 2GB, and the db_keep_cache_size set to 4.5GB. `pga_aggregate_target` is set to 0 (auto), as is `sga_target`.
    (Since my initial request, we pinned the RDF_VALUE$ (~2.5 GB) and C_PK_VID (~1.7 GB) objects in the KEEP buffer cache, which drastically improved performance - see the sketch after this list.)
    7) Are you using parallel query execution?
    Yes, some of the more complex queries we run with the parallel hint set to 8.
    8) Have you tried dynamic sampling?
    Yes. We have ODS set to 3 for our more complex queries; we have not altered this much to see if there is a performance gain from changing this value.
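    A minimal sketch of the pinning mentioned in point 6 above, assuming the semantic network objects belong to the MDSYS schema (verify the owner, object names, and sizes in your own instance, and size db_keep_cache_size accordingly):
    ALTER TABLE mdsys.RDF_VALUE$ STORAGE (BUFFER_POOL KEEP);  -- ~2.5 GB table
    ALTER INDEX mdsys.C_PK_VID   STORAGE (BUFFER_POOL KEEP);  -- ~1.7 GB index
    Blocks of these objects are then cached in the KEEP pool rather than the default buffer cache, so they are far less likely to be aged out by the other workloads running on the machine.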
    Thanks again,
    -Michael

  • Authorization object with no authorization field

    Hi Experts,
    I have created an authorization object with no field checking.
    Is this possible? I want to create this authorization object for conversion only, and it doesn't need any field checking.
    Please advise.

    Hi
    See this and do accordingly
    In general, different users will be given different authorizations based on their role in the organization.
    We create ROLES and assign the authorizations and transaction codes for that role, so only that user can have access to those transactions.
    Use the SUIM and SU21 transactions for this.
    Much of the data in an R/3 system has to be protected so that unauthorized users cannot access it. Therefore the appropriate authorization is required before a user can carry out certain actions in the system. When you log on to the R/3 system, the system checks in the user master record to see which transactions you are authorized to use. An authorization check is implemented for every sensitive transaction.
    If you wish to protect a transaction that you have programmed yourself, then you must implement an authorization check.
    This means you have to allocate an authorization object in the definition of the transaction.
    For example:
    program an AUTHORITY-CHECK:
    AUTHORITY-CHECK OBJECT <authorization object>
      ID <authority field 1> FIELD <field value 1>
      ID <authority field 2> FIELD <field value 2>
      ID <authority field n> FIELD <field value n>.
    (Note that this is a single ABAP statement: the period comes only after the last ID ... FIELD pair.)
    The OBJECT parameter specifies the authorization object.
    The ID parameter specifies an authorization field (in the authorization object).
    The FIELD parameter specifies a value for the authorization field.
    The authorization object and its fields have to be suitable for the transaction. In most cases you will be able to use the existing authorization objects to protect your data. But new developments may require that you define new authorization objects and fields.
    http://help.sap.com/saphelp_nw04s/helpdata/en/52/67167f439b11d1896f0000e8322d00/content.htm
    To ensure that a user has the appropriate authorizations when he or she performs an action, users are subject to authorization checks.
    Authorization : An authorization enables you to perform a particular activity in the SAP System, based on a set of authorization object field values.
    You program the authorization check using the ABAP statement AUTHORITY-CHECK.
    AUTHORITY-CHECK OBJECT 'S_TRVL_BKS'
      ID 'ACTVT' FIELD '02'
      ID 'CUSTTYPE' FIELD 'B'.
    IF SY-SUBRC <> 0.
      MESSAGE E...
    ENDIF.
    'S_TRVL_BKS' is an authorization object.
    In place of '02' for ID 'ACTVT' you can put '01', '02' or '03' for create, change or display.
    The AUTHORITY-CHECK statement checks whether a user has the appropriate authorization to execute a particular activity.
    This authorization concept is largely in the hands of the BASIS people.
    As a developer you may not have access to transaction SU21, where you define authorization objects and, for each object, assign fields and values. Another transaction is PFCG, where you assign these authorization objects and transaction codes to a profile, and that profile in turn is attached to a particular user.
    Take the help of the BASIS guy to create and use them.
    Regards
    Anji

  • Performance with dates in the where clause

    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND "FDATE"<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
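    For example, a function-based index matching the expression in query 1 might look like this (a minimal sketch; the index name is made up):
    CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));
    -- the filter in query 1 now matches the indexed expression, so a range scan is possible:
    SELECT COUNT(*) FROM test_data WHERE TRUNC(fdate) = TRUNC(SYSDATE);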
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    That depends on what you mean by "better". If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE   fdate >= TRUNC (SYSDATE)
    AND     fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.

  • Purchase order with quantity range value

    Hi All,
    I want the PO price to be taken from the vendor info record with quantity-based values.
    For example: the vendor has fixed a rate of X for up to 500 qty, and a rate of Y above 500 qty.
    My scenario is that I want to raise the PO with these rates. How can I do this?
    Note: if I do a first PO with 300 qty, then the rate it takes is X*300.
    Now if I do a second purchase order with 300 qty, then the first 200 qty should go at the X rate and the remaining 100 should go at the Y rate; i.e., this purchase order will have a combination of the X & Y rates.
    Please give the solution for this.

    hi...
    You can perform this pricing by making some modifications to your condition type...
    In M/06, open your condition type and, under the Scales tab, choose the scale type: assign scale type "D", i.e. graduated-to interval scale.
    Then configure the scale in the purchasing info record:
    Scale type    Scale quantity    Amount
    to            200               X
                  300               Y
    Then it will calculate: 200*X + 100*Y = Z
    Try this out...
    Hope it works,
    Thanks
    Edited by: ashish2210 on Jul 20, 2011 3:20 PM

  • Problem in summation on a column with possible null values

    Hi,
    I want to do summation on a column.
    If I use <?sum(AMOUNT)?> and there is any null value, it gives NaN as output.
    From the forum I got the below syntax:
    <?sum(AMOUNT[number(.)!='NaN'])?>
    but it is also not giving me the expected result; it always displays 0.
    I want something like sum(NVL(amount,0)). Could somebody please help me out?
    Thanks in Advance,
    Thiru

    If the column has many, many null values, and you want to use the index to identify the rows with non-null values, this is a good thing: a B*Tree index will not index the nulls at all, so even though your table may be very large, with many millions of rows, this index will be small and efficient, because it will only contain index entries for those rows where the column is not null.
    Hope that helps,
    -Mark

  • Storing 6 million values in SQL Server columns

    Hi,
    How many values (what size) can I store in a single SQL Server column?
    My scenario:
    Column1    Column2    Column3
    1          ABC        1,2,3,... (6 million values)
    2          CDE        1,2,3,... (6 million values)
    And the problem is that I can't store them in rows; I need to store them in a single column, comma separated.
    What would be the best way to store them and retrieve them quickly? I am thinking about converting the list to byte[] or something, but I'm not sure how much time it will take to convert to bytes and then back to text.

    And the problem is that I can't store them in rows; I need to store them in a single column, comma separated.
    No you don't! Unless, that is, you really like to hurt yourself. It's a basic idea in a relational database that each cell holds an atomic value, and this is what relational databases are optimized for.
    That said, if all you want to do is store a bunch of values that you will never look at inside the database - the database acting as an unintelligent data store, with the values only ever examined outside the database - then it could make sense. However, a warning: just because you think there is no requirement today, don't be surprised if sooner or later you are asked questions like "which is the highest value for ABC?" or "which value is the most frequent across ABCs and CDEs?". Such questions are very simple to answer if you have the data in a properly designed data model, but extremely painful to answer if all your values are in a single column.
    But if you do take this path, then yes, storing the values in a varbinary(MAX) with four bytes (or whatever is needed) per value would be the best option. It takes up less space than a string, and with a fixed length it's easier to find value 198713 if that is needed. You seem to be worried about the conversion from bytes to text; why is not clear to me, since numbers are usually born binary. But obviously, I have no idea where you get these values from or how you would use them.
    My initial recommendation still stands: store each value in a single row.
    Erland Sommarskog, SQL Server MVP, [email protected]
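    For illustration, a minimal sketch of the one-row-per-value design recommended above (table and column names are made up):
    CREATE TABLE dbo.ABCValues (
        GroupId int          NOT NULL,  -- corresponds to Column1
        Code    varchar(10)  NOT NULL,  -- corresponds to Column2 (ABC, CDE, ...)
        Value   int          NOT NULL,  -- one of the 6 million values per group
        CONSTRAINT PK_ABCValues PRIMARY KEY (GroupId, Value)
    );
    -- questions like "which is the highest value for ABC?" then become trivial:
    SELECT MAX(Value) FROM dbo.ABCValues WHERE Code = 'ABC';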

  • Performance issues in million-record tables

    I have a scenario wherein some 20 tables each have a million or more records. [Historical]
    On average I add 1,500 - 2,500 records a day, i.e. about a million records every year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected usage:
    The user base would be around 2,500 users in total, but expect 300 - 500 parallel users at the max.
    Very limited usage of historical data, compared to operations on current data.
    Performance of operations on current data is important compared to that on historical data.
    Environment: Oracle 9i - should be migrating to Oracle 10g soon.
    Some solutions I could think of...
    [ 1 ] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival, prior to searching.
    The impact, I feel, is that the archival tables are again ever-increasing, by approximately a million records a year.
    [ 2 ] Put records into various archival tables, each differentiated by year.
    For instance, every year I replicate the set of tables and that year's data goes into those tables.
    But how do I do a fetch?
    Note - I do have a unique way of identifying each record in my master table: the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently get very good response times thanks to indexing and other common measures, but I do not want this to degrade in a year or more; rather, I expect to improve on the current response times and to keep them stable over time.
    Also, I don't want to change every query in my app - unless there is no way out...

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
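    For illustration, a hedged sketch of range partitioning by year (table and column names are made up; the YYYYMM prefix in your primary key suggests a date column is available to partition on):
    CREATE TABLE customer_master (
        record_key   VARCHAR2(16) NOT NULL,  -- e.g. 2008070000562330
        created_date DATE         NOT NULL,
        payload      VARCHAR2(200)
    )
    PARTITION BY RANGE (created_date) (
        PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
        PARTITION p2008 VALUES LESS THAN (DATE '2009-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    Queries that filter on created_date are automatically pruned to the relevant partition, so current-data searches can stay fast without changing every query in the application.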

  • Planfunction in IP or with BW modelling - case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the qty of 15 million records multiplied by an average price calculated at a higher level). I'd still like to implement this with a simple FOX formula but fear the performance, given the number of records. Does anyone have experience with this number of records? Would you suggest doing this within IP or using BW modelling? The maximum accepted lead time for this planning function is 24 hours...
    The planning function is expected to be executed in batch or background mode, but should be triggered from an IP input query and not via RSPC, for example...
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely partition the work via planning sequences in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to employ only one work process for the whole workload (a planning function uses only one work process). So partition the problem to be able to use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API. So you can easily start a process chain via a planning function.
    Regards,
    Gregor

  • Issue in adding not null constraint on 250 GB table with 50 million rows

    Guys,
    I need to add a NOT NULL constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to non-null values for every row.
    After that, adding the NOT NULL constraints on these 2 columns takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option - or rather, I can't use that option.

    user445775 wrote:
    Guys,
    I need to add a NOT NULL constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to non-null values for every row.
    After that, adding the NOT NULL constraints on these 2 columns takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option - or rather, I can't use that option.
    And what's wrong with it taking an hour? Presumably this is a one-time operation, and it doesn't really interfere with anything else.
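    If the hour is genuinely a problem, one hedged idea is to let the validating full scan run in parallel (a sketch only: it assumes Enterprise Edition and a made-up table name, and whether the validation scan actually parallelizes depends on version and settings):
    ALTER SESSION ENABLE PARALLEL DDL;
    ALTER TABLE big_table PARALLEL 8;  -- temporarily raise the table's parallel degree
    ALTER TABLE big_table MODIFY (col1 NOT NULL, col2 NOT NULL);
    ALTER TABLE big_table NOPARALLEL;  -- restore the original setting afterwards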

  • Authorization (rsecadmin) with customer exit variable

    Hello,
    I need to maintain authorization on 0CALMONTH with a customer exit variable.
    0CALMONTH is "authorization relevant"
    I created a variable of type "customer exit" : ZVAR001 (this variable is OK, I checked its value in a query)
    I created a new authorization object with 0CALMONTH = $ZVAR001.
    When I run my query I get an authorization error message.
    If I change my authorization object by replacing my variable ($ZVAR001) by a constant value I have no authorization problem.
    I don't understand why...
    Error logs don't help me solve the problem: I get the message "Message EYE007: You do not have sufficient authorization", and the system just says I have "0CALMONTH I EQ $ZVAR001" but doesn't show the values behind variable ZVAR001.
    Thanks for your help

    Indeed, the problem was in the customer exit, because I used a condition on "I_STEP". Since I deleted that condition I have no authorization problem with my variable....

  • How to populate formbean with ONLY updated values?

    Is there a way to populate the formbean with ONLY updated values in Java Struts?
    Ex: Out of 50 fields displayed on the GUI, if, let's say, the user modifies 20 fields, I need to capture only those 20 fields into the form bean.
    The reason I am looking for such a function is to avoid the overhead of updating all 50 fields in the database every time, even if only one field is modified.

    Normally, you update the entire record if one of the fields is changed. However, if you display more than one record, only those records that have changed should be updated. The reason you don't update all records is the performance hit of updating records that didn't change, among other things (see the optimistic concurrency topic at http://msdn.microsoft.com/en-us/library/bb399373.aspx).
    I don't think you should bother coding to capture only the fields that have changed in one record. It complicates your code and makes it difficult to read. Also, the performance gain from not updating unchanged fields is offset by the performance loss of the logic needed to manage such a task. Finally, your update SQL statement would be overly complicated and would have to be built dynamically, rather than being a single, simple update statement that sets all fields.
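    To illustrate the "single simple update" argued for above, a hedged SQL sketch (the table, columns, and version column for optimistic concurrency are all made up):
    -- update the whole record in one statement; the row_version check implements
    -- optimistic concurrency: zero rows updated means someone else changed the row first
    UPDATE customer
    SET    name        = ?,
           street      = ?,
           city        = ?,  -- ... and so on for all 50 fields, changed or not
           row_version = row_version + 1
    WHERE  customer_id = ?
    AND    row_version = ?;  -- the version read when the form was populated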
