Questions about authorizations for tables / change requests / BAdIs / locks / languages

Hi,
A few questions I have not been able to find answers to:
1) How can we ensure that every change to a table, including adding or changing content, generates a change request? Basically, how do we ensure that any changes being made are recorded in a change request?
2) How do we give authorizations for a database table? (See the sketch after this list.)
3) Can we add watermarks in SAPscripts and Smart Forms, and if so, how?
4) Can we create and place our own BAdIs in SAP standard code?
5) What are the different lock types/categories, with a clear explanation of the differences (not the standard SAP help, please)?
6) Any tips on handling two table controls on one screen?
7) What is required if we want to use objects (SAPscripts, texts, Smart Forms) in different languages?
8) How do multilingual SAPscripts work?
9) How can we provide a search help in a module pool without using PROCESS ON VALUE-REQUEST?
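On question 2: table-level authorization is normally handled by assigning the table an authorization group (transaction SE54, stored in table TDDAT) and granting the standard authorization object S_TABU_DIS for that group in the users' roles. A minimal sketch of the corresponding check in ABAP, assuming a hypothetical authorization group ZCUS:

    " Check whether the current user may change (ACTVT 02) tables
    " belonging to authorization group ZCUS.
    AUTHORITY-CHECK OBJECT 'S_TABU_DIS'
      ID 'DICBERCLS' FIELD 'ZCUS'
      ID 'ACTVT'     FIELD '02'.
    IF sy-subrc <> 0.
      MESSAGE 'No authorization to maintain this table' TYPE 'E'.
    ENDIF.

On question 1: whether table content changes are recorded in a change request is controlled by the table's delivery class, the recording routine setting in SE54, and the client's "Automatic recording of changes" flag in SCC4, rather than by code.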
Moderator message - Please - one question per thread and please ask a specific question - post locked

Similar Messages

  • A question about authorization of "me29n".

    I have a question about authorization of "me29n".
    In the ME29N screen, after I choose the "cancel release" option, there are several buttons I can use, such as "delete", "lock", "unlock" and so on. Now I want the "delete" button to become unavailable after I choose "cancel release". How can I achieve this? Is there an authorization object I can use? Thanks a lot.

    Hello Victor,
    It is possible through transaction code SHD0 (transaction variants).
    Try to create a new variant for it. You may also need an ABAPer's help with this. Try it. All the best.
    Regards,
    Manjula.

  • Authorizations for creating change request in ALRTCATDEF

    Hi all,
    Can anyone tell me what roles I need in order to create a transport request (change request or task) when saving an alert category in transaction ALRTCATDEF in SAP XI?
    Thanks in advance.
    Regards,
    Prasad Babu.

    Hi Prasad,
    If you have the SAP_ALL authorization, that will do.
    Regards,
    Vamshi
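    A narrower alternative to SAP_ALL: creating a transport request is governed by the authorization object S_TRANSPRT. A minimal sketch of roughly what the Transport Organizer checks, for illustration only (the concrete field values are assumptions, not a tested role design):

        " TTYPE = request type (e.g. CUST for customizing requests),
        " ACTVT = 02 (change/create).
        AUTHORITY-CHECK OBJECT 'S_TRANSPRT'
          ID 'TTYPE' FIELD 'CUST'
          ID 'ACTVT' FIELD '02'.
        IF sy-subrc <> 0.
          MESSAGE 'Not authorized to create a change request' TYPE 'E'.
        ENDIF.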

  • Questions about database application tables connector

    Hi all,
    I need to manage with OIM several databases (each one with its custom tables).
    Also, one of them will be my trusted target on the initial load of users. The structure of our tables is not similar to the one included in the example.
    The documentation explains how OraAppX.xml can be modified, but it doesn't speak about its relations with "DBTable_nonTrusted.xml" and/or "DBTable_trusted.xml". For example, if I don't have a column called USR_COMM_LANG, can I replace this column name with another that I have in my database? And what about the references in the "DBTable_nonTrusted.xml" file? The connector doc doesn't explain how references to that field (USR_COMM_LANG---xel_usr_comm_lang) are managed by the connector, or whether the normal working of the connector could be affected if the name of a variable is changed in the XML conf file.
    It could be that the adapters of this connector were hardcoded with the reference to xel_usr_comm_lang(?).
    The connector documentation includes a section titled "Adding Custom Database Columns for Provisioning and Reconciliation". The first impression on reading it is that it is possible to extend the predefined table, rather than create and define a new table structure.
    Has anyone rebuilt the "DBTable_nonTrusted.xml", "DBTable_trusted.xml" and "OraAppX.xml" files, changing all the default variables, in order to manage a customized table?
    Many thanks in advance,
    Claudia

    Hi Claudia,
    1) Yes, you must replace the column names in the "column" tag of your XML file with the column names that you have in your database tables. Also, remember that you need to configure the other parameters (like data_type, data_typ_size, etc.) according to your database table.
    2) The "xel_data_source" parameter is the attribute name that will be recognized by OIM, and these attributes can be mapped to form fields (UD_DBAPP_XXXX) in your process definition in the OIM Design Console.
    3) You can customize your Database Application Tables connector for provisioning as you wish, but some issues exist: in the Known Issues section of the documentation you can see that this connector doesn't work for more than two tables.
    4) For customizing Database Application Tables for trusted recon there are further restrictions, because OIM users have some required fields that must be filled to create a user successfully.
    Hope that helps, and please let me know if you have more questions.
    Regards.

  • 2 questions about authorization filter...

    Hi guys,
    I need your help to solve my problem.
    I'm developing a JSF application and I've created an authorization filter.
    The filter must check, on each page access, whether a registered user is stored in the session and, if not, redirect to the login page. I have a bit of experience with servlets and filters, and I've solved this with the filter below.
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;
    public class AuthorizationFilter implements Filter {
            // Filter configuration and servlet context, set in init().
            FilterConfig config = null;
            ServletContext servletContext = null;
            public AuthorizationFilter() {
            }
            public void init(FilterConfig filterConfig) throws ServletException {
                    config = filterConfig;
                    servletContext = config.getServletContext();
            }
            public void doFilter(ServletRequest request, ServletResponse response,
                            FilterChain chain) throws IOException, ServletException {
                    Utils.log(servletContext, "Inside the filter");
                    HttpServletRequest httpRequest = (HttpServletRequest) request;
                    HttpServletResponse httpResponse = (HttpServletResponse) response;
                    HttpSession session = httpRequest.getSession();
                    String requestPath = httpRequest.getPathInfo();
                    Visit visit = (Visit) session.getAttribute("visit");
                    if (visit == null) {
                            // Remember the originally requested page, then
                            // send the user to the login page.
                            session.setAttribute("originalTreeId", requestPath);
                            Utils.log(servletContext, "redirecting to "
                                            + httpRequest.getContextPath() + "/faces/Login.jsp");
                            httpResponse.sendRedirect(httpRequest.getContextPath()
                                            + "/index.jsp");
                    } else {
                            chain.doFilter(request, response);
                    }
                    Utils.log(servletContext, "Exiting the filter");
            }
            public void destroy() {
            }
    }
    In my authentication bean, after the user has logged in, I have:
    loggedIn = true;
    User newUser = new User(loginName, password, teamName, role);
    Visit visit = new Visit();
    visit.setUser(newUser);
    visit.setAuthenticationBean(this);
    visit.setLoggedIn(loggedIn);
    setVisit(visit);
    getApplication().createValueBinding("#{sessionScope.visit}").setValue(facesContext, visit);
    to store the values into the visit object.
    and this is my logout function:
    FacesContext facesContext = getFacesContext();
    Utils.log(facesContext, "Executing AuthenticationBean.logout()");
    HttpSession session = (HttpSession) facesContext.getExternalContext()
                    .getSession(false);
    if (session != null) {
            // The attribute was stored under the name "visit"; the
            // "sessionScope." prefix is only used in EL expressions.
            session.removeAttribute("visit");
            session.invalidate();
    }
    My 2 questions are:
    1) How can I redirect to the login page a user who tries to log in with the same data as a user already stored in the session?
    2) How can I handle the browser being closed? Do I need a listener?
    Please help me; I'm trying to learn about this and I need your help.
    Thanks

    hi,
    1. use the copy-paste functions in the drop-down menu.
    2. same menu, "save setting as...".
    DR9.

  • Question about X-Fi mode changes and relay clicks

    Hello,
    I've been having a heck of a time getting my new x-fi elite pro to work nicely in my computer (I have an MSI K8N neo4 platinum -- nforce4 chipset -- I'm aware of the issues). I kept having issues with the mode changer not working right about 50% of the time. Either it would stop recognizing my x-fi, or the SPI process would crash. In either case I could no longer change modes and the external box would stop working until a reboot. I tried many, many PCI slot swaps and driver reinstalls. Currently, things seem to be working ok (at least as far as mode switching goes...ASIO is another story) in that I can change modes without crashes...at least for the last half hour or so.
    The only thing bothering me is that now, when I change modes, I almost never hear the relay click. Maybe out of 20 mode changes I get the click. Before, when the mode would successfully change, I'd always hear the click. Now, that's not the case. I'm wondering if the mode changing is really working correctly even though I'm not getting any crashes. So, my question to others is, when you change modes, do you always hear the relay click? I'm trying to get this all figured out before I decide to send it back. ASIO drivers work in creation mode, my cmss settings show up right in entertainment mode, etc...so maybe it's working right. I'm just not sure.
    Thanks in advance.

    Never mind... I rebooted for the heck of it. The relays clicked every time until I went into Audio Creation mode and actually used the ASIO drivers. Then I tried to switch back to Entertainment mode and SPI crashed on me again. I think it's about time to send this thing back; I've wasted about 2 hours trying to get it to work right.
    Thanks anyway.

  • BRF+ - NW 7.01 - questions about a decision table

    In my test case I have set up a decision table with multiple columns. As long as the number of rows is low it is easy to maintain, but we are looking at the possibility of using this kind of business rule in our current project, and then it would end up with at least 1000 rows, or more.
    Is there an easy, user-friendly way to let the business users maintain such a decision table?
    Do you need to activate this decision table every time something is changed? Is it possible to make this decision table local and the rest of the parts (application, data objects, function) transportable? How can you deal with such a situation in the best way?
    Is there a query/filter functionality for querying a decision table?
    Best regards,
    Elly van de Wouw
    IT - IFF

    Hi,
    You cannot use such big decision tables in NW 701, only in NW 702. The reason is a change in the internal data model; in NW 701 you will run into timeouts.
    I did some internal testing in our development systems:
    1. programmatically create a decision table with many rows (10 columns)
    2. activate and save
    3. execute it (includes code generation)
    In NW 701 I achieved around 800 rows and then hit a timeout. I have to admit the system is also quite weak.
    In NW 702 it took about 100 seconds. In NW 702 you can also maintain the decision table entries in MS Excel and upload them into the table.
    You need to activate the changes so that they are used for rule execution. As of NW 702 you also have options to combine local and transported content.
    BR,
    Carsten
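    For reference, a sketch of step 3 above (executing a BRFplus function, and with it its decision table, from ABAP on NW 702). The function ID and the context name 'AMOUNT' are placeholders, and the parameter names of the static convenience call are quoted from memory, so treat the exact signature of CL_FDT_FUNCTION_PROCESS=>PROCESS as an assumption to verify in your release:

        DATA: lv_function_id TYPE if_fdt_types=>id,
              lv_timestamp   TYPE timestamp,
              lt_name_value  TYPE abap_parmbind_tab,
              ls_name_value  TYPE abap_parmbind,
              lv_amount      TYPE i,
              lv_result      TYPE string.  " must match the function's result data object
        " Fill lv_function_id with the function's GUID from the BRFplus workbench.
        GET TIME STAMP FIELD lv_timestamp.
        ls_name_value-name = 'AMOUNT'.     " a context data object of the function
        GET REFERENCE OF lv_amount INTO ls_name_value-value.
        INSERT ls_name_value INTO TABLE lt_name_value.
        cl_fdt_function_process=>process(
          EXPORTING iv_function_id = lv_function_id
                    iv_timestamp   = lv_timestamp
          IMPORTING ea_result      = lv_result
          CHANGING  ct_name_value  = lt_name_value ).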

  • Question about Workspace enabled tables, triggers

    Hello,
    We have created our table structures from Oracle Designer and we also have several triggers and APIs for almost all the tables. Reading about Workspace Manager, it seems that after I execute dbms_wm.enableversioning, it would rename my table, replace it with a view, and create other views and tables. I have the following questions:
    1) What would happen to the triggers that are based on the original tables? Would the triggers be attached to the VIEW definition of the original table?
    2) Let's assume I have customer and cust_address tables, and a customer id = 100 which has two customer address rows in LIVE. I have a workspace based on the cust_address table. Would it be possible for a user to see all the address data (live + workspace) for customer = 100 in one SQL query?
    3) What if I have two live customer address rows in the cust_address table and 3 rows in the workspace of the cust_address table: does Workspace Manager allow me to conditionally add and update workspace rows into the LIVE data, i.e. could I add one row from the workspace to LIVE and update one LIVE row from the workspace?
    Thanks for your answers.
    Syed

    Ben,
    Here is our business problem. We receive data as text files regarding our customers. The files contain customer name and address information and customers' payments and debts information. For the sake of simplicity, let's assume I have customer, address, payment, debt and referral tables. From the file we only load the customer, address, payments and debts tables; once the data is loaded, the end user creates referral data, which is stored in the referral table.
    When we receive data, we are not sure whether it contains any unique customer key (customer id, SSN, etc.), so we try to match the incoming data based on last name, first name, date of birth, SSN, address and so on. If the system finds a match, it creates rows in all four tables (customer, address, payments and debts). If the system does not find a match, then I have another table called potential_match (with similar columns to the customer table, plus matched_customer_id); the system inserts the customer rows into this table and also inserts the other data, such as address, payments and debts, into the respective tables. I want to process all this data immediately, because the business need is to create referral data immediately.
    When the end users find some time, they go to the potential_match table and try to identify whether two customers are really the same or not. If they decide that the two customers' data are the same, then we have to associate the address, payments, debts and referral data (and some data in other tables created by other processes) with the original customer. So I need to write PL/SQL that will basically add all the child records of, say, customer B to customer A if A and B are the same.
    I was looking into Workspace Manager and found that it might be a possible solution, but I am not sure how I would use WM so that end users do not have to change the way they work. If I do not add the customer's other data (address, payments, debts, etc.) to LIVE immediately, then they would not be able to search for the customer's debt while the debt is in a workspace. I want to give them a transparent view of LIVE and workspace data at all times (I do not want to give them the GotoWorkspace feature). Only a few users will have access to the potential_match table and GotoWorkspace, and they will be able to merge/associate records with the original customers, i.e. remove them from the workspace and add them to LIVE. Sometimes users may also want to unmatch: if they merged two customers into one and it turns out to be a wrong merge, they should be able to unmerge them back into two different customers with their associated data. Users also want to run temporal queries, for example going back in time to see a customer's debt data as of Dec 2004.
    Based on your experience, how should I design my workspace? Do you think WM is the way to go, or should I implement the problem with my own custom PL/SQL code? I have never used it, so I need to understand its implications. Currently we are using Oracle 8.1.7, but we are moving to 10g very soon.
    Thanks for all your help.
    Syed

  • ICR Process 003 - question about data selection (table FBICRC003A)

    Hello, I am implementing ICR process 003. We are doing several tests and I have some questions that I hope you can help me with:
    1 - If I run transaction FBICS3 - Customer/Vendor (Select Documents) and then FBICA3 - Customer/Vendor (Document Assignment) several times with the same selection criteria, will the same documents be selected redundantly and stored redundantly in table FBICRC003A? I expected this not to happen, but it seems to happen in my test environment. (?)
    2 - If I need to delete the data stored by the ICR '003' functionality, I need to use transaction GCDE. The problem is that, as I am using ledger '0L' for the '003' process, I cannot use the "delete data of one ledger" functionality - which allows selection criteria - and I have to "delete the data of an entire data group", which deletes all data stored in the FBICRC003A and FBICRC003T tables. Should I set up another ledger for the '003' process in order to delete data using selection criteria? Is it recommended not to use '0L' and to create a new one?
    I have read in the reference documentation that it "is not necessary to set up a SL", but since all my productive companies run in the same client as the ICR client, I am wondering whether it would be better to create and use a SL.
    Thanks in advance
    Rafael Barreda

    Hi Ralph,
    we have created an RFC in order to get data from client B into client A (where the ICR system sits). In summary:
    I need to import vendor/customer data from client B that belongs to company code "0001". The company is called "X" in both clients A and B, although it only exists as a company code (FI) in client B, and only as a company (and trading partner) in client A.
    Client A:
    1- I have set company "0001" as a "Company to be reconciled" in FBIC032:
    RFC destination = ""
    RFC destination for data selection = "ZRFC0001"
    Local company= ""
    Data Source="Documents of Current Process"
    Separate Selection Process= "X"
    Data Transfer Type="Asynchronous via Direct RFC Connection"
    Sender field for reference number = "XBLNR"
    Client B:
    1- I have created the companies (V_T880) that I need in order to maintain the trading partner on the vendors' master data.
    2- I have assigned company code '0001' to company "X".
    3- I have assigned the trading partners created in step 1 to the vendors.
    4- I have posted a few FI documents with the trading partner filled in.
    Then I run FBICS3 - Customer/Vendor: Select Documents in the background, but the program takes a long time and does not select any documents.
    Do you think I am missing something?
    Thanks very much in advance.

  • VAT Tax determination- question about "Requirement" for Table-Is mandatory?

    Hi everybody,
    I am having problems determining the appropriate VAT percentage in some cases in Mexico.
    In Mexico, for billing at the border with other countries the VAT rate is 11%, and for billing inside the country it is 16%. To determine the VAT percentage, the access sequence MWST is assigned to the condition type MWST. To cover the border cases we created table 989 with the fields Distribution Channel/Plant, and VK11 is filled with this information. With these parameters the VAT rate of 11% is determined successfully at border sales points.
    The problem I have is related to the "requirement" assigned to table 989 (Distribution Channel/Plant). This table has the standard requirement number 7 (Domestic Business) assigned. When billing a customer whose master data has MX (Mexico) as LAND1 in the general data section, the VAT rate of 11% is obtained successfully, because all the fields of the table are filled. But when billing a customer with US (United States of America) as LAND1, it is not possible to obtain the correct VAT rate of 11%, even though all the fields of the table are filled.
    I do not know if the problem is requirement 7 (Domestic Business), but if I change it to requirement 8 (Export Business), then the opposite occurs: for US customers the VAT rate is determined successfully and for MX customers it is not, because the fields of the "requirement" are not filled.
    So, I am wondering:
    1. Would it be necessary to create a new requirement combining the 7 (Domestic Business) and 8 (Export Business) requirements, or does one like this already exist? I do not need to condition anything. (A sketch of such a combined requirement follows below.)
    2. What happens if I leave the requirement field blank, i.e. assign nothing? I tested this for both customers and the VAT rate was determined correctly. But I do not know whether it is correct not to assign a "requirement" in the access sequence MWST.
    Thank you in advance for your help!
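    On point 1: a custom VOFM pricing requirement (transaction VOFM, Requirements > Pricing) could combine the domestic and export cases. Since "domestic OR export" covers every customer, such a combined routine effectively always passes, which also explains why leaving the requirement blank worked in your test. A sketch, where the routine number 902 and the logic are illustrative assumptions rather than a tested solution:

        " VOFM pricing requirement 902 (illustrative): accept both the
        " domestic case (departure country = destination country) and
        " the export case, so access to table 989 is attempted either way.
        FORM kobed_902.
          sy-subrc = 4.
          IF komk-aland =  komk-land1      " domestic (as in requirement 7)
          OR komk-aland <> komk-land1.     " export   (as in requirement 8)
            sy-subrc = 0.
          ENDIF.
        ENDFORM.

        FORM kobev_902.                    " pre-step check, same result
          sy-subrc = 0.
        ENDFORM.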

    OK, I think I understand, but here is where I am at now. I originally created a regulation that requires a license when shipping to certain countries out of the EU. I set the regulation up as an export regulation, did all the necessary config, and was able to assign licenses automatically where needed.
    As I was testing the solution I could not find sales docs or deliveries for some of the materials, and finally found out they are being moved on NB POs.
    So I changed the regulation to include imports etc. and classified the materials with an import control class etc., but I am seeing a couple of things that don't make sense, which tells me my config isn't quite right.
    I finally do see that the system at least recognizes the new legal regulation in the screening; before this it had not. But it is still telling me it can't find any relevant country.
    The other thing: the document itself only shows under "Display Existing Export Documents", even though the log says it's an import document, and when I go to the import log there is no data.
    Before I realized there was a note about changing the import to export on transfer, I assigned the PO types to export orders, and I am thinking that's why they are showing in the export list; but I would have thought it would say it's an export doc, which it doesn't.
    Any ideas?

  • Basic question about querying georaster tables

    hello all,
    I have a table with a GeoRaster column containing georeferenced images covering a large area. Can I query the table to get an image covering any user-given coordinates? The given coordinates can span more than one image in the table. Is this possible?
    thanks

    Hi,
    Yes, it is possible to operate with several GeoRaster images in one or several tables. But before this you should create a specific GeoRaster metadata structure joining those separate images. You can find information about this procedure in the GeoRaster documentation.
    Regards,
    Andrejus

  • Question about global temp tables

    I have a global temporary table with the ON COMMIT PRESERVE ROWS setting, e.g.:
    CREATE GLOBAL TEMPORARY TABLE admin_work_area
            (startdate DATE,
             enddate DATE,
             class CHAR(20))
          ON COMMIT PRESERVE ROWS;
    On application start, a procedure inserts data into the table; on application end, a DELETE statement is used to empty the table.
    Interestingly, if the application is started again (in the same session!), the deleted rows reappear in the table before the insert procedure is called. So after the insert procedure runs, the data is doubled... :(
    So my question is:
    Does COMMIT in this constellation roll back the deleted rows?
    That sounds illogical to me, but it appears to be the case...

    "Are you sure that the rows somehow just appear back and it's not the application which inserts them twice?"
    Yes, I'm sure: there is only one call of the insert procedure (on application start).
    "Are you using autonomous transactions for those inserts by any chance?"
    No.
    "SID is just an index into the session fixed array, so the only way to get the same SID in an instance is when the previous session ends. Each session array slot contains a SERIAL# field which is zero at instance start and is incremented every time the slot is reused by the next session. So, as long as your session exists, it is impossible that someone else gets the same SID + SERIAL# combination in an instance. Note that SESSION_ADDR and SESSION_NUM give you the address and SERIAL# of the session owning a temporary segment."
    The original session still exists...
    Thank you, Tanel, for your reply!

  • Question about sorted, hashed tables, mindset when using OO concepts...

    Hello experts,
    I just want to make sure my understanding of sorted and hashed tables is correct. Please give tips and suggestions.
    In one of my reports, I declared a structure and an itab.
    TYPES: BEGIN OF t_mkpf,
            mblnr           LIKE mkpf-mblnr,
            mjahr           LIKE mkpf-mjahr,
            budat           LIKE mkpf-budat,
            xblnr(10)       TYPE c,
            tcode2          LIKE mkpf-tcode2,
            cputm           LIKE mkpf-cputm,
            blart           LIKE mkpf-blart,
          END OF t_mkpf.
    DATA: it_mkpf TYPE SORTED TABLE OF t_mkpf
                  WITH NON-UNIQUE KEY mblnr mjahr
                  WITH HEADER LINE.
    1. I declared it as a sorted table with a non-unique key of MBLNR and MJAHR. Now suppose I have 1000 records in my itab: how will it search for a particular record?
    2. Is it faster than sorting a standard table and then reading it using binary search?
    3. How do I use a hashed table effectively? Let's say I want to use a hashed type instead of a sorted table in my example above. (A short sketch follows after this post.)
    4. I am currently practicing ABAP Objects, and my problem is that my mindset when programming a report is still the procedural one. How does one use ABAP OO concepts effectively?
    Again, thank you guys and have a nice day!
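    On question 3: a sorted table is read with a binary search over the key (about log2(n) comparisons, i.e. roughly 10 for 1000 rows), while a hashed table offers near-constant access time, but only when the complete unique key is supplied. A minimal sketch using the t_mkpf structure above (the key values are made up):

        DATA: lt_mkpf TYPE HASHED TABLE OF t_mkpf
                      WITH UNIQUE KEY mblnr mjahr,
              ls_mkpf TYPE t_mkpf.
        " A hashed read must specify the full table key.
        READ TABLE lt_mkpf INTO ls_mkpf
             WITH TABLE KEY mblnr = '4900000001'
                            mjahr = '2009'.
        IF sy-subrc = 0.
          " Found in near-constant time, independent of table size.
        ENDIF.

    Note that a hashed table requires a unique key, so it only fits if MBLNR/MJAHR identifies a row uniquely; with the non-unique key in the declaration above, a sorted table is the appropriate choice.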

    Hi Viray,
    <b>The different ways to fill an Internal Table:</b>
    <b>append&sort</b>
    This is the simplest one. I do appends on a standard table and then a sort.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    append ls_line to lt_tab.
    enddo.
    sort lt_tab.
    The thing here is the fast appends and the slow sort - so it is interesting how this will compare to the following one.
    <b>read binary search & insert index sy-tabix</b>
    In this type I also use a standard table, but I read to find the correct insert index to get a sorted table also.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    read table lt_tab transporting no fields with key ... binary search.
    if sy-subrc <> 0.
      insert ls_line into lt_tab index sy-tabix.
    endif.
    enddo.
    <b>sorted table with non-unique key</b>
    Here I used a sorted table with a non-unique key and did inserts...
    data: lt_tab type sorted table of ... with non-unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    <b>sorted table with unique key</b>
    The coding is the same instead the sorted table is with a unique key.
    data: lt_tab type sorted table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    <b>hashed table</b>
    The last one is the hashed table (always with unique key).
    data: lt_tab type hashed table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    <b>You Can use this Program to Test:</b>
    types:
      begin of local_long,
        key1 type char10,
        key2 type char10,
        data1 type char10,
        data2 type char10,
        data3 type i,
        data4 type sydatum,
        data5 type numc10,
        data6 type char32,
        data7 type i,
        data8 type sydatum,
        data9 type numc10,
        dataa type char32,
        datab type i,
        datac type sydatum,
        datad type numc10,
        datae type char32,
        dataf type i,
        datag type sydatum,
        datah type numc10,
        datai type char32,
        dataj type i,
        datak type sydatum,
        datal type numc10,
        datam type char32,
        datan type i,
        datao type sydatum,
        datap type numc10,
        dataq type char32,
        datar type i,
        datas type sydatum,
        datat type numc10,
        datau type char32,
        datav type i,
        dataw type sydatum,
        datax type numc10,
        datay type char32,
        dataz type i,
        data11 type numc10,
        data21 type char32,
        data31 type i,
        data41 type sydatum,
        data51 type numc10,
        data61 type char32,
        data71 type i,
        data81 type sydatum,
        data91 type numc10,
        dataa1 type char32,
        datab1 type i,
        datac1 type sydatum,
        datad1 type numc10,
        datae1 type char32,
        dataf1 type i,
        datag1 type sydatum,
        datah1 type numc10,
        datai1 type char32,
        dataj1 type i,
        datak1 type sydatum,
        datal1 type numc10,
        datam1 type char32,
        datan1 type i,
        datao1 type sydatum,
        datap1 type numc10,
        dataq1 type char32,
        datar1 type i,
        datas1 type sydatum,
        datat1 type numc10,
        datau1 type char32,
        datav1 type i,
        dataw1 type sydatum,
        datax1 type numc10,
        datay1 type char32,
        dataz1 type i,
      end of local_long.
    data:
      ls_long type local_long,
      lt_binary type standard table of local_long,
      lt_sort_u type sorted table of local_long with unique key key1 key2,
      lt_sort_n type sorted table of local_long with non-unique key key1 key2,
      lt_hash_u type hashed table of local_long with unique key key1 key2,
      lt_apsort type standard table of local_long.
    field-symbols:
      <ls_long> type local_long.
    parameters:
      min1 type i default 1,
      max1 type i default 1000,
      min2 type i default 1,
      max2 type i default 1000,
      i1 type i default 100,
      i2 type i default 200,
      i3 type i default 300,
      i4 type i default 400,
      i5 type i default 500,
      i6 type i default 600,
      i7 type i default 700,
      i8 type i default 800,
      i9 type i default 900,
      fax type i default 1000.
    types:
      begin of measure,
        what(10) type c,
        size(6) type c,
        time type i,
        lines type i,
        reads type i,
        readb type i,
        fax_s type i,
        fax_b type i,
        fax(6) type c,
        iter type i,
      end of measure.
    data:
      lt_time type standard table of measure,
      lt_meantimes type standard table of measure,
      ls_time type measure,
      lv_method(7) type c,
      lv_i1 type char10,
      lv_i2 type char10,
      lv_f type f,
      lv_start type i,
      lv_end type i,
      lv_normal type i,
      lv_size type i,
      lv_order type i,
      lo_rnd1 type ref to cl_abap_random_int,
      lo_rnd2 type ref to cl_abap_random_int.
    get run time field lv_start.
    lo_rnd1 = cl_abap_random_int=>create( seed = lv_start min = min1 max = max1 ).
    add 1 to lv_start.
    lo_rnd2 = cl_abap_random_int=>create( seed = lv_start min = min2 max = max2 ).
    ls_time-fax = fax.
    do 5 times.
      do 9 times.
        case sy-index.
          when 1. lv_size = i1.
          when 2. lv_size = i2.
          when 3. lv_size = i3.
          when 4. lv_size = i4.
          when 5. lv_size = i5.
          when 6. lv_size = i6.
          when 7. lv_size = i7.
          when 8. lv_size = i8.
          when 9. lv_size = i9.
        endcase.
        if lv_size > 0.
          ls_time-iter = 1.
          clear lt_apsort.
          ls_time-what = 'APSORT'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            append ls_long to lt_apsort.
          enddo.
          sort lt_apsort by key1 key2.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_apsort ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_binary.
          ls_time-what = 'BINARY'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            read table lt_binary
              transporting no fields
              with key key1 = ls_long-key1
                       key2 = ls_long-key2
              binary search.
            if sy-subrc <> 0.
              insert ls_long into lt_binary index sy-tabix.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_binary ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_n.
          ls_time-what = 'SORT_N'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_n.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_n ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_u.
          ls_time-what = 'SORT_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_hash_u.
          ls_time-what = 'HASH_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_hash_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_hash_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
        endif.
      enddo.
    enddo.
    sort lt_time by what size.
    write: / ' type      | size   | time        | tab-size    | directread  | std read    | time direct | time std read'.
    write: / sy-uline.
    loop at lt_time into ls_time.
      write: / ls_time-what, '|', ls_time-size, '|', ls_time-time, '|', ls_time-lines, '|', ls_time-readb, '|', ls_time-reads, '|', ls_time-fax_b, '|', ls_time-fax_s.
    endloop.
    form fill.
      lv_i1 = lo_rnd1->get_next( ).
      lv_i2 = lo_rnd2->get_next( ).
      ls_long-key1 = lv_i1.
      ls_long-key2 = lv_i2.
      ls_long-data1 = lv_i1.
      ls_long-data2 = lv_i2.
      ls_long-data3 = lv_i1.
      ls_long-data4 = sy-datum + lv_i1.
      ls_long-data5 = lv_i1.
      ls_long-data6 = lv_i1.
      ls_long-data7 = lv_i1.
      ls_long-data8 = sy-datum + lv_i1.
      ls_long-data9 = lv_i1.
      ls_long-dataa = lv_i1.
      ls_long-datab = lv_i1.
      ls_long-datac = sy-datum + lv_i1.
      ls_long-datad = lv_i1.
      ls_long-datae = lv_i1.
      ls_long-dataf = lv_i1.
      ls_long-datag = sy-datum + lv_i1.
      ls_long-datah = lv_i1.
      ls_long-datai = lv_i1.
      ls_long-dataj = lv_i1.
      ls_long-datak = sy-datum + lv_i1.
      ls_long-datal = lv_i1.
      ls_long-datam = lv_i1.
      ls_long-datan = sy-datum + lv_i1.
      ls_long-datao = lv_i1.
      ls_long-datap = lv_i1.
      ls_long-dataq = lv_i1.
      ls_long-datar = sy-datum + lv_i1.
      ls_long-datas = lv_i1.
      ls_long-datat = lv_i1.
      ls_long-datau = lv_i1.
      ls_long-datav = sy-datum + lv_i1.
      ls_long-dataw = lv_i1.
      ls_long-datax = lv_i1.
      ls_long-datay = lv_i1.
      ls_long-dataz = sy-datum + lv_i1.
      ls_long-data11 = lv_i1.
      ls_long-data21 = lv_i1.
      ls_long-data31 = lv_i1.
      ls_long-data41 = sy-datum + lv_i1.
      ls_long-data51 = lv_i1.
      ls_long-data61 = lv_i1.
      ls_long-data71 = lv_i1.
      ls_long-data81 = sy-datum + lv_i1.
      ls_long-data91 = lv_i1.
      ls_long-dataa1 = lv_i1.
      ls_long-datab1 = lv_i1.
      ls_long-datac1 = sy-datum + lv_i1.
      ls_long-datad1 = lv_i1.
      ls_long-datae1 = lv_i1.
      ls_long-dataf1 = lv_i1.
      ls_long-datag1 = sy-datum + lv_i1.
      ls_long-datah1 = lv_i1.
      ls_long-datai1 = lv_i1.
      ls_long-dataj1 = lv_i1.
      ls_long-datak1 = sy-datum + lv_i1.
      ls_long-datal1 = lv_i1.
      ls_long-datam1 = lv_i1.
      ls_long-datan1 = sy-datum + lv_i1.
      ls_long-datao1 = lv_i1.
      ls_long-datap1 = lv_i1.
      ls_long-dataq1 = lv_i1.
      ls_long-datar1 = sy-datum + lv_i1.
      ls_long-datas1 = lv_i1.
      ls_long-datat1 = lv_i1.
      ls_long-datau1 = lv_i1.
      ls_long-datav1 = sy-datum + lv_i1.
      ls_long-dataw1 = lv_i1.
      ls_long-datax1 = lv_i1.
      ls_long-datay1 = lv_i1.
      ls_long-dataz1 = sy-datum + lv_i1.
    endform.".
    Thanks & Regards,
    YJR.

  • Question about authorizations in SRM + cFolders

    Dear colleagues,
    We are working in a SAP SRM 7.0 prototype.
    Right now we are trying to configure cFolders (installed on the same server as SRM, with RFC working), but we've reached a point where we don't know for certain what could be going wrong.
    After several tries with different user authorizations we've always had the same result: it's impossible to create or assign a collaboration to a bid when creating it, due to an error which just says "Error creating collaboration; check user authorization".
    We have tried different role configurations (SAP_CFX_SUPER_USER_ADMIN, all CFX roles, just the CFX_*_CREATOR roles...), nearly everything you can think of, but always get the same error.
    The funny thing about it is that when accessing cFolders through the BSP application CFX_RFC_UI, this user can create folders and other sorts of objects, and everything works fine.
    Thank you very much for your help.
    Miguel

    Hey Miguel,
    Did you ever find a solution for this issue? I am having the exact same problem.
    My user has all the roles assigned and is able to create collaborations directly in cFolders, but when I try to create one from an SRM RFx, I get the message: "error creating collaboration; check user authorisation".
    If anybody else has come across the same problem, please help!!
    Thanks and happy new year!
    Monica

  • Questions about authorization variable customer user exit

    Dear all,
    To reduce the authorization maintenance effort, I found on the web that we can use authorization variables with the customer user exit RSR00001.
    When I use transaction CMOD to display or maintain the user exit RSR00001, the user exit is not found. I would like to know how I can use this user exit.
    My SAP version is R/3 4.7.
    The information about authorization variables from the web is as follows:
    http://help.sap.com/saphelp_nw04/helpdata/en/6d/58f438114ee836e10000000a114084/frameset.htm
    Would anyone have some ideas to solve my questions?
    Many thanks
    Sunny
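    For reference, on a system where the exit does exist (one with the BW/BI components), RSR00001 corresponds to function module EXIT_SAPLRRS0_001 with include ZXRSRU01, and authorization variables are processed at step 0. A sketch under those assumptions; the variable name is hypothetical and the structure names may vary by release:

        " Include ZXRSRU01 (enhancement RSR00001 / EXIT_SAPLRRS0_001)
        DATA: l_s_range TYPE rsr_s_rangesid.
        CASE i_vnam.
          WHEN 'ZAUTH_COMP'.            " hypothetical authorization variable
            IF i_step = 0.              " step 0: authorization variables
              l_s_range-sign = 'I'.
              l_s_range-opt  = 'EQ'.
              l_s_range-low  = '1000'.  " e.g. derived from a custom mapping table
              APPEND l_s_range TO e_t_range.
            ENDIF.
        ENDCASE.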

    Dear Bala Duvvuri,
    Firstly, many thanks for your reply.
    Actually, what I want to do is call a user exit when authorization checks are performed. I want to add some logic to the authorization check, with the user exit called automatically whenever the check runs.
    I mainly need this check in the FI module.
    Is there any way I can implement this check?
    One more finding: I have another machine containing SAP XI, where I can find the user exit RSR00001, but it doesn't exist in SAP R/3 4.7. Is this a version issue, or does my SAP R/3 4.7 simply not contain the BI module?
    Many Thanks again.
    Sunny
