Querying Dates missing in the table

I have columns name, taskname, start_date and end_date in my table.
e.g.,
Name Task StartDate EndDate
abc task1 2008-04-01 2008-04-03
abc task2 2008-04-05 2008-04-15
abc task3 2008-04-08 2008-04-18
abc task4 2008-04-21 2008-04-25
abc task5 2008-04-19 2008-04-23
abc task6 2008-04-25 2008-04-30
I need to find the dates in the month for which there are no tasks planned for the person, excluding Saturdays and Sundays.
Is it possible to get the output with a single SQL statement, or if not, how could it be done in a PL/SQL procedure?

Hi,
You'll need to do an outer join to a table (or subquery) that has all the possible dates. Here's one example from this site:
OUTER JOIN
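In outline, that approach looks like this. A minimal sketch, assuming your table is called tasks with the columns shown above, and hard-coding April 2008 (30 days):
SELECT d.dt
FROM  (SELECT DATE '2008-04-01' + LEVEL - 1 AS dt
       FROM   dual
       CONNECT BY LEVEL <= 30) d
LEFT OUTER JOIN tasks t
       ON  t.name = 'abc'
       AND d.dt BETWEEN t.start_date AND t.end_date
WHERE  t.name IS NULL
AND    TO_CHAR (d.dt, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') NOT IN ('SAT', 'SUN')
ORDER  BY d.dt;
The CONNECT BY subquery stands in for the calendar table; any day that matches no task interval comes back with NULL on the task side, which is exactly an unplanned working day.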

Similar Messages

  • How to select data missing in the table?

    I have a list of strings as follows (not in the table):
    'aaaaa'
    'bbbbb'
    'ccccc'
    In the table T1 I have a VARCHAR column x. The data under x are as follows:
    'aaaaa'
    'ccccc'
    I want to formulate a query such that it will bring 'bbbbb', since 'bbbbb' is not in the table. Please help!
    Thanks.

    We get asked this question on a reasonably regular basis. Apparently people think it's perfectly normal and acceptable to ask a data repository for all the data it doesn't contain. We normally have a conversation that runs:
    Tech: "So, you want to see all data in the database, that is not in the database?"
    User: "Yes."
    Tech: "How do you propose that a database gives you all the information it doesn't have?"
    User: "Oh. Now you put it that way..."
    Actually I wish someone would hurry up and invent a data repository that does this; that way, if I suddenly took a liking to The Beatles but didn't have any of their songs in my collection, I could ask the data repository for all the Beatles songs I didn't have and... bam! I'd have all the Beatles songs! Cool.
    Seriously, you're asking us to tell you how to make an Oracle db tell you what information it doesn't have, but you're refusing to supply a block of data for it to compare against (i.e. so it can know what "everything" is, in the phrase "everything it doesn't have").
    Do you really see it as possible?
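    For the record, once that missing block of data is supplied, the query itself is easy. A minimal sketch against the table T1 from the question, with the list hard-coded as an inline view:
    SELECT x
    FROM  (SELECT 'aaaaa' AS x FROM dual UNION ALL
           SELECT 'bbbbb' FROM dual UNION ALL
           SELECT 'ccccc' FROM dual)
    MINUS
    SELECT x FROM t1;
    The inline view is the "everything" being compared against; MINUS then returns 'bbbbb', the one value T1 does not contain.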
    Edited by: charred on Oct 22, 2008 9:45 AM

  • PR00 is missing in the table PRCC_COND_CT

    Hi Experts,
    I have successfully replicated all condition objects like DNL_CUST_CNDALL, DNL_CUST_CND, DNL_CUST_PRC from ECC to CRM 2007.
    When I try to create a sales order in CRM, condition types are missing; I can see only values of zero in the Conditions tab page, for which I have maintained the condition maintenance group for PRODUCTCRM in order to see condition types in CRM product conditions as per Building Block (C03).
    What I observed is that condition types like PR00, K004, K005, K007 are missing in the table PRCC_COND_CT, i.e. they were not properly replicated into the CRM system, even though I have run the initial replication for all condition objects many times.
    Hence I request you all: could you please tell me whether there is any way to get all condition types into CRM?
    Would it be possible to add condition types manually in the CRM system?
    I would appreciate your help.
    Thanking you in advance,
    Regards,
    Rao

    Hi Bhanu,
    I have checked in SLG1; there are the following errors and warnings.
    Errors:
    Table PRCC_COND_PPD has been posted in the database with errors
    Message no. CND_MAP155
    Table PRCC_COND_CT has been posted in the database with errors
    Message no. CND_MAP155
    1. Error converting field FIELD_TIMESTAMP into /SAPCND/T685 for condition type BGEW.
        Message no. CND_MAP181
    Diagnosis
    Data records from the tables mentioned are required for converting field contents. These tables show inconsistencies that could have the following causes:
    A required entry is not available in one of the tables.
    Field contents for table entries are not compatible with each other.
    System Response
    Only those data records are modified in the table for which consistent table entries exist, and for which conversion of field entry is possible.
    Procedure
    With the help of OSS Note 314542 and instructions contained therein, analyze causes, and make necessary corrections.
    2. Error converting field FIELD_TIMESTAMP into /SAPCND/T685 for condition type BI00.
        Message no. CND_MAP181
    Diagnosis
    Data records from the tables mentioned are required for converting field contents. These tables show inconsistencies that could have the following causes:
    A required entry is not available in one of the tables.
    Field contents for table entries are not compatible with each other.
    System Response
    Only those data records are modified in the table for which consistent table entries exist, and for which conversion of field entry is possible.
    Procedure
    With the help of OSS Note 314542 and instructions contained therein, analyze causes, and make necessary corrections.
    3. Error converting field FIELD_TIMESTAMP into /SAPCND/T685 for condition type PR00.
    Message no. CND_MAP181
    Diagnosis
    Data records from the tables mentioned are required for converting field contents. These tables show inconsistencies that could have the following causes:
    A required entry is not available in one of the tables.
    Field contents for table entries are not compatible with each other.
    System Response
    Only those data records are modified in the table for which consistent table entries exist, and for which conversion of field entry is possible.
    Procedure
    With the help of OSS Note 314542 and instructions contained therein, analyze causes, and make necessary corrections.
    There are the following warnings:
    1. Double entry for table PRCC_COND_PPD. For details, see long text.
        Message no. CND_MAP815
    Diagnosis
    An error occurred during the update. You tried to update a data record with the key KAPPL = CRM and KALSM = 18CBCL in table PRCC_COND_PPD. A data record with the above key already exists in table PRCC_COND_PPD.
    System Response
    Data records with identical keys conflict with data integrity. The above data record was not updated for this reason.
    Procedure
    Use SAP Note 0314315 to eliminate the cause of the error.
    2. Double entry for table PRCC_COND_CT. For details, see long text.
    Message no. CND_MAP812
    Diagnosis
    An error occurred during the update. You tried to update a data record with the key KAPPL = CRM and KSCHL = 18CL in table PRCC_COND_CT. A data record with the above key already exists in table PRCC_COND_CT.
    System Response
    Data records with identical keys conflict with data integrity. The above data record was not updated for this reason.
    Procedure
    Use SAP Note 0314315 to eliminate the cause of the error.
    Edited by: padmarao mota on Mar 13, 2009 9:47 PM

  • Error " Data missing for the entry check while creating a new waste code

    Hi all,
    While setting up a new waste code I get the error "Data missing for the entry check, correction:" while filling the NAM - WASTECOCAT - LER item.
    This should look for the catalog's name included in the phrase set, but for some reason it doesn't find it, giving me this error.
    I am changing the original characteristics, phrase set, classes, and value assignment type, just to have my own structure with Z-names for all of them.
    I have also changed the environment parameter WAM_PHRSET_WACATLG to the name of my phrase set.
    I have checked everything several times, watching for typos and looking for a missing step.
    I have even tried including my new Z characteristics in the class while leaving the original SAP_EHS_1024_001_WASTE_CATALOG (changing the environment parameter WAM_PHRSET_WACATLG to SAP_EHS_1024_001_WASTE_CATALOG), and that works.
    I would like to replace this characteristic with Z_EHS_WA_WASTE_CATALOG:
    Phrase set: Z_EHS_WA_WASTE_CATALOG
    Environment parameter: WAM_PHRSET_WACATLG = Z_EHS_WA_WASTE_CATALOG
    After matching up the master data it should work fine, but I might be missing something to get it running OK.
    Any idea?
    Regards,
    Alvaro

    Hello Juan Carlos,
    The value and class that I want to duplicate, and which don't work, are for the waste code. I have also duplicated the one you displayed (waste properties) without any problem.
    1. I have duplicated and changed class SAP_EHS_1024_001 to Z_EHS_WA.
    2. Created a copy of the 5 characteristics included in this class:
    SAP_EHS_1024_001_WASTE_CATALOG
    SAP_EHS_1024_001_WASTE_CODE
    SAP_EHS_1024_001_WA_SUBCATEG
    SAP_EHS_1024_001_WA_CATEGORY
    SAP_EHS_1024_001_REMARK
    and changed the names to:
    Z_EHS_WA_WASTE_CATALOG
    Z_EHS_WA_WASTE_CODE
    Z_EHS_WA_SUBCATEG
    Z_EHS_WA_CATEGORY
    Z_EHS_WA_REMARK.
    I checked that the function C14K_WASTECATLG_CHECK is in the value of the Z_EHS_WA_WASTE_CODE characteristic.
    I checked that the function C14K_WASTECODE_CHECK is in the value of the Z_EHS_WA_WASTE_CATALOG characteristic.
    3. Created phrase sets for each new category, with the same names.
    4. Matched up the master data.
    5. Changed the environment parameter to Z_EHS_WA_WASTE_CATALOG.
    I think I have followed all the steps, but for some reason it doesn't find the catalog.
    The phrase for the catalog is EWC in English and LER in Spanish.
    Regards,
    Alvaro

  • Not displaying the data in the table. What is the issue?

    I have a problem displaying the RFC model object data in a table.
    I have created the table in the view, then chose the "Create Binding" option in the Outline window to map the particular RFC model object to the table for display. The data is not displayed in the table, but the RFC model object does contain data: when I try to display it with MessageManager.reportSuccess(), it displays the data.
    Can anyone tell me what the issue is?

    First, in your view layout in NWDS, look at the table view: do you see field names in the columns and rows? If so, then I believe that you have bound correctly. Also, in your executeBAPI method, make sure that it looks something like this.
        public void executeBapi_Gl_Acc_Getlist_Input( )
        {
            //@@begin executeBapi_Gl_Acc_Getlist_Input()
            try {
                wdContext.currentBapi_Gl_Acc_Getlist_InputElement().modelObject().execute();
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            // invalidate the output node so the bound table re-reads the result
            wdContext.nodeOutput().invalidate();
            //@@end
        }
    Regards,
    Rich Heilman

  • Problem with displaying Date field in the table.

    Hi All,
    I am trying to display data in a table UI element.
    In that data, I have one field of type DATE. The table displays it like this: "01.02.2009".
    Now my requirement is that when I modify that DATE field, it should allow changing "01.02.2009" to "26.02.2009".
    But while modifying the DATE field, I want to show the calendar of that month, in which I can select another date
    (like a normal Date UI element shows the calendar).
    Can anyone please help me?
    Thanks in advance!
    Regards,
    Sreelakshmi.

    Hi,
          Go to the context attribute that is mapped to the DATE field of the table and change the property INPUT HELP MODE to AUTOMATIC, and it works.
    Regards,
    Manne.

  • How to generate test data for all the tables in oracle

    I am planning to use PL/SQL to generate test data in all the tables in a schema. The schema name is given as an input parameter, along with the minimum records for master tables and the minimum records for child tables. Data should be consistent in the columns which are used for constraints, i.e. using the same column values.
    I am planning to implement something like:
    execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
    schemaname = owner,
    minrecinmstrtbl = minimum records to insert into each parent table,
    minrecsforchildtable = minimum records to enter into each child table of each master table;
    using all_tables where owner = schemaname,
    all_tab_columns and all_constraints where owner = schemaname,
    and the dbms_random package.
    Does anyone have a better idea of how to do this? Is this functionality already there in the Oracle db?

    Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
    There are two approaches you can take with this. I'll mention both and then ask which
    one you think you would find most useful for your requirements.
    One approach I would call the generic bottom-up approach which is the one I think you
    are referring to.
    This system is a generic test data generator. It isn't designed to generate data for any
    particular existing table or application but is the general case solution.
    Building on damorgan's advice, define the basic hierarchy (table collection, tables, data) and start at the data level.
    1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
    2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
    a. min length - the minimum length to generate
    b. max length - the maximum length
    c. prefix - a prefix for the generated data; e.g. for an address field you might want a 'add1' prefix
    d. suffix - a suffix for the generated data; see prefix
    e. whether to generate NULLs
    3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
    min/max scale.
    4. Store the attribute combinations in Oracle tables.
    5. Build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated.
    6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
    7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
    8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DEPTNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
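    As a minimal sketch of step 5 in plain SQL, here the attribute values are hard-coded rather than read from the metadata tables of step 4, and the column names are made up for illustration:
    SELECT ROUND (dbms_random.value (0, 99999), 2)                AS amount    -- random NUMBER, scale 2
    ,      'add1' || dbms_random.string ('l', 26)                 AS address1  -- prefixed VARCHAR2(30)
    ,      TRUNC (DATE '2008-01-01' + dbms_random.value (0, 365)) AS created   -- random DATE in 2008
    FROM   dual
    CONNECT BY LEVEL <= 100;
    The real generator would take min/max length, prefix, suffix and NULL percentage as parameters instead of literals.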
    The second approach I have used more often. I would call it the top-down approach, and I use
    it when test data is needed for an existing system. The main use case here is to avoid
    having to copy production data to QA, TEST or DEV environments.
    QA people want to test with data that they are familiar with: names, companies, code values.
    I've found they aren't often fond of random character strings for names of things.
    The second approach I use for mature systems where there is already plenty of data to choose from.
    It involves selecting subsets of data from each of the existing tables and saving that data in a
    set of test tables. This data can then be used for regression testing and for automated unit testing of
    existing functionality and functionality that is being developed.
    QA can use data they are already familiar with and can test the application (GUI?) interface on that
    data to see if they get the expected changes.
    For each table to be tested (e.g. DEPT) I create two test system tables. A BEFORE table and an EXPECTED table.
    1. DEPT_TEST_BEFORE
         This table has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
         test for that test case is performed.
         CREATE TABLE DEPT_TEST_BEFORE (
             TESTCASE NUMBER,
             DEPTNO   NUMBER(2),
             DNAME    VARCHAR2(14 BYTE),
             LOC      VARCHAR2(13 BYTE)
         );
    2. DEPT_TEST_EXPECTED
         This table also has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look AFTER the
         test for that test case is performed.
    Each of these tables is a mirror image of the actual application table, with one new column
    added that contains a value representing the test case number.
    To create test case #3, identify or create the DEPT records you want to use for test case #3.
    Insert these records into DEPT_TEST_BEFORE:
         INSERT INTO DEPT_TEST_BEFORE
         SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
    Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
    look after test #3 is run. For example, if test #3 creates one new record, add all the
    records from the BEFORE data set and add a new one for the new record.
    When you want to run test case #3 the process is basically (ignore for this illustration that
    there is a foreign key between DEPT and EMP):
    1. delete the records from SCOTT.DEPT that correspond to test case #3 DEPT records.
              DELETE FROM DEPT
              WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
    2. insert the test data set records for SCOTT.DEPT for test case #3.
              INSERT INTO DEPT
              SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
    3. Perform the test.
    4. compare the actual results with the expected results.
         This is done by a function that compares the records in DEPT with the records
         in DEPT_TEST_EXPECTED for test #3.
         I usually store these results in yet another table or just report them out.
    5. Report out the differences.
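    A minimal sketch of the comparison in step 4, using MINUS in both directions (table names as above):
    -- expected rows that are missing after the test
    SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3
    MINUS
    SELECT deptno, dname, loc FROM dept;
    -- actual rows that were not expected
    SELECT deptno, dname, loc FROM dept
    MINUS
    SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3;
    If both queries return no rows, test #3 passed.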
    This second approach uses data the users (QA) are already familiar with, is scalable, and
    makes it easy to add new data that meets business requirements.
    It is also easy to automatically generate the necessary tables and test setup/breakdown
    using a table-driven metadata approach. Adding a new test table is as easy as calling
    a stored procedure; the procedure can generate the DDL or create the actual tables needed
    for the BEFORE and AFTER snapshots.
    The main disadvantage is that existing data will almost never cover the corner cases.
    But you can add data for these. By corner cases I mean data that defines the limits
    for a data type: a VARCHAR2(30) name field should have at least one test record that
    has a name that is 30 characters long.
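    Such corner-case rows are easy to generate rather than hunt for; for example (table and column hypothetical):
    INSERT INTO customers (name) VALUES (RPAD ('X', 30, 'X'));  -- exactly 30 characters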
    Which of these approaches makes the most sense for you?

  • How to transport the data contained in a table from development to production

    How do I transport the data contained in a table from development to production?
    Please let me know ASAP.

    Hello Dilip
    Create a workbench request and add the following entries to the request:
    Object key: R3TR TABU <name of z-table>
    For this object add the following value key:
    - client-independent table: '*'
    - client-dependent table (e.g. client 100): '100*'
    See also: [SAP Network Blog: Transport Table Entries|/people/community.user/blog/2007/01/07/transport-table-entries]
    Regards
      Uwe

  • Select data from all the table names in the view

    Hi,
    "I have some tables with names T_SRI_MMYYYY in my database.
    I created a view ,Say "Summary_View" for all the table names
    with "T_SRI_%".
    Now i want to select data from all the tables in the view
    Summary_View.
    How can i do that ? Please throw some light on the same?
    Thanks and Regards
    Srinivas Chebolu

    Srinivas,
    There are a couple of things that I am unsure of here.
    Firstly, does your view definition say something like ...
    Select ...
    From "T_SRI_%"
    If so, it is not valid. Oracle won't allow this.
    The second thing is that your naming convention for the
    tables suggests to me that each table is the same except
    that they store data for different time periods. This would be
    a very bad design methodology. You should have a single
    table with an extra column to state what period is referred to,
    although you can partition it into segments for each period if
    appropriate.
    Apologies if I am misinterpreting your question, but perhaps
    you could post your view definition and table definitions
    here.
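    That said, if you are stuck with the existing per-period tables, the usual workaround is a view that lists them explicitly with UNION ALL. A sketch, with the monthly table names assumed and all tables assumed to share one structure:
    CREATE OR REPLACE VIEW summary_view AS
    SELECT 'T_SRI_012008' AS source_table, t.* FROM t_sri_012008 t
    UNION ALL
    SELECT 'T_SRI_022008', t.* FROM t_sri_022008 t
    UNION ALL
    SELECT 'T_SRI_032008', t.* FROM t_sri_032008 t;
    The view must be recreated whenever a new monthly table appears, which is one more argument for the single partitioned table suggested above.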

  • While Creating a New Insert Form, Existing Data Displays from the Table

    Hi
    I am new to Sun Java Studio Creator and new to Java as well. While creating a new insert form, existing data from the table is displayed when I run the form. Can anyone help me solve this one?

    Dear Giri,
    As per your advice, I have literally searched the properties of the components to set the value to null, but I am unable to find the value in the Properties palette. I have tried various options, like:
    In the Properties palette:
    TextField1_onselect: use Value null
    TextField1_text: I selected the "use Value" option and manually keyed in null;
    In the JSP page, I manually keyed in the null value as below:
    <ui:textField binding="#{BI.textField2}" id="textField2" style="position: absolute; left: 240px; top: 96px" text="#{BI.bDataProvider.value['ISSUENO'] = null}"/>
    <ui:textField binding="#{BI.textField2.Value = null }" id="textField2" style="position: absolute; left: 240px; top: 96px" text="#{BI.bDataProvider.value['ISSUENO'] = null}"/>
    In the end I failed, and I am at a loss. Can you please tell me where I have to set the null value for the components? It would be very kind of you.

  • Date field is missing in the table maintenance generator

    Hi,
      A table is created, and a date field is in that table. I tried to create a table maintenance generator for this table using 2 screens. The table maintenance generator was created, but when I go to SM30, I am unable to see this DATE field on the first screen. However, when I go to the maintenance screen, I am able to see the field. This is happening particularly with the date field.
    Could you please help me?
    Regards,
    Satya

    Go to SE11 and enter the table that you want.
    Choose Utilities, Table Maintenance Generator.
    Configure the options that you need.
    Press F6 and go to SM30.
    That is all there is to it.
    Regards

  • Overlapping Dates, Denormalization of the Tables

    Hi Guys,
    I am in a situation where we are denormalizing a few tables for better performance and to reduce the joins. For this I am joining the tables using logic that checks the dates and adds the fields corresponding to those dates, so that the joined data is correct at any given point in time. A simple example:
    In the tables below, in Table1 the case_number 0023A has status "AC" during the time period 15-jan-08 to 31-dec-9999 (open-ended, i.e. current), and in Table2 the same case has center_code "C12" during 22-jun-08 to 31-dec-9999. When the tables are joined, the overlapping dates will have common data like 0023A, 22-JUN-2008, 31-DEC-9999, AC, C12.
    My [previous posting|http://forums.oracle.com/forums/thread.jspa?forumID=75&threadID=682324] on the forums was answered by Frank and was working perfectly for me in getting the results for different data, as in the above-mentioned thread, but the present data set is giving me a problem by not selecting the non-overlapping dates.
    CREATE TABLE TABLE1 (
    CASE_NUMBER VARCHAR2(5),
    CHANGE_EFF_DATE DATE,
    END_EFF_DATE DATE,
    STATUS VARCHAR2(2) );
    INSERT INTO TABLE1 VALUES
    ( '0023A'
    ,TO_DATE('15-JAN-2008','DD-MON-YYYY')
    ,TO_DATE('31-DEC-9999','DD-MON-YYYY')
    ,'AC'
    );
    INSERT INTO TABLE1 VALUES
    ( '0023A'
    ,TO_DATE('07-OCT-2007','DD-MON-YYYY')
    ,TO_DATE('14-JAN-2008','DD-MON-YYYY')
    ,'CL'
    );
    INSERT INTO TABLE1 VALUES
    ( '0023A'
    ,TO_DATE('08-APR-2007','DD-MON-YYYY')
    ,TO_DATE('06-OCT-2007','DD-MON-YYYY')
    ,'AC'
    );
    INSERT INTO TABLE1 VALUES
    ( '0023A'
    ,TO_DATE('13-MAR-2007','DD-MON-YYYY')
    ,TO_DATE('07-APR-2007','DD-MON-YYYY')
    ,'RJ'
    );
    INSERT INTO TABLE1 VALUES
    ( '0023A'
    ,TO_DATE('31-MAY-2005','DD-MON-YYYY')
    ,TO_DATE('12-MAR-2007','DD-MON-YYYY')
    ,'AP'
    );
    CREATE TABLE TABLE2 (
    CASE_NUMBER VARCHAR2(5),
    CHANGE_EFF_DATE DATE,
    END_EFF_DATE DATE,
    CENTER_CODE VARCHAR2(3) );
    INSERT INTO TABLE2 VALUES
    ( '0023A'
    ,TO_DATE('22-JUN-2007','DD-MON-YYYY')
    ,TO_DATE('31-DEC-9999','DD-MON-YYYY')
    ,'C12'
    );
    INSERT INTO TABLE2 VALUES
    ( '0023A'
    ,TO_DATE('09-MAR-2007','DD-MON-YYYY')
    ,TO_DATE('21-JUN-2007','DD-MON-YYYY')
    ,'101'
    );
    SQL> SELECT * FROM TABLE1;
    CASE_ CHANGE_EF END_EFF_D ST
    0023A 15-JAN-08 31-DEC-99 AC
    0023A 07-OCT-07 14-JAN-08 CL
    0023A 08-APR-07 06-OCT-07 AC
    0023A 13-MAR-07 07-APR-07 RJ
    0023A 31-MAY-05 12-MAR-07 AP
    SQL> SELECT * FROM TABLE2;
    CASE_ CHANGE_EF END_EFF_D CEN
    0023A 22-JUN-07 31-DEC-99 C12
    0023A 09-MAR-07 21-JUN-07 101
    Here is the query I am running to get the joined information from both tables with regard to a specific point in time.
    SELECT T1.CASE_NUMBER
    ,GREATEST(T1.CHANGE_EFF_DATE,T2.CHANGE_EFF_DATE) CHANGE_EFF_DATE
    ,LEAST(T1.END_EFF_DATE,T2.END_EFF_DATE) END_EFF_DATE
    ,T1.STATUS
    ,T2.CENTER_CODE
    FROM
    TABLE1 T1
    LEFT OUTER JOIN
    TABLE2 T2
    ON
    T1.CASE_NUMBER=T2.CASE_NUMBER AND
    T1.CHANGE_EFF_DATE <= T2.END_EFF_DATE AND
    T2.CHANGE_EFF_DATE <= T1.END_EFF_DATE
    ORDER BY 2;
    Here is the result-set I am getting :
    CASE_ CHANGE_EF END_EFF_D ST CEN
    0023A 09-MAR-07 12-MAR-07 AP 101
    0023A 13-MAR-07 07-APR-07 RJ 101
    0023A 08-APR-07 21-JUN-07 AC 101
    0023A 22-JUN-07 06-OCT-07 AC C12
    0023A 07-OCT-07 14-JAN-08 CL C12
    0023A 15-JAN-08 31-DEC-99 AC C12
    But my result set should include the non-overlapping dates as well, looking like the listing below; I am missing the top (first) record in my output:
    CASE_ CHANGE_EF END_EFF_D ST CEN
    0023A 31-MAY-05 08-MAR-07 AP
    0023A 09-MAR-07 12-MAR-07 AP 101
    0023A 13-MAR-07 07-APR-07 RJ 101
    0023A 08-APR-07 21-JUN-07 AC 101
    0023A 22-JUN-07 06-OCT-07 AC C12
    0023A 07-OCT-07 14-JAN-08 CL C12
    0023A 15-JAN-08 31-DEC-99 AC C12
    I shall be really thankful if you guys can help me.
    Thanks.
    Venu
    Edited by: Oracle, Developer on Dec 11, 2008 9:30 PM

    Hi, Venu,
    If I understand the problem correctly, you need a row in table2 that covers all dates before the earliest change_eff_date, similar to the way you have a row that covers all dates after the latest change_eff_date. You don't have to store such a row in the table: you can generate one at run time, and use UNION to add it to your real data.
    WITH     t2     AS
    (
         SELECT     case_number
         ,     change_eff_date
         ,     end_eff_date
         ,     center_code
         FROM     table2
         UNION
         SELECT     case_number
         ,     TO_DATE (1, 'J')          AS change_eff_date     -- Earliest possible date
         ,     MIN (change_eff_date) - 1     AS end_eff_date
         ,     NULL                    AS center_code
         FROM     table2
         GROUP BY     case_number
    )
    SELECT     T1.CASE_NUMBER
    ,     GREATEST (T1.CHANGE_EFF_DATE, T2.CHANGE_EFF_DATE)     CHANGE_EFF_DATE
    ,     LEAST (T1.END_EFF_DATE, T2.END_EFF_DATE)          END_EFF_DATE
    ,     T1.STATUS
    ,     T2.CENTER_CODE
    FROM          TABLE1     T1
    LEFT OUTER JOIN          T2
    ON     T1.CASE_NUMBER          =  T2.CASE_NUMBER
    AND     T1.CHANGE_EFF_DATE     <= T2.END_EFF_DATE
    AND     T2.CHANGE_EFF_DATE     <= T1.END_EFF_DATE
    ORDER BY 1, 2;
    Note that the main query is exactly what you had before, except for the definition of t2.
    Where t2 had been simply table2, it is now the UNION of table2 with one extra row per case_number, whose change_eff_date is in 4712 BCE.

  • Query View missing in the BI Content

    Hi
    I want to install a 3.x web template which uses a query view as the data provider, but the query view is not available as a D-version; hence the web template throws the error that the query view is missing, and the web template cannot be executed.
    I have checked the tables: RSZWVIEW has an entry, but there is no object in RSORBCT to install.
    SAP BI_CONT version 704, level 003.
    When I try to activate the D-version of the query view, it gives the following message:
    Object 0CSAL_C07_Q0101_V01 (Query View) could not be collected for object  ()
    Message no. RSO296
    Diagnosis
    You have collected objects in the BI Metadata Repository. Associated objects for object  of type  have also been collected. Object 0CSAL_C07_Q0101_V01 of type Query View was among these objects.
    However, this object 0CSAL_C07_Q0101_V01 of type Query View is not available in the metadata repository.
    System Response
    Object 0CSAL_C07_Q0101_V01 of type Query View is ignored in further collections. The links for object  of type  are incomplete. This may result in you not being able to activate this object.
    Procedure
    If this error occurs while you are installing BI Content, an error may have occurred during the delivery. Inform SAP. In all other cases, check that the object has not been deleted by another user during collection, for example.
    Please let me know in case any other info is required. Thanks!

    Hi Kumar,
    Go to the Datasource Repository (RSA2), give the datasource name and click on Display; from there you will get the extract structure name. Copy it and go to SE11. In the table field, give the extract structure name and click on the where-used list at the top; from there you will get all the program names / function modules where this extract structure is used. Copy that name and open it through SE37.
    Hope this helps.
    Regards,
    Debjani

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts
    (the groups are predefined here and contain sets of G/L account numbers).
    If I run a separate select for each group, it will be a performance issue. In order to avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select once and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Hi Praveen,
    Performance Notes
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
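    In standard SQL terms (using the SCOTT demo tables for illustration), this returns one row per department instead of transferring every EMP row to the application server:
    SELECT deptno
    ,      COUNT (*) AS emp_count
    ,      SUM (sal) AS total_sal
    ,      MIN (sal) AS min_sal
    ,      MAX (sal) AS max_sal
    FROM   emp
    GROUP  BY deptno;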
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple select loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
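    In standard SQL terms (SCOTT tables again for illustration), the subquery below is evaluated entirely in the database; its intermediate rows never travel to the application server:
    SELECT empno, ename
    FROM   emp
    WHERE  deptno IN (SELECT deptno
                      FROM   dept
                      WHERE  loc = 'DALLAS');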
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
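    A sketch of such a covering index in SQL terms (index name assumed):
    CREATE INDEX emp_deptno_sal ON emp (deptno, sal);
    -- can be answered from the index alone, without touching the table data
    SELECT deptno, sal FROM emp WHERE deptno = 20;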
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
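    For example, in SQL terms:
    SELECT * FROM emp WHERE deptno = 10 OR deptno = 20;  -- OR may defeat the index
    SELECT * FROM emp WHERE deptno IN (10, 20);          -- equivalent, index-friendly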
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
Update work processes execute database update requests. Update requests are part of an SAP LUW that bundles the database operations resulting from the dialog into a database LUW for asynchronous processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
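Application programs request these locks through function modules that the system generates from a lock object defined in the ABAP Dictionary; a minimal sketch, assuming a hypothetical lock object EZ_MYOBJ with a key field KEYFIELD (the ENQUEUE_/DEQUEUE_ naming follows the standard pattern):

* Ask the enqueue work process to record the lock in the lock table
CALL FUNCTION 'ENQUEUE_EZ_MYOBJ'
  EXPORTING
    keyfield       = lv_key
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
* Another user already holds the lock; handle the conflict here
ENDIF.

* Release the lock explicitly (it is also released at the end of the SAP LUW)
CALL FUNCTION 'DEQUEUE_EZ_MYOBJ'
  EXPORTING
    keyfield = lv_key.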
    Spool Work Process
The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
An application server contains work processes, which are components that can run an application; each work process executes one dialog step at a time. Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
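For example, an ABAP program reaches a function module in another system through the gateway with a Remote Function Call; a minimal sketch, assuming a hypothetical remote-enabled function module Z_GET_DATA and an RFC destination SYSTEM_B maintained in the system:

CALL FUNCTION 'Z_GET_DATA' DESTINATION 'SYSTEM_B'
  IMPORTING
    ev_result             = lv_result
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2.
IF sy-subrc <> 0.
* The remote system could not be reached or failed during the call
ENDIF.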
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is, the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address space of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 system architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically associated database operations is called an SAP LUW. Unlike a database LUW, an SAP LUW includes all of the dialog steps in a logical unit, including the database update.
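The most common of these bundling techniques is to register update function modules during the dialog steps and to trigger them at the end of the SAP LUW; a minimal sketch, assuming a hypothetical function module Z_UPDATE_ORDER that is flagged as an update module:

* In any dialog step: register the change; nothing is written yet
CALL FUNCTION 'Z_UPDATE_ORDER' IN UPDATE TASK
  EXPORTING
    is_order = ls_order.

* At the end of the SAP LUW: an update work process executes all
* registered update modules together in one database LUW
COMMIT WORK.

* Or discard everything registered since the last commit:
* ROLLBACK WORK.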
    Happy Reading...
    shibu

  • Data Display in the Table Control

    Hi All,
         Please help me understand how to fetch the data from the internal table into the Table Control. I have defined the internal table with some fields and wrote the select query to fetch the records from the database, but I am unable to work out how to pass the internal table records into the table control.
    Your suggestion will be appreciated.
    Thanks and Regards,
    Murali Krishna Tatoju

Hi,
If the select query is the first operation, before the screen is displayed, then run it in the PBO of the screen.
In the screen flow logic:
PROCESS BEFORE OUTPUT.
  MODULE select_data.
  LOOP AT itab INTO wa_itab WITH CONTROL tc.
    MODULE transfer_itab_to_tc.
  ENDLOOP.
In the program:
MODULE select_data OUTPUT.
* Fetch the records only once, before the first display
  IF itab IS INITIAL.
*   your select query here, filling itab
  ENDIF.
ENDMODULE.
MODULE transfer_itab_to_tc OUTPUT.
* t_tab is the structure from which the fields on the table control were taken
  MOVE-CORRESPONDING wa_itab TO t_tab.
ENDMODULE.
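To get user changes from the table control back into the internal table, add a corresponding loop in PAI as well; a minimal sketch, assuming the same names itab, wa_itab, t_tab and tc as above:
In the screen flow logic:
PROCESS AFTER INPUT.
  LOOP AT itab.
    MODULE transfer_tc_to_itab.
  ENDLOOP.
In the program:
MODULE transfer_tc_to_itab INPUT.
* Copy the changed screen fields back into the current table line
  MOVE-CORRESPONDING t_tab TO wa_itab.
  MODIFY itab FROM wa_itab INDEX tc-current_line.
ENDMODULE.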
    Cheerz
    Ram
