Remove filter criteria from the table

I am using an af:table to display data, with column filters enabled to filter the column data. After entering any filter criteria, the only way to clear them is to remove the values manually and press Enter. I want to clear the filter criteria programmatically and display the table data without any filter applied.
How can I remove the table filter criteria in a backing bean and refresh the table?
Thanks in advance.

I can't find the example "Programmatically Manipulating a Table's QBE Filter Fields".
Where is it?
https://smuenchadf.samplecode.oracle.com/samples/ClearTableColumnFilterFields.zip
Thanks
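
The sample archive no longer seems to be available. Below is a minimal backing-bean sketch of the usual approach for clearing an af:table's QBE filter fields and refreshing the table. The component binding and method names are assumptions, and getFilterCriteria() is the classic 11g API; newer releases work with getFilterConjunctionCriterion() instead.

    // imports: javax.faces.event.ActionEvent,
    //          oracle.adf.view.rich.component.rich.data.RichTable,
    //          oracle.adf.view.rich.context.AdfFacesContext,
    //          oracle.adf.view.rich.event.QueryEvent,
    //          oracle.adf.view.rich.model.FilterableQueryDescriptor
    private RichTable table;                          // bound via the af:table "binding" attribute
    public void setTable(RichTable table) { this.table = table; }
    public RichTable getTable() { return table; }

    // Action listener for a "Clear Filter" button
    public void clearFilter(ActionEvent event) {
        FilterableQueryDescriptor qd = (FilterableQueryDescriptor) table.getFilterModel();
        if (qd != null && qd.getFilterCriteria() != null) {
            qd.getFilterCriteria().clear();                   // remove all QBE filter values
            table.queueEvent(new QueryEvent(table, qd));      // re-execute the query without criteria
        }
        AdfFacesContext.getCurrentInstance().addPartialTarget(table);  // refresh the table
    }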

Similar Messages

  • How to remove the Personalize link from the table layout displayed on Page.

    Whenever I create a new page and a table layout is created on it, the layout displays a link called "Personalize this table". The same link is displayed for every table I create.
    If there are 10 such table layouts, I get the personalize option on all 10 layouts on the same page.
    If anyone knows about this, please help me figure out how to remove that link from the page.

    You can set the value for the profile FND: Personalization Region Link Enabled / FND_PERSONALIZATION_REGION_LINK_ENABLED to No.
    For additional information, please check the Personalization topic of the Oracle Application Framework Profile Options chapter in the OA Framework developer's guide.

  • How to remove the bookmark from the table of contents?

    Hi everyone, I have a question:
    How to remove the bookmark from the table of contents in Captivate 6?
    Regards...

    Doug, your frustration is making you "bug" gun trigger happy. LOL
    LOL, I suppose I have.
    I believe the answer you are looking for is in the publishing preferences: under LMS advanced settings, look for the option: never send resume data.
    [can someone please confirm this - I am not at a desktop now]
    Anyhow, selecting this option will stop resume data - so the lesson will treat even returning learners as new. Hopefully, this will work for you.
    This works! Thank you again!
    *but* Why isn't Captivate asking the learner to make the choice of whether to start over or resume? It does this correctly if the lesson is published stand-alone.
    Thanks again!

  • How to remove the approved order from the table in sapui5

    Hi Experts,
    How do I remove an approved order from the table in SAPUI5? After approving the order, how do I remove it from the table?
    Please help me.
    Thanks & regards
    Chitti Babu

    Hi,
    The problem is the OBIEE installation on your machine. Someone might have deleted the PDF option.
    Refer to: http://obiee101.blogspot.com/2009/07/obiee-dashboard-default-controls.html
    Try to find the tag that has to be removed from controlmessages.xml so that you have only HTML.
    Update:
    Stop the BI Server, try removing the tag below, and restart the server.
    <sawm:if name="enablePDF"><a class="NQWMenuItem" name="pdf" href="javascript:void(null)" onclick="return PortalPrint('@{pdfURL}[javaScriptString]',@{bNewWindow});">
    <sawm:messageRef name="kmsgDashboardPrintPDF"/></a></sawm:if>
    Regards,
    Srikanth
    Edited by: Srikanth Mandadi on Apr 19, 2011 3:36 AM

  • How do I remove a row from the database?

    Guys and Gals,
    I'm using Studio Edition Version 11.1.1.3.0.
    The code below will delete a row from my table, but how do I actually delete the row from the database as well?
        DCBindingContainer dcbc = (DCBindingContainer)getBindings();
        DCIteratorBinding dcib = dcbc.findIteratorBinding("TipsSelectorIterator");
        dcib.removeCurrentRow();
        BindingContainer bindings = getBindings();
        OperationBinding operationBinding = bindings.getOperationBinding("Commit");
        Object result = operationBinding.execute();
        AdfFacesContext adffacesctx = AdfFacesContext.getCurrentInstance();
        adffacesctx.addPartialTarget(this.getQueryTable());
    In this post, I believe Frank sums up what I should do: Remove row from af:table
    > The problem in your case is that this removes the row from the iterator but not your business service. EJB exposes explicit methods to remove an entity. A method like removeEmployees(employee) exposed on the EJB model must be called.
    > You can drag and drop the "removeEmployees" method then as a button to your page. Rename the text label to "Remove" or "Delete". In the opened dialog box, point the method argument to the following EL: #{bindings.iteratorName.currentRow.dataProvider}. Next time you press the button, the row is deleted from the iterator and the business service.
    Huh? So to delete the row, I need to expose a method. But where? Do I do this in the AppModuleImpl so I can expose it to the Client Interface? But then how do I access entities? And what data type is the #{bindings.iteratorName.currentRow.dataProvider}?
    If anyone could point me in a general direction, it'd be great.

    Frank and Shay,
    Thank you both for your posts. I'm amazed there's such a great place to get help. Between the forums, ADF code corner, Not Yet Documented ADF Samples, various blogs and tutorials, it's obvious you guys really care about what you do.
    On my example, that approach doesn't seem to work. I tried following it as shown in one of Shay's video tutorials: http://blogs.oracle.com/shay/2010/04/doing_two_declarative_operatio.html. Following Shay's approach, in my particular case, deletes only from the table. Perhaps it is due to my iterator / collection setup?
    TipsSelector (1 to 1) -> TipsView (1 to *)-> DependentBom -(1 to * Recursive)> RecursiveBom
    TipsSelector references the same View Object as TipsView, but it is set as a 1 to 1 relationship so that I can show TipsView on the page one record at a time. This is the same setup as Tuhra2's hierarchy viewer example. TipsView / DependentBom is a classic parent / child setup. RecursiveBom is DependentBom referencing itself. Think of it as departments within departments within departments, etc.
    I have dragged TipsSelector's Named Criteria (All Queriable Attributes) onto my page and created an ADF Query with Table. A remove button on the table's toolbar calls the two actions, Delete (from TipsSelector's iterator) and then Commit. I have modified the code slightly from my previous post but the end result seems to be the same.
        BindingContainer bindings = getBindings();
        OperationBinding operationBinding = bindings.getOperationBinding("Delete");
        operationBinding.execute();
        bindings = getBindings();
        operationBinding = bindings.getOperationBinding("Commit");
        operationBinding.execute();
        AdfFacesContext adffacesctx = AdfFacesContext.getCurrentInstance();
        adffacesctx.addPartialTarget(this.getQueryTable());
    Page Def file:
        <action id="Commit" InstanceName="AppModuleDataControl"
                DataControl="AppModuleDataControl" RequiresUpdateModel="true"
                Action="commitTransaction"/>
        <action IterBinding="TipsSelectorIterator" id="Delete"
                InstanceName="AppModuleDataControl.TipsSelector"
                DataControl="AppModuleDataControl" RequiresUpdateModel="false"
                Action="removeCurrentRow"/>On ppr of the query table, the row is gone. However, when I press the Search button on the query again, the record reappears.
    This is beyond me. Am I somehow deleting from the query results but not the database?
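    For an ADF Business Components model like the one in this page definition, one way to do what Frank describes is to expose a method on the application module and publish it on the client interface. The sketch below is only an outline under assumptions: "TipsSelector1" and the generated accessor name must be replaced with the VO instance name from your own data model.
        // In AppModuleImpl (imports: oracle.jbo.Row, oracle.jbo.server.ViewObjectImpl),
        // exposed via the client interface and dropped onto the page as a button.
        public void deleteCurrentTip() {
            ViewObjectImpl vo = getTipsSelector1();  // generated accessor for the VO instance (assumption)
            Row current = vo.getCurrentRow();
            if (current != null) {
                current.remove();                    // marks the row (and its entity) for deletion
            }
            getDBTransaction().commit();             // posts the DELETE to the database
        }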

  • How to truncate the values from the table

    Hi All,
    I am working on an issue where we first delete all the records from a table and then, based on a few conditions, put records back into that table. When we tried to run this program along with a few others that do almost the same thing, we ran into problems. We scheduled jobs for just these programs, but after a certain amount of time a couple of the jobs were canceled. I talked to my Basis guy, and he said the problem is that rather than truncating the table we are deleting the records, which takes a lot of time and space to execute. So we need to truncate the table instead of deleting from it. We are currently using the following statement:
        DELETE FROM ZTUS_PG.
        COMMIT WORK.
    So can you please tell me how we can truncate this table instead of just deleting the records, and what the effect of that would be?
    Thanks,
    Rajeev Gupta

    I don't think Basis is saying you should delete all the records from the table. They are saying to empty the table in a single operation (a much faster thing to do). I'm not sure this is the right thing to do, but you can have a look at:
    http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.apdv.sample.doc/doc/admin_scripts/s-truncate-db2.htm
    Something like:
    EXEC SQL.
      TRUNCATE TABLE ZTUS_PG REUSE STORAGE
    ENDEXEC.
    COMMIT WORK.                      "Empty table is committed here
    Rob
    Edited by: Rob Burbank on Dec 1, 2008 4:06 PM

  • How do I remove an app from the update list that is under someone else's apple id?

    How do I remove an app from the update list that is under someone else's Apple ID? The update always fails because it asks for someone else's password. I don't have the app on my Mac; it only appears in the update list. It's just annoying, because the update keeps appearing, and the reminder keeps telling me that I should install a new update.

    You installed a hacked app, originally from the Mac App Store. It contains the receipt for a different app, downloaded using an account that you don't control. You need to identify and remove the hacked app.
    Important: The app you need to remove is not necessarily the one named in the App Store alert. For example, the App Store may prompt you to update "Angry Birds" or "Twitter," but the hacked app may be something else entirely. Don't make any assumptions about which app you're looking for. To find it, you must carry out a systematic search with Spotlight.
    1. Triple-click anywhere in the line of text below on this page to select it:
    kMDItemAppStoreHasReceipt=1
    Copy the selected text to the Clipboard by pressing the key combination command-C.
    2. In the Finder, press command-F to open a search window, or select
    File ▹ Find
    from the menu bar. In the search window, select
    Search: This Mac
    from the row of tokens below the toolbar. Below that is a popup menu of search criteria, initially showing Kind. From that menu, select
    Other...
    A sheet will drop down. In that sheet, select
    Raw Query
    as the criterion, then click OK or press return.
    3. Now there will be a text box to the right of the menu of search criteria. That's where you enter the raw search query. Click in that box and paste the text you copied earlier by pressing command-V.
    4. The search window will now show all the App Store products that are installed. Compare those search results with the list of your purchases from the App Store. To see the complete list, you may need to unhide hidden purchases. If any apps were downloaded from the App Store using other Apple ID accounts that you control, sign in to the store under each of those IDs and check the purchases.
    At least one of the apps in the Spotlight search results is not among your purchases in the App Store. Move each such item to the Trash, after quitting it if it's running. You may be prompted for your administrator password. Empty the Trash.
    Quit and relaunch the App Store. Test.
    If you find these instructions confusing, ask for an alternative method.

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help selecting data from the FAGLFLEXA table. I have to select many amounts for different groups of G/L accounts (the groups are predefined here, and each contains a set of G/L account numbers).
    If I run a separate select for each group it will be a performance issue. What should I do to avoid that? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select the data once and keep it in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES (a sketch follows below).
    Check the details below as well.
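    For the FAGLFLEXA requirement above, a minimal FOR ALL ENTRIES sketch; the field names RACCT (account) and HSL (amount) are assumptions and should be adjusted to the fields you actually need.
        * Collect the G/L accounts of all predefined groups into one driver table,
        * then read FAGLFLEXA once instead of once per group.
        TYPES: BEGIN OF ty_acct,
                 racct TYPE faglflexa-racct,
               END OF ty_acct.
        DATA: lt_accts   TYPE STANDARD TABLE OF ty_acct,
              lt_amounts TYPE STANDARD TABLE OF faglflexa.

        * ... fill lt_accts from the predefined account groups ...

        IF lt_accts IS NOT INITIAL.  "an empty FOR ALL ENTRIES table would select everything
          SELECT racct hsl
            FROM faglflexa
            INTO CORRESPONDING FIELDS OF TABLE lt_amounts
            FOR ALL ENTRIES IN lt_accts
            WHERE racct = lt_accts-racct.
        ENDIF.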
    Hi Praveen,
    Performance Notes
    1.Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
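    A minimal sketch of both additions (the table, field, and literal are only for illustration):
        DATA lt_matnr TYPE STANDARD TABLE OF mara-matnr.

        SELECT DISTINCT matnr
          FROM mara
          INTO TABLE lt_matnr
          UP TO 100 ROWS
          WHERE mtart = 'FERT'.   "only 100 distinct material numbers are transferred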
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
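    For example, the sketch below (table and values are assumptions) transfers only the two aggregated results instead of the individual rows:
        DATA: lv_count TYPE i,
              lv_total TYPE mseg-menge.

        SELECT COUNT( * ) SUM( menge )
          FROM mseg
          INTO (lv_count, lv_total)
          WHERE werks = '1000'
            AND bwart = '101'.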
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
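    A minimal sketch of such a targeted update (table, column, and values are illustrative only):
        UPDATE mara SET matkl = '001'
          WHERE mtart = 'FERT'.   "only the required column of the relevant lines is changed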
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple select loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
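    A short sketch of an inner join that lists only the needed columns (MARA/MAKT are just an example pairing):
        TYPES: BEGIN OF ty_mat,
                 matnr TYPE mara-matnr,
                 maktx TYPE makt-maktx,
               END OF ty_mat.
        DATA lt_mat TYPE STANDARD TABLE OF ty_mat.

        SELECT m~matnr t~maktx
          FROM mara AS m
          INNER JOIN makt AS t ON t~matnr = m~matnr
          INTO TABLE lt_mat
          WHERE t~spras = sy-langu.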
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
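    A minimal cursor sketch (table and condition are illustrative only):
        DATA: lv_cursor TYPE cursor,
              ls_mara   TYPE mara.

        OPEN CURSOR lv_cursor FOR
          SELECT * FROM mara WHERE mtart = 'FERT'.
        DO.
          FETCH NEXT CURSOR lv_cursor INTO ls_mara.
          IF sy-subrc <> 0.
            EXIT.
          ENDIF.
          " ... process ls_mara ...
        ENDDO.
        CLOSE CURSOR lv_cursor.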
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
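    For example, instead of chaining ORs on the same indexed column, an IN list keeps the index usable (values are illustrative only):
        DATA lt_matnr TYPE STANDARD TABLE OF mara-matnr.

        SELECT matnr
          FROM mara
          INTO TABLE lt_matnr
          WHERE mtart IN ('FERT', 'HALB', 'ROH').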
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
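    For example, this read deliberately goes past the buffer when the very latest database state is needed (a sketch; T001 and the company code are illustrative only):
        DATA lt_t001 TYPE STANDARD TABLE OF t001.

        SELECT * FROM t001 BYPASSING BUFFER
          INTO TABLE lt_t001
          WHERE bukrs = '1000'.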
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundle the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application. Work processes are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program. This needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes. Furthermore, it is not restricted by the R/3 system architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • Dashboard Prompt using values not from the table

    Hi,
    I have a requirement from the client to design a dashboard report like the following.
    Dashboard prompt will have 4 filters; three filters come from the table, but the fourth filter will have 3 values not from the table. The fourth filter will have values like "Report with Sales Amount", "Report with Purchase Amount", "Report with both Purchase and Sales". I have three different table reports designed, one for each of the fourth filter's choices. But how do I implement it, both in the dashboard prompt and in navigating to the right report based on the selection?
    Is my approach correct?
    Thanks for your time and help.

    For the fourth prompt, where you have "Report with Sales Amount", "Report with Purchase Amount", and "Report with both Purchase and Sales", pull a dummy column into the prompt and write a SQL statement in "Show".
    It would be something like:
    SELECT CASE WHEN 1=0 THEN "Dimension- Customer"."Cust Name" ELSE 'Report with Sales Amount' END FROM Sales
    UNION
    SELECT CASE WHEN 1=0 THEN "Dimension- Customer"."Cust Name" ELSE 'Report with Purchase Amount' END FROM Sales
    In the prompt, set a presentation variable, say var_criteria.
    Now create report 2 with some random column, and another column that will have the values you want to display, for example 'Report with Sales Amount'.
    Create a filter on the second column that references the presentation variable var_criteria, and default it to 'Report with Sales Amount'.
    On the dashboard page, place the report in the section and enable guided navigation by selecting report 2.
    Please let me know if you have any questions.
    thanks,
    deep

  • Interpreting Predicate  and Filter information from the explain plan

    Can somebody help me understand how the predicate and filter operations affect the execution plan, or how I can conclude that the predicate or filter operation chosen by the optimizer is not optimal?

    User445775,
    The paraphrase that I provided in my previous post was from page 74 of "Cost-Based Oracle Fundamentals".
    How to word the explanation...
    Assume that you have a database table which contains all of the phone numbers and addresses for people in a state. A query is executed to find user445775 in the city named "Redmond". Assume that the query looks like this:
    SELECT
      PHONE
    FROM
      PHONE_NUMBERS
    WHERE
      CITY = 'Redmond'
      AND FULL_NAME = 'user445775';
    Assume that there are no indexes on the table. The DBMS_XPLAN would show two filter predicates applied during a full tablescan - this probably indicates an inefficient access path, especially if there is a very small percentage of people matching the WHERE clause restrictions on the table.
    Now, assume that an index is created on the CITY column. The DBMS_XPLAN would show an access predicate applied to the index on the CITY column, and a filter predicate on the table access by index for the FULL_NAME. We eliminated a large number of possible rows by applying the access predicate to jump directly to the rows with the city of interest, and then filtered out those names which were not 'user445775'. This is not terribly efficient, especially if there are a large number of people in 'Redmond' that need to be filtered out.
    Now, assume that we drop the index on the CITY column and create an index on the FULL_NAME column. The DBMS_XPLAN would show an access predicate applied to the index on the FULL_NAME column, and a filter predicate on the table access by index for the CITY column. This could be a fairly efficient plan if there are only a couple rows in the table with FULL_NAME of 'user445775', as few rows will be discarded after the index access to find those with CITY = 'Redmond'.
    Now, assume that we drop the index on the FULL_NAME column and create a composite index on CITY,FULL_NAME. The DBMS_XPLAN would show an access predicate applied to the index on the CITY and FULL_NAME columns and there would not be a filter predicate on the table access by index - in this case, we will not discard any rows once retrieved by the index access.
    Page 211 of "Troubleshooting Oracle Performance" also shows a clear explanation of access and filter predicates.
    Think of access predicates (on indexes at least) as throwing out rows before they are retrieved from disk (or memory), and filter predicates as throwing out rows after they are retrieved from disk (or memory).
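    As a hedged illustration of the last case above (object and column names are taken from the example query, not from a real schema):
        CREATE INDEX phone_numbers_city_name ON phone_numbers (city, full_name);

        SELECT phone
          FROM phone_numbers
         WHERE city = 'Redmond'
           AND full_name = 'user445775';
    With the composite index in place, DBMS_XPLAN output for this statement would list the whole WHERE clause under access() on the INDEX RANGE SCAN line, with no filter() entry on the table access, matching the description above.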
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to display data from the table based on the selection screen

    Hi there gurus,
    I'm currently developing a stock aging report.
    I have completed one program, but it does not allow me to execute it even though the syntax is correct.
    I would like to get some ideas from you on how to extract the data from the tables.
    My selection screen will have material number, date, and G/L account.
    The output data are mbew-matnr, makt-maktx, mbew-lbkum, mara-meins, mbew-salk3, and the consumption for the past 12 months with its values.
    Can you please guide me with this?
    Thank you.
    This is a very urgent program that I need to finish; please help me.
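    A minimal sketch of the kind of selection screen and single read described above; the tables and fields follow the output list in the question, and the date and G/L account selections are only collected here (how they feed the 12-month aging buckets depends on your consumption logic):
        TABLES: mbew, mkpf, ska1.

        SELECT-OPTIONS: s_matnr FOR mbew-matnr,   "material number
                        s_budat FOR mkpf-budat,   "posting date
                        s_saknr FOR ska1-saknr.   "G/L account

        TYPES: BEGIN OF ty_stock,
                 matnr TYPE mbew-matnr,
                 maktx TYPE makt-maktx,
                 lbkum TYPE mbew-lbkum,
                 meins TYPE mara-meins,
                 salk3 TYPE mbew-salk3,
               END OF ty_stock.
        DATA lt_stock TYPE STANDARD TABLE OF ty_stock.

        SELECT b~matnr t~maktx b~lbkum m~meins b~salk3
          FROM mbew AS b
          INNER JOIN mara AS m ON m~matnr = b~matnr
          INNER JOIN makt AS t ON t~matnr = b~matnr
          INTO TABLE lt_stock
          WHERE b~matnr IN s_matnr
            AND t~spras = sy-langu.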

    Here is the complete code that I have written:
    REPORT  ZSTK_AGING_REP2.
    *TABLES
    TABLES: mseg,
            mara,
            makt,
            SKAT,
            SKA1,
            MARV,
            T001,
            T030,
            T149D,
            AM07M,
            MCMSEG,
            T001K,
            T001W,
            T134M,
            vbak,
            mbew,
    mcon, rmcb0, marc, t024w,  mvke, v134w, t438a, propf, maprf, t000, t024e
    , tvko.
    DATA: BEGIN OF ta_material OCCURS 2,
           werks LIKE mard-werks,
           lgort LIKE mard-lgort,
           matnr LIKE mard-matnr,
           labst LIKE mard-labst,
           umlme LIKE mard-umlme,
           insme LIKE mard-insme,
           einme LIKE mard-einme,
           speme LIKE mard-speme,
           retme LIKE mard-retme,
           verpr LIKE mbew-verpr,
           maktx LIKE makt-maktx,
           meins LIKE mara-meins,
          bukrs LIKE t001-bukrs,
          konto LIKE t030-konts,
          butxt LIKE t001-butxt,
          txt50 LIKE skat-txt50,
          MABTR LIKE MCMSEG-DMBTR,
          SKBTR LIKE MCMSEG-DMBTR,
          WAERS LIKE T001-WAERS,
          WAER2 LIKE T001-WAERS,
          BWKEY LIKE MBEW-BWKEY,
          LBKUM LIKE MBEW-LBKUM,
    *  MEINS LIKE MARA-MEINS,            duplicate of the meins field declared above - commented out
             SALK3 LIKE MBEW-SALK3,
             WAERS1 LIKE T001-WAERS,
             BUKRS1 LIKE T001-BUKRS,
             KONTO1 LIKE T030-KONTS,
    *   lbkum LIKE mbew-lbkum,           duplicate of the LBKUM field declared above - commented out
              erdat LIKE vbak-erdat,
          END OF ta_material.
    DATA: BEGIN OF ta_mseg OCCURS 2,
            mblnr LIKE mseg-mblnr,
    *->Begin of KL02+ -
            mjahr like mseg-mjahr,
            zeile like mseg-zeile,
    *->End of KL02+ -
            meins LIKE mseg-meins,
            menge LIKE mseg-menge,
            werks LIKE mseg-werks,
            lgort LIKE mseg-lgort,
            matnr LIKE mseg-matnr,
            budat LIKE mkpf-budat,
            saknr LIKE SKA1-SAKNR,
          END OF ta_mseg.
    * single recs based on MATNR
    DATA: BEGIN OF i_matnr OCCURS 0,
            werks LIKE mard-werks,
            lgort LIKE mard-lgort,
            matnr LIKE mard-matnr,
            maktx LIKE makt-maktx,
            mblnr LIKE mseg-mblnr,
            verpr LIKE mbew-verpr,
            labst LIKE mard-labst,                    "Valuated stock with
    *unrestricted use
            umlme LIKE mard-umlme,                    "Stock in transfer
    *(from one storage location to another)
            insme LIKE mard-insme,                    "Stock in quality
    *inspection
            einme LIKE mard-einme,                    "Total Stock of All
    *Restricted Batches
            speme LIKE mard-speme,                    "Blocked stock
            retme LIKE mard-retme,                    "Blocked Stock Returns
            meins LIKE mara-meins,                    "base unit
          bukrs LIKE t001-bukrs,
          konto LIKE t030-konts,
          butxt LIKE t001-butxt,
          txt50 LIKE skat-txt50,
          MABTR LIKE MCMSEG-DMBTR,
          SKBTR LIKE MCMSEG-DMBTR,
          WAERS LIKE T001-WAERS,
          WAER2 LIKE T001-WAERS,
          BWKEY LIKE MBEW-BWKEY,
          LBKUM LIKE MBEW-LBKUM,
     *   MEINS LIKE MARA-MEINS,
             SALK3 LIKE MBEW-SALK3,
             WAERS1 LIKE T001-WAERS,
             BUKRS1 LIKE T001-BUKRS,
             KONTO1 LIKE T030-KONTS,
     *    lbkum LIKE mbew-lbkum,
           END OF i_matnr.
     * Recs based on MBLNR
    DATA: BEGIN OF i_mblnr OCCURS 0,
            mblnr LIKE mseg-mblnr,
            werks LIKE mseg-werks,
            lgort LIKE mseg-lgort,
            matnr LIKE mseg-matnr,
            menge LIKE mseg-menge,
            meint LIKE mseg-meins,
            budat LIKE mkpf-budat,
         bukrs LIKE t001-bukrs,
         konts LIKE t030-konts,
         butxt LIKE t001-butxt,
         txt50 LIKE skat-txt50,
         MABTR LIKE MCMSEG-DMBTR,
         SKBTR LIKE MCMSEG-DMBTR,
         WAERS LIKE T001-WAERS,
         WAER2 LIKE T001-WAERS,
         BWKEY LIKE MBEW-BWKEY,
          LBKUM LIKE MBEW-LBKUM,
         MEINS LIKE MARA-MEINS,
             SALK3 LIKE MBEW-SALK3,
             WAERS1 LIKE T001-WAERS,
             BUKRS1 LIKE T001-BUKRS,
             KONTO1 LIKE T030-KONTS,
           END OF i_mblnr.
    TYPES: BEGIN OF t_mat,
            lgort LIKE mseg-lgort,
            werks LIKE mseg-werks,
            matnr LIKE mseg-matnr,
            mblnr LIKE mseg-mblnr,
            maktx LIKE makt-maktx,
            meins LIKE mara-meins,
     * meng0 LIKE mbew-lbkum,
     * value0 LIKE mbew-salk3,
           meng0  LIKE mard-labst,                   "0 to 10 days
           value0 LIKE mseg-dmbtr,
           meng1  LIKE mard-labst,                   "11 to 30 days
           value1 LIKE mseg-dmbtr,
           meng2 LIKE mard-labst,                   "31 to 60 days
           value2 LIKE mseg-dmbtr,
           meng3 LIKE mard-labst,                   "61-90
           value3 LIKE mseg-dmbtr,
           meng4 LIKE mard-labst,                   "90 days onwards
           value4 LIKE mseg-dmbtr,
           END OF t_mat.
    DATA: i_mat2 TYPE t_mat OCCURS 0 WITH HEADER LINE.
    TYPES: BEGIN OF t_mat2,
            lgort LIKE mard-lgort,                    " storage location
            cnt0(5),
            cnt1(5),
            cnt2(5),
            cnt3(5),
            cnt4(5),
     *  meng0 LIKE mbew-lbkum,
     * value0 LIKE mbew-salk3,
           meng0  LIKE mard-labst,                   "0 to 10 days
           value0 LIKE mseg-dmbtr,
           meng1  LIKE mard-labst,                   "11 to 30 days
           value1 LIKE mseg-dmbtr,
           meng2 LIKE mard-labst,                   "31 to 60 days
           value2 LIKE mseg-dmbtr,
           meng3 LIKE mard-labst,                   "61-90
           value3 LIKE mseg-dmbtr,
           meng4 LIKE mard-labst,                   "90 days onwards
           value4 LIKE mseg-dmbtr,
           END OF t_mat2.
    DATA: i_matsum TYPE t_mat2 OCCURS 0 WITH HEADER LINE.
    DATA:  w_mengb TYPE mbew-lbkum,
           w_workqyt TYPE mbew-lbkum,
           w_index TYPE sy-tabix,
    *DATA: w_mengb TYPE mard-labst,     "tmp Balance qty
     *    w_workqty TYPE mard-labst,   "Work qty
     *    w_index TYPE sy-tabix,
          w_days(5)  TYPE n,           "duration difference (days)
          w_mths(5)  TYPE n,           "duration difference (mths)
          w_dat1 TYPE sy-datum,        "date
          w_dat2 TYPE sy-datum,        "today's date
          w_detl(1) TYPE c,
          w_summ(1) TYPE c,
          w_denom TYPE i,
          w_numer TYPE i,
          w_conv TYPE i,
          w_ttlcnt TYPE i,
          w_cnt TYPE i,
          v_topofpage(1),
     *   w_meng LIKE mard-labst,
           w_meng LIKE mbew-lbkum,
    *sapscript values
          w_title(20) TYPE c.              "Summary / Detail Report
    DATA: lv_peinh LIKE mbew-peinh,  "Price Unit
          lv_verpr LIKE mbew-verpr.  "Moving Price
     * Program comes here
    SELECTION-SCREEN BEGIN OF BLOCK blk1 WITH FRAME TITLE text-001.
    SELECT-OPTIONS: s_werks FOR mseg-werks,
                    s_lgort FOR mseg-lgort,
                    s_matnr FOR mara-matnr,
                    s_saknr FOR ska1-saknr,
                    S_ERDAT FOR VBAK-ERDAT.
    PARAMETERS: pck_detl RADIOBUTTON GROUP rep1,
                pck_summ RADIOBUTTON GROUP rep1,
                pck_dtsm RADIOBUTTON GROUP rep1 DEFAULT 'X'.
    SELECTION-SCREEN END OF BLOCK blk1.
     * Top of the page
      TOP-OF-PAGE.
      PERFORM  f_top_of_page.
      FORM f_top_of_page .
      IF v_topofpage = 'D'.
    *-->Report header for detail report
        WRITE:/2 'Printed By :', sy-uname,
               80 'Stock Aging Report - Detail',
               180 'Printed on:', sy-datum, sy-timlo,
               220 'Page:', sy-pagno.
        WRITE:/,/,/.
        WRITE:/2 'Storage',
               10 'Matl ID',
               22 'Matl Description',
               61 'UOM',
               78 '<--=<QTY ON THIS DATE -->',
              78 '<-- =< 10 days -->',
              112 '<--11 to 30 days -->',
              148 '<--31 to 60 days -->',
              181 '<--61 to 90 days -->',
              216 '<-- > 90 days -->',
               /2 'Location',
               76 'Qty',
               92 'Value'.
     *        112 'Qty',
     *        128 'Value',
     *        148 'Qty',
     *        164 'Value',
     *        181 'Qty',
     *        195 'Value',
     *        216 'Qty',
     *        231 'Value'.
        WRITE:/2 sy-uline(235).
      ELSE.
    *-->Report header for Summary report
        WRITE:/2 'Printed By :', sy-uname,
               80 'Stock Aging Report - Summary',
               180 'Printed on:', sy-datum, sy-timlo,
               220 'Page:', sy-pagno.
        WRITE:/,/,/.
        WRITE:/2 'Storage',
               10 'Matl ID',
               22 'Matl Description',
               61 'UOM',
               78 '<--< QTY ON THIS DATE -->',
              78 '<-- < 10 days -->',
              112 '<--11 to 30 days -->',
              148 '<--31 to 60 days -->',
              181 '<--61 to 90 days -->',
              216 '<-- > 90 days -->',
               /2 'Location',
               76 'Qty',
               92 'Value'.
     *        112 'Qty',
     *        128 'Value',
     *        148 'Qty',
     *        164 'Value',
     *        181 'Qty',
     *        195 'Value',
     *        216 'Qty',
     *        231 'Value'.
        WRITE:/2 sy-uline(235).
      ENDIF.
    ENDFORM.                    " f_top_of_page
     START-OF-SELECTION.
       PERFORM f_data_selection.
       PERFORM f_data_preparation.
       PERFORM f_display_data.
    FORM f_data_selection.
      SELECT a~werks
             a~lgort
             a~matnr
     *        a~saknr
     *        a~lbkum
     *        a~erdat
              a~labst
             a~umlme
             a~insme
             a~einme
             a~speme
             a~retme
     *      b~verpr    "this field no long been used
             c~maktx
             d~meins
             INTO CORRESPONDING FIELDS OF TABLE ta_material
             FROM mard AS a
          INNER JOIN makt AS c ON a~matnr = c~matnr
          INNER JOIN mara AS d ON a~matnr = d~matnr
             WHERE a~matnr IN s_matnr
               AND a~werks IN s_werks
               AND a~lgort IN s_lgort
     *       AND a~saknr IN s_saknr
     *       AND a~erdat IN s_erdat
               AND c~spras = 'EN'.
    *--> SC01 - End  of Insertion **
    *-->Select material documents
      SELECT a~mblnr
            a~mjahr
            a~zeile
            a~meins
            a~menge
            a~werks
            a~lgort
            a~matnr
            b~budat
            INTO CORRESPONDING FIELDS OF TABLE ta_mseg
      FROM mseg AS a INNER JOIN mkpf AS b
       ON a~mblnr = b~mblnr
       AND a~mjahr = b~mjahr
       FOR ALL ENTRIES IN ta_material
       WHERE a~matnr = ta_material-matnr
       AND a~werks = ta_material-werks
       AND a~lgort = ta_material-lgort
       AND a~lgort NE a~umlgo
      AND a~shkzg = 'S'
      AND a~smbln EQ space
      AND a~smblp EQ space.
    *--> SC03 - Start of Insertion **
     * If MBLNR exists in MSEG-SMBLN and that
     * record's SHKZG = 'H', remove it from the table,
     * because this particular record has already been reversed.
      LOOP AT ta_mseg.
        SELECT SINGLE *
                 FROM mseg
                WHERE smbln = ta_mseg-mblnr
    *->Begin of KL02+ -
                  and SMBLP = ta_mseg-zeile.
     *              AND shkzg = 'H'.  "return.           " KL02-
    *->End of KL02+ -
        IF sy-subrc = 0.
          DELETE ta_mseg.
        ENDIF.
      ENDLOOP.
    *--> SC03 - Enf   of Insertion **
    ENDFORM.                    " f_data_selection
    *IMPORTANT , NEED TO CHECK LATER
    FORM f_data_preparation.
    *-->Append data for report details
      LOOP AT ta_material.
        DATA: ta_msegtemp LIKE ta_mseg OCCURS 2 WITH HEADER LINE.
    *-->Loop at all material documents into a temp table
        REFRESH ta_msegtemp. CLEAR ta_msegtemp.
        LOOP AT ta_mseg WHERE matnr = ta_material-matnr AND
                              werks = ta_material-werks AND
                              lgort = ta_material-lgort.
          ta_msegtemp = ta_mseg. APPEND ta_msegtemp.
        ENDLOOP.
    *-->Add up all the stock for the material
        CLEAR w_mengb.
        w_mengb = ta_material-labst +
                        ta_material-umlme +
                        ta_material-insme +
                        ta_material-einme +
                        ta_material-speme +
                        ta_material-retme.
        IF w_mengb IS INITIAL.
          CONTINUE.
        ENDIF.
    *-->sort msegtemp by posting date
        SORT ta_msegtemp BY budat DESCENDING.
    *-->get the values from the material documents into the report output
        LOOP AT ta_msegtemp.
    *->Begin of KL02- -
     *     CALL FUNCTION 'HRCM_TIME_PERIOD_CALCULATE'
     *       EXPORTING
     *         begda               = ta_msegtemp-budat
     *         endda               = sy-datum
     *       IMPORTING
     *        NOYRS               =
     *         nomns               = w_mths
     *         nodys               = w_days
     *      EXCEPTIONS
     *        invalid_dates       = 1
     *        overflow            = 2
     *        OTHERS              = 3
    *->End of KL02- -
    *->Begin of KL02+ -
    *-->Get the days difference btw two dates
          clear w_days.
          w_days = sy-datum - ta_msegtemp-budat.
    *--> Include today's date into calculation
          w_days = w_days + 1.
    *->End of KL02+ -
     * Check base unit, do conversion
          IF ta_material-meins <> ta_msegtemp-meins.
            CALL FUNCTION 'MD_CONVERT_MATERIAL_UNIT'
              EXPORTING
                i_matnr              = ta_material-matnr
                i_in_me              = ta_msegtemp-meins
                i_out_me             = ta_material-meins
                i_menge              = ta_msegtemp-menge
              IMPORTING
                e_menge              = ta_msegtemp-menge
              EXCEPTIONS
                error_in_application = 1
                error                = 2
                OTHERS               = 3.
          ENDIF.
    *--> SC01 - Start of Insertion **
          SELECT SINGLE peinh
                        verpr
                   INTO (lv_peinh,
                         lv_verpr)
                   FROM mbew
                  WHERE matnr = ta_material-matnr
                    AND bwkey = ta_material-werks.
          IF sy-subrc = 0.
            ta_material-verpr = lv_verpr.
          ENDIF.
    *--> SC01 - End   of Insertion **
    *-->check whether the mseg value is LE than the stock value
          IF ta_msegtemp-menge LE w_mengb.
    *-->Days < 10 days
            IF w_days GE 1 AND w_days LE 10.
              i_mat2-meng0 = i_mat2-meng0 + ta_msegtemp-menge.
              IF NOT lv_peinh EQ 0.
                i_mat2-value0 = ( i_mat2-meng0 / lv_peinh ) *
    ta_material-verpr."+SC01
              ENDIF.
    **-->Days 11 - 30 days
           ELSEIF w_days >= 11 AND w_days =< 30.
             i_mat2-meng1 = i_mat2-meng1 + ta_msegtemp-menge.
             IF NOT lv_peinh EQ 0.
               i_mat2-value1 = ( i_mat2-meng1 / lv_peinh ) *
    *ta_material-verpr."+SC01
             ENDIF.
    **-->Days 31-60 days
           ELSEIF w_days >= 31 AND w_days =< 60.
             i_mat2-meng2 = i_mat2-meng2 + ta_msegtemp-menge.
             IF NOT lv_peinh EQ 0.
               i_mat2-value2 = ( i_mat2-meng2 / lv_peinh ) *
    *ta_material-verpr."+SC01
             ENDIF.
    **-->Days 61-90 days
           ELSEIF w_days >= 61 AND w_days =< 90.
             i_mat2-meng3 = i_mat2-meng3 + ta_msegtemp-menge.
             IF NOT lv_peinh EQ 0.
               i_mat2-value3 = ( i_mat2-meng3 / lv_peinh ) *
    *ta_material-verpr.
             ENDIF.
    **-->Days > 90 days
           ELSEIF w_days > 90.
             i_mat2-meng4 = i_mat2-meng4 + ta_msegtemp-menge.
             IF NOT lv_peinh EQ 0.
               i_mat2-value4 = ( i_mat2-meng4 / lv_peinh ) *
    *ta_material-verpr.
             ENDIF.
           ENDIF.
    *->End of KL002+
            w_mengb = w_mengb - ta_msegtemp-menge.
          ELSE.
            IF NOT w_mengb LE 0 .
                 IF NOT lv_peinh EQ 0.
    **->End of KL001+
                   i_mat2-value0 = ( i_mat2-meng0 / lv_peinh )
    *ta_material-verpr."+SC01
    **->Begin of KL001+
                 ENDIF.
               ELSEIF w_days GE 22.
                 i_mat2-meng3 = i_mat2-meng3 + w_mengb.
                 IF NOT lv_peinh EQ 0.
                   i_mat2-value3 = ( i_mat2-meng3 / lv_peinh )
    *ta_material-verpr.
                 ENDIF.
    **->End of KL001+
               ENDIF.
    **--> SC02 - End  of Insertioin **
           ENDIF.
    *->End of KL002-
    *->Begin of KL002+
    *--> < 10 days
                     IF w_days EQ 1 AND w_days LE 366.
              i_mat2-meng0 = i_mat2-meng0 + ta_msegtemp-menge.
              IF NOT lv_peinh EQ 0.
                i_mat2-value0 = ( i_mat2-meng0 / lv_peinh ) *
    ta_material-verpr."+SC01
              ENDIF.
             ELSEIF w_days >= 11 AND w_days =< 30.
               i_mat2-meng1 = i_mat2-meng1 + w_mengb.
               IF NOT lv_peinh EQ 0.
                 i_mat2-value1 = ( i_mat2-meng1 / lv_peinh ) *
    *ta_material-verpr.
               ENDIF.
    **--> 31 - 60 days
             ELSEIF w_days >= 31 AND w_days =< 60.
               i_mat2-meng2 = i_mat2-meng2 + w_mengb.
               IF NOT lv_peinh EQ 0.
                 i_mat2-value2 = ( i_mat2-meng2 / lv_peinh ) *
    *ta_material-verpr.
               ENDIF.
    **--> 61 - 90 days
             ELSEIF w_days >= 61 AND w_days =< 90.
               i_mat2-meng3 = i_mat2-meng3 + w_mengb.
               IF NOT lv_peinh EQ 0.
                 i_mat2-value3 = ( i_mat2-meng3 / lv_peinh ) *
    *ta_material-verpr.
               ENDIF.
    **--> > 90 days
             ELSEIF w_days > 90.
               i_mat2-meng4 = i_mat2-meng4 + w_mengb.
               IF NOT lv_peinh EQ 0.
                 i_mat2-value4 = ( i_mat2-meng4 / lv_peinh ) *
    *ta_material-verpr.
               ENDIF.
              ENDIF.
    *->End of KL002+
              w_mengb = 0.
            ENDIF. " check stock value NE zero
          ENDIF.   "check Mat doc amount is LE than the stock value
    ENDIF.
        ENDLOOP. " msegtemp
    *-->append i_mat2 values
        i_mat2-werks = ta_material-werks.
        i_mat2-lgort = ta_material-lgort.
        i_mat2-matnr = ta_material-matnr.
        i_mat2-maktx = ta_material-maktx.
        i_mat2-meins = ta_material-meins.
        APPEND i_mat2. CLEAR i_mat2.
      ENDLOOP. " ta_material
    *-->Append data for summary data
      DATA: i_lgort LIKE i_mat2 OCCURS 2 WITH HEADER LINE.
      i_lgort[] = i_mat2[].
      SORT i_lgort BY werks lgort.
      DELETE ADJACENT DUPLICATES FROM i_lgort COMPARING werks lgort.
      DATA: v_cnt0(5), v_cnt1(5), v_cnt2(5), v_cnt3(5), v_cnt4(5),
            v_value0 LIKE i_matsum-value0.
           v_value1 LIKE i_matsum-value1,
           v_value2 LIKE i_matsum-value2,
           v_value3 LIKE i_matsum-value3,
           v_value4 LIKE i_matsum-value4.
      LOOP AT i_lgort.
        CLEAR v_cnt0. CLEAR v_value0.
      CLEAR v_cnt1. CLEAR v_value1.
       CLEAR v_cnt2. CLEAR v_value2.
       CLEAR v_cnt3. CLEAR v_value3.
       CLEAR v_cnt4. CLEAR v_value4.
        LOOP AT i_mat2 WHERE lgort = i_lgort-lgort AND
                             werks = i_lgort-werks.
          IF NOT i_mat2-meng0 IS INITIAL.
            v_cnt0 = v_cnt0 + 1.
            v_value0 = v_value0 + i_mat2-value0.
          ENDIF.
         IF NOT i_mat2-meng1 IS INITIAL.
           v_cnt1 = v_cnt1 + 1.
           v_value1 = v_value1 + i_mat2-value1.
         ENDIF.
         IF NOT i_mat2-meng2 IS INITIAL.
           v_cnt2 = v_cnt2 + 1.
           v_value2 = v_value2 + i_mat2-value2.
         ENDIF.
         IF NOT i_mat2-meng3 IS INITIAL.
           v_cnt3 = v_cnt3 + 1.
           v_value3 = v_value3 + i_mat2-value3.
         ENDIF.
         IF NOT i_mat2-meng4 IS INITIAL.
           v_cnt4 = v_cnt4 + 1.
           v_value4 = v_value4 + i_mat2-value4.
         ENDIF.
        ENDLOOP.
        CLEAR i_matsum.
        i_matsum-lgort = i_mat2-lgort.
        IF v_cnt0 NE space.
          i_matsum-cnt0 = v_cnt0.
          i_matsum-value0 = v_value0.
          APPEND i_matsum.
        ENDIF.
       CLEAR i_matsum.
       i_matsum-lgort = i_mat2-lgort.
       IF v_cnt1 NE space.
         i_matsum-cnt1 = v_cnt1.
         i_matsum-value1 = v_value1.
         APPEND i_matsum.
       ENDIF.
       CLEAR i_matsum.
       i_matsum-lgort = i_mat2-lgort.
       IF v_cnt2 NE space.
         i_matsum-cnt2 = v_cnt2.
         i_matsum-value2 = v_value2.
         APPEND i_matsum.
       ENDIF.
       CLEAR i_matsum.
       i_matsum-lgort = i_mat2-lgort.
       IF v_cnt3 NE space.
         i_matsum-cnt3 = v_cnt3.
         i_matsum-value3 = v_value3.
         APPEND i_matsum.
       ENDIF.
       CLEAR i_matsum.
       i_matsum-lgort = i_mat2-lgort.
       IF v_cnt4 NE space.
         i_matsum-cnt4 = v_cnt4.
         i_matsum-value4 = v_value4.
         APPEND i_matsum.
       ENDIF.
      ENDLOOP.
    ENDFORM.                    "f_data_preparation
    *IMPORTANT , NEED TO CHECK LATER
    FORM f_display_data .
      IF pck_dtsm = 'X'.
    *-->Display detail
        v_topofpage = 'D'.
        PERFORM f_display_detail.
    *-->display summary
        v_topofpage = 'S'.
        NEW-PAGE.
        PERFORM f_display_summary.
      ELSEIF pck_detl = 'X'.
    *-->Display detail
        v_topofpage = 'D'.
        PERFORM f_display_detail.
      ELSEIF pck_summ = 'X'.
    *-->display summary
        v_topofpage = 'S'.
        PERFORM f_display_summary.
      ENDIF.
    ENDFORM.                    " f_display_data
    FORM f_display_detail .
      v_topofpage = 'D'.
      SORT i_mat2 BY werks lgort.
    *DATA: v_count(5),
    DATA: v_count,
            v_va1 LIKE i_mat2-value0.
     *       v_va2 LIKE i_mat2-value0,
     *       v_va3 LIKE i_mat2-value0,
     *       v_va4 LIKE i_mat2-value0,
     *       v_va5 LIKE i_mat2-value0.
      LOOP AT i_mat2.
        WRITE:/2 i_mat2-lgort,
              10 i_mat2-matnr,
              22 i_mat2-maktx(38),
              61 i_mat2-meins,
              64 i_mat2-meng0,
              82 i_mat2-value0.
     *         100 i_mat2-meng1,
     *         118 i_mat2-value1,
     *         136 i_mat2-meng2,
     *         154 i_mat2-value2,
     *         169 i_mat2-meng3,
     *         186 i_mat2-value3,
     *         204 i_mat2-meng4,
     *         222 i_mat2-value4.
    *-->Set counter and add up all values by lgort
        v_count = v_count + 1.
         v_va1 = v_va1 + i_mat2-value0.
     *   v_va2 = v_va2 + i_mat2-value1.
     *   v_va3 = v_va3 + i_mat2-value2.
     *   v_va4 = v_va4 + i_mat2-value3.
     *   v_va5 = v_va5 + i_mat2-value4.
        AT END OF lgort.
          WRITE:/10 'Cnt:',
                 15 v_count,
                 55 'Subtotal',
                 82 v_va1.
     *            118 v_va2,
     *            154 v_va3,
     *            186 v_va4,
     *            222 v_va5.
          CLEAR v_count. CLEAR v_va1.
     *   CLEAR v_va2. CLEAR v_va3. CLEAR v_va4. CLEAR v_va5.
        ENDAT.
      ENDLOOP.
    ENDFORM.                    " f_display_detail
    FORM f_display_summary .
      v_topofpage = 'S'.
      LOOP AT i_matsum.
        IF NOT i_matsum-cnt0 IS INITIAL.
          WRITE:/2  i_matsum-lgort,
                 10 'Cnt:',15 i_matsum-cnt0,
                 82 i_matsum-value0.
        ENDIF.
       IF NOT i_matsum-cnt1 IS INITIAL.
         WRITE:/2  i_matsum-lgort,
                10 'Cnt:',15 i_matsum-cnt1,
                118 i_matsum-value1.
       ENDIF.
       IF NOT i_matsum-cnt2 IS INITIAL.
         WRITE:/2  i_matsum-lgort,
                10 'Cnt:',15 i_matsum-cnt2,
                154 i_matsum-value2.
       ENDIF.
       IF NOT i_matsum-cnt3 IS INITIAL.
         WRITE:/2  i_matsum-lgort,
                10 'Cnt:', 15 i_matsum-cnt3,
                186 i_matsum-value3.
       ENDIF.
       IF NOT i_matsum-cnt4 IS INITIAL.
         WRITE:/2  i_matsum-lgort,
                10 'Cnt:', 15 i_matsum-cnt4,
                222 i_matsum-value4.
       ENDIF.
      ENDLOOP.
    ENDFORM.                    " f_display_summary

  • How to remove unused objects from the webcatalog in OBIEE 11g

    Hi,
    I want to delete unused objects from the OBIEE 11g catalog. I know this works fine in OBIEE 10g (i.e. we can do it via Manage Catalog and then delete the unused objects). Is there any way to do it automatically, like the RPD utility for removing unused objects from the Physical layer of the RPD?
    FYI: I don't want to delete them manually; I need something like a button/link to find unused objects (reports, filters, folders, etc.) in my OBIEE 11g catalog.
    Thanks
    Deva
    Edited by: Devarasu on Nov 29, 2011 12:06 PM
    Edited by: Devarasu on Nov 29, 2011 3:15 PM

    Hi,
    I checked with the Oracle Support team and confirmed the points below:
    --> The request has been logged against the current product and is treated as a bug/enhancement; it may be resolved in a future release.
    --> Currently there isn't any automatic method to remove unused objects such as reports, filters and folders from the catalog.
    It is treated as a bug:
    Bug 13440888 - AUTOMATICALLY REMOVE OF UNUSED CATALOG OBJECTS FROM WEBCATALOG
    FYI:
    SR 3-4984291131: How to remove unused objects from the webcatalog in obiee11g
    Thanks
    Deva

  • Incomplete Data on report (report does not show all records from the table)

    Hello,
    I have a problem with CR XI. I'm running the same report on the same data with a simple select of all records from the table (no sorting, no grouping, no filters).
    Sometimes the report shows all records and sometimes it does not; mostly it does not show all of them. When the report is incomplete, it sometimes shows a different number of records each time.
    I'm using the CR XI runtime on Windows Server 2003.
    Any help appreciated
    Thanks!

    Sorry Alexander. I missed the last line where you clearly say it is runtime.
    A few more questions:
    - Which CR SDK are you using? The Report Designer Component or the CR assemblies for .NET?
    - What is the exact version of CR you are using (from help | about)
    - What CR Service Pack are you on?
    And a troubleshooting suggestion:
    Since this works on some machines, it would be a good idea to compare all the runtime files (both CR and non-CR) being loaded on a working machine and a non-working machine.
    Download the modules utility from here:
    https://smpdl.sap-ag.de/~sapidp/012002523100006252802008E/modules.zip
    and follow the steps as described in this thread:
    https://forums.sdn.sap.com/click.jspa?searchID=18424085&messageID=6186767
    The download also includes instructions on how to use modules.
    Ludek

  • How to remove available downloads from the list

    How do I remove available downloads from the list without them resuming when I open iTunes or check for available downloads?

    There is not a way to remove them from the list.  Just let them download, and then delete them from your library when they are done.

  • How to display the values from the table in the screen

    Hi,
    I have created a screen where I enter values for the fields threshold amount and description, and when I press the Update button it updates the new values by overwriting the existing ones.
    Now I have a requirement to create a Show button which will display the existing values from the table. There will always be only one entry in this table.
    Can someone please give me an idea of how to do this, or some sample code? Thanks in advance.
    regards
    paveeeeee

    Define a function code 'SHOW' for your button. In your PAI module, when you check for various sy-ucomms, check for 'SHOW' also.
    Your code will be like this:
    Case sy-ucomm.
      when 'SHOW'.
         perform show_details.
    endcase.
    In the perform, you can fetch the data from the table and put it in global variables. In the PBO, move the data from the global variables to the screen fields so that they get displayed on the screen.
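    A rough, untested sketch of how those pieces could fit together (the table ZTHRESHOLD, its fields and the screen field names below are only placeholders for your actual objects):
    * Global data in the TOP include (ZTHRESHOLD and its fields are assumed names)
    DATA: gv_amount TYPE zthreshold-amount,   " work variables filled in the subroutine
          gv_desc   TYPE zthreshold-descr.
    DATA: scr_amount TYPE zthreshold-amount,  " dynpro fields of the same names
          scr_desc   TYPE zthreshold-descr.
    * PAI of the screen
    MODULE user_command_0100 INPUT.
      CASE sy-ucomm.
        WHEN 'SHOW'.
          PERFORM show_details.
      ENDCASE.
    ENDMODULE.
    * Fetch the single existing row into the global variables
    FORM show_details.
      SELECT SINGLE amount descr
        INTO (gv_amount, gv_desc)
        FROM zthreshold.
    ENDFORM.
    * PBO of the screen: move the globals to the screen fields
    MODULE status_0100 OUTPUT.
      scr_amount = gv_amount.
      scr_desc   = gv_desc.
    ENDMODULE.
    This just spells out the outline above: PAI routes 'SHOW' to the subroutine, the subroutine fills the globals, and PBO copies them into the screen fields.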
    Hope this helps. Reward points for useful answers.
    Regards
    Nithya
