Stats of Data Collector from Profiler Trace

Hi,
I managed to successfully create a data collector. The collection set/item definitions are generated from the SQL Server Profiler tool. 
The trace definition is simple and is as follows:
DECLARE @trace_definition xml;
SELECT @trace_definition = convert(xml, 
N'<ns:SqlTraceCollector xmlns:ns="DataCollectorType" use_default="0">
<Events>
  <EventType name="Stored Procedures">
    <Event id="10" name="RPC:Completed" columnslist="1,8,11,12,13,35" />
  </EventType>
  <EventType name="TSQL">
    <Event id="12" name="SQL:BatchCompleted" columnslist="1,11,8,12,13,35" />
  </EventType>
</Events>
<Filters>
  <Filter columnid="8" columnname="HostName" logical_operator="AND" comparison_operator="NOTLIKE" value="XXXX" />
  <Filter columnid="11" columnname="LoginName" logical_operator="AND" comparison_operator="LIKE" value="YYYY" />
  <Filter columnid="35" columnname="DatabaseName" logical_operator="AND" comparison_operator="LIKE" value="ZZZZ" />
</Filters>
</ns:SqlTraceCollector>')
The data collection is scheduled and started. There are no errors shown in the logs.
I created this collection to store the execution time of various SQL statements executed by a particular user connecting from a particular machine to a particular database. I am struggling to find the SQL query against the management data warehouse to see the stats. Could anyone let me know which table/view can be used to find the stats? I can't find the data in the snapshots.performance_counter_values table.
Thanks,
Sree

Check this link; it has a few more details that may help you:
http://blogs.msdn.com/b/sqlprogrammability/archive/2009/01/29/using-sql-server-2008-management-data-warehouse-for-database-monitoring-in-my-application.aspx
Raju Rasagounder Sr MSSQL DBA
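
For reference, the custom SQL Trace collector type writes its rows to the trace tables of the warehouse rather than to snapshots.performance_counter_values. A minimal starting sketch, assuming the generic trace schema (snapshots.trace_info holds one row per uploaded trace, snapshots.trace_data holds the events; the data columns mirror the Profiler column names, so verify them in your MDW before relying on this):

-- Sketch only: run in your management data warehouse database.
-- Column names below are assumptions mirroring the Profiler columns.
SELECT td.LoginName,
       td.HostName,
       td.DatabaseName,
       td.TextData,
       td.Duration / 1000 AS duration_ms,  -- trace Duration is reported in microseconds
       td.StartTime
FROM snapshots.trace_data AS td
WHERE td.LoginName LIKE 'YYYY'
  AND td.HostName NOT LIKE 'XXXX'
  AND td.DatabaseName LIKE 'ZZZZ'
ORDER BY td.StartTime;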

Similar Messages

  • Is there a way I can restore my profile from a backup that I took from a folder (e.g. \Local Settings\Application Data\Mozilla\Firefox\Profiles\)?

    Before re-installing Firefox 19, I mistakenly took a backup of my profile from the folder C:\Documents and Settings\garan14\Local Settings\Application Data\Mozilla\Firefox\Profiles and not from C:\Documents and Settings\garan14\Application Data\Mozilla\Firefox\Profiles.
    While installing, I selected the "Delete the old profiles" option, and I don't see any old profile folder under C:\Documents and Settings\garan14\Application Data\Mozilla\Firefox\Profiles.
    However, I have a lot of important data saved under that profile which has now been deleted.
    Is there any way I can still restore the data from the old profile? (P.S.: As mentioned above, I have a backup of the profile folder which resides under the Local Settings folder.)
    I am in desperate need of help. Please reply as soon as possible.

    I have a problem: I deleted my Firefox profile and I cannot use Mozilla right now. Can you help me? Thank you.

  • Info about the log traces in Activity Data Collector

    Hi,
    I have configured the activity data collector by setting the following properties in ADC and restarted the service
    Activate Data Collection :true
    Additional File Formats: --(not set anythng left blank)
    Base File Name: Portal Activity
    Directory Name: portalActivityTraces
    File Encoding : UTF-8
    Hour in the day to close all files, in GMT : 0
    Main File Format : %Orfo.t(dd-MM-yyyy HH:mm:ss,GMT+5.5)%%Stab%%Orfo.ct%%Stab%%Orfo.in%%Stab%%Orfo.un%%Stab%%Orfo.bt%%Stab%%Orfo.pu%%Stab%%Orfo.rh(referer)%%Snl%
    Max Buffer size :500KB
    Max File Size : 10240 KB
    The files are created in the folder called "Portal Activity Traces", but the issue is with the names of the log files getting created.
    Since I have not set any additional file formats, the log files use the main file format.
    The file names look like this:
    portalActivity_29893750_1254305061537.txt.open
    What does the timestamp "1254305061537" in this name refer to? Please explain.
    Some files are of type text document and some are of type "open". What does this mean?
    If I set "Hour in the day to close all files" to 0, when does it write the log files? Is it at 12? After that, will it create a new file?
    And in the main file format I have set the time to GMT+5.5 (since IST is GMT+5.5), but I am not getting the proper time format.
    Please help me out.
    Thanks in Advance
    Regards,
    Sowmya
    Edited by: Sowmya B on Sep 30, 2009 1:43 PM

    Hi Prasanna,
    Thanks for the reply.
    Actually there are about 3 to 4 files which are of type .open, and they were created long back. Are those files still getting populated? If we set the "hour to close all files" to 0, it should close the open files and create new (fresh) files for the next day, right?
    According to the documentation in the help portal, "Files may be closed before reaching this limit (Max File Size), as all files are closed at the hour specified in the Hour in the day to close all files property."
    Then how come some old files are still getting populated?
    Midnight means what time in particular? Please explain.
    About the timestamp: is it the time the file was created, in some format?
    If anyone knows, please explain the format of the timestamp.
    Edited by: Sowmya B on Sep 30, 2009 2:13 PM

  • How to create an event/incident from data collector on agent?

    Hi,
    I would like to be able to create an event or incident for a target type/name, from its data collector script on the Agent.
    On the OM Server I would be able to use emcli publish_event but emcli is not available on the Agent of course.
    Thanks,
    Ed

    Andy;
    If you are using LabVIEW to do that, you can take advantage of the shipping example named "Measure Buffered Period.vi". You can set that counter's source to be the 100k internal time base, and route the events line to that counter's gate pin. At each event, the count will be stored in the buffer, and you can keep reading those numbers. As you know the frequency of the time base, the elapsed time will be the count you read times 1/100k. As you probably don't need much accuracy on the 10-minute counter, you can do that operation in software, and stop the while loop after that time has been reached.
    Hope this helps.
    Filipe
    Applications Engineer
    National Instruments

  • Exception thrown while trying to acquire data from Profile Data Control

    I created a new WebCenter Portal Application (with JDeveloper 11.1.1.4).
    Then tried to acquire data from Profile Data Control through getProfile().WCUserProfile to create a read-only form with profile data.
    After deploying (to internal WLS) and running I got a popup saying:
    *Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.1.2.v20101104-r8475): org.eclipse.persistence.exceptions.ValidationException Exception*
    Description: Cannot acquire data source [java:comp/env/jdbc/WebCenterDS].
    Internal Exception: javax.naming.LinkException: [Root exception is javax.naming.NameNotFoundException: While trying to lookup 'jdbc.webcenter.CustomPortalDS' didn't find subcontext 'webcenter'. Resolved 'jdbc'; remaining name 'webcenter/CustomPortalDS']; Link Remaining Name: 'jdbc/webcenter/CustomPortalDS'
    The question is how can I create such a data source to acquire data needed?

    Hi, there are two ways; in both you need to use the <PRE>_WEBCENTER user that RCU creates.
    1 - You can go into your console and create the data source, as Yannick said. You can use your production database if you want more realistic tests, or you can take a development database and run RCU. You can copy the configuration from the WLS server where WebCenter is installed to make this easy.
    2 - In development mode you can just create a database connection named "WebCenter", as described here:
    http://download.oracle.com/docs/cd/E17904_01/webcenter.1111/e10148/jpsdg_people.htm#BABICGCH

  • Creating Report from Activity Data Collector

    Hi all,
    I have used the Activity Data Collector to get the portal users' activities. All the information is in text files only. I want to create a formatted report. How can I achieve this?
    Help me in this regard.
    Thanks & Regards,
    Hemalatha J

    Hi Hema,
    You can try out the different options in this blog. It contains links to information and an infosession by John Polus.
    Who's Doing What in my Portal? : Usage Analytics and the SAP Netweaver Portal - Part I
    regards,
    Shantanu

  • Loading info from activity data collector into SM BI client

    Experts:
    We enabled the activity data collector in our EP, and want to load the collected data into the SM BI client.
    We are not sure how to do the load.
    Could you provide some links here?
    Thanks!
    PS: EP is monitored by the said SM7.0

    Hi Christy,
    You may want to read about "No-Limits Portal Statistics".
    It provides a WebBug system to write custom data via ADC (Activity Data Collector), and a JavaScript API to collect custom characteristics and integrate with internal tools like BI or SM.
    Also read,
    http://wiki.sdn.sap.com/wiki/display/EP/No-LimitsPortalStatistics
    Hope this helps.
    Regards,
    Varun

  • How to select the data efficiently from the table

    hi everyone,
    I need some help in selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts (the groups are predefined here, each containing a set of G/L account numbers).
    If I select separately for each group it will be a performance issue. In order to avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select the data once and keep it in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Performance Notes
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
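    For illustration, a generic SQL sketch of both clauses (field names as in the FAGLFLEXA table from the question, selection values hypothetical; verify both in your system) that keeps the filtering on the database side:
    -- Only the groups surviving WHERE and HAVING are transferred back.
    SELECT racct, SUM(hsl) AS total_amount
    FROM faglflexa
    WHERE rldnr = '0L'
      AND ryear = '2009'
    GROUP BY racct
    HAVING SUM(hsl) <> 0;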
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
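    In generic SQL terms, ABAP's UP TO n ROWS corresponds to a row-limiting clause; the exact syntax (TOP, LIMIT, or FETCH FIRST) varies by database, so treat this as a sketch:
    -- At most 100 distinct accounts are transferred instead of the full set.
    SELECT DISTINCT racct
    FROM faglflexa
    WHERE ryear = '2009'
    FETCH FIRST 100 ROWS ONLY;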
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
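    A sketch of pushing the calculation into the database (the account number is hypothetical), so that one result row comes back instead of every table line:
    SELECT COUNT(*) AS line_count,
           SUM(hsl) AS total,
           AVG(hsl) AS average,
           MIN(hsl) AS smallest,
           MAX(hsl) AS largest
    FROM faglflexa
    WHERE racct = '0000113100';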
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
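    As a generic SQL sketch (table and column names are hypothetical), changing only the required column of the relevant lines:
    -- Only the named column of the matching rows is written;
    -- no work area and no prior SELECT are needed.
    UPDATE order_header
    SET status = 'CLOSED'
    WHERE order_date < '2009-01-01';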
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple select loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
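    A join sketch with an explicit column list (table names are hypothetical), which keeps the redundant 1:N transfer as small as possible:
    -- List only the needed columns; the header fields are repeated
    -- for every matching item row, so keep the list short.
    SELECT h.order_id, h.customer, i.item_no, i.amount
    FROM order_header AS h
    INNER JOIN order_item AS i
      ON i.order_id = h.order_id
    WHERE h.order_date >= '2009-01-01';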
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
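    For example (hypothetical tables), the subquery below is evaluated entirely in the database and none of its rows reach the application server:
    -- Headers that have at least one item above the threshold.
    SELECT order_id, customer
    FROM order_header AS h
    WHERE EXISTS (SELECT 1
                  FROM order_item AS i
                  WHERE i.order_id = h.order_id
                    AND i.amount > 1000);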
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
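    In generic SQL terms (in ABAP you would define the index in the ABAP Dictionary rather than issue DDL directly; the names here are hypothetical):
    -- Secondary index on the selective column; because it also carries
    -- AMOUNT, the query below can be answered from the index alone.
    CREATE INDEX order_item_i1 ON order_item (order_id, amount);
    SELECT order_id, amount
    FROM order_item
    WHERE order_id = '4711';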
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
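    A sketch of inverting a negative condition (hypothetical column; assumes STATUS has a known, closed set of values):
    -- Instead of:  WHERE status <> 'OPEN'
    -- write the positive equivalent, which an index search can support:
    SELECT order_id
    FROM order_header
    WHERE status IN ('CLOSED', 'CANCELLED');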
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
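    For example (hypothetical account numbers), an OR chain on an indexed column can be rewritten as an IN list:
    -- Instead of:  WHERE racct = '...' OR racct = '...' OR racct = '...'
    SELECT racct, hsl
    FROM faglflexa
    WHERE racct IN ('0000113100', '0000113101', '0000113102');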
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundle the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application. Work processes are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program. This needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • Reg: Activity Data Collector

    Hi All,
    Greetings!!!
    Happy New Year!!!
    Coming to the query: I need to implement the Activity Data Collector report, but the configuration needs an application named "com.sap.portal.activitydatacollector", which is not available in our Enterprise Portal 6.0. Kindly tell me which support package this comes under, or how to resolve this issue.
    Thanks
    Piiyush

    Hello,
    If you need a graphical front-end or multiple reporting capabilities, you would need to build something yourself as described at the link above, which may not be convenient, nor offer all the usage stats you are looking for. The ADC simply does not offer a graphical front-end and does not come with integrated reports.
    Otherwise, a far easier solution is to switch to a 3rd party application like the Click Stream SAP Portal analyzing solution from Sweetlets (http://www.sweetlets.com/click_stream_overview.html or http://ecohub.sdn.sap.com/irj/ecohub/solutions/clickstream). This plug 'n' play application offers various SAP Portal usage reports complete with a visually appealing Flex frontend.
    Good luck.
    Burt

  • Activating Activity Data Collector

    Hi,
    I have configured the activity Data Collector according to the below documentation.
    http://help.sap.com/saphelp_nw70/helpdata/EN/47/8ac2e51b141e1ee10000000a42189d/frameset.htm
    But it appears that the ADC only starts collecting logs if the Portal Activity Report service is also turned on. I thought ADC was supposed to be a competing activity collection technology, and I don't understand this dependency.
    Am I right in this or am I missing something else?
    Thanks
    Raj Balakrishnan

    Hello,
    If you need a graphical front-end or multiple reporting capabilities, you would need to build something yourself as described at the link above, which may not be convenient, nor offer all the usage stats you are looking for. The ADC simply does not offer a graphical front-end and does not come with integrated reports.
    Otherwise, a far easier solution is to switch to a 3rd party application like the Click Stream SAP Portal analyzing solution from Sweetlets (http://www.sweetlets.com/click_stream_overview.html or http://ecohub.sdn.sap.com/irj/ecohub/solutions/clickstream). This plug 'n' play application offers various SAP Portal usage reports complete with a visually appealing Flex frontend.
    Good luck.
    Burt

  • Data read from undo

    hi,
    How do I find the data read from the undo tablespace?
    How do I find the data read from a datafile?
    Does any select statement read from the redo log or not? I think no; is that correct?
    thanks
    with regards

    user3266490 wrote:
    hi,
    thanks for the reply.
    What does it mean by "how to find data read"?
    That means: how to find whether a select statement reads from the data buffer cache or from the data file. Even if it is read from the data file, it is first kept in the buffer and then returned to the user.
    Yes, data is always going to be read from the buffer cache only, even if a physical read is involved too.
    In case you want to see whether there was a PIO involved or a logical IO (from the cache), you can check by looking at the stats for the query, like below:
    SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 21 11:59:39 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area  167772160 bytes
    Fixed Size                  1247900 bytes
    Variable Size              75498852 bytes
    Database Buffers           88080384 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    Database opened.
    SQL> conn aman/aman
    Connected.
    SQL> set autot trace stat
    SQL> select * from scott.emp;
    14 rows selected.
    Statistics
            455  recursive calls
              0  db block gets
             83  consistent gets
             10  physical reads                      <----------------------- This went to disk first to read the data
              0  redo size
           1415  bytes sent via SQL*Net to client
            381  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
             14  rows processed
    SQL> select * from scott.emp;
    14 rows selected.
    Statistics
              0  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads                    <---------------No PIO, which means it was accessed truly from the cache and didn't involve disk IO at all.
              0  redo size
           1415  bytes sent via SQL*Net to client
            381  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             14  rows processed
    SQL>
    HTH
    Aman....
    Edited by: Aman.... on May 21, 2009 11:58 AM

  • S_A.SYSTEM Profile - Trace

    Hi Experts ,
    We have an XXX user ID (Communication Data) for XXX monitoring software, and it has the S_A.SYSTEM profile in all 000 clients.
    I found this XXX user tracing all systems (all systems, cross-client, but this ID exists only in client 000) every day. I don't know more details about this ID. How can I find what this ID is performing with a trace in the system?
    How do I find all trace records in a system?
    Thanks
    R

    Hi,
    Assuming the trace was enabled in SM19, you can see the trace results in SM20. From these trace results you can find all transaction codes / reports used by this user. It is always better to configure a dedicated role for this user rather than assigning the S_A.SYSTEM profile.
    You can also check with the client-side responsible managers and documentation, such as manuals, which may describe more about this user.
    Regards,
    Gowrinadh

  • Exchange 2013 Performance Monitor Data Collector Sets

    I've noticed that the Exchange 2013 install created two data collector sets within Performance Monitor:
    ExchangeDiagnosticsDailyPerformanceLog
    ExchangeDiagnosticsPerformanceLog
    These sets are creating daily log files saved in C:\Program Files\Microsoft\Exchange Server\V15\Logging\Diagnostics\DailyPerformanceLogs that are about 500MB in size, which are filling up our system volume.
    I've tried stopping them, but they get restarted automatically. I also tried deleting them, but they get recreated automatically. Is there a way to disable these? We have a 3rd party tool that we use for performance monitoring and would like to avoid having to delete these log files on a regular basis to keep our system volume from filling up.
    As a workaround, I've changed the path where the log files get saved to a non-existent drive letter, but would still like to disable the Data Collector Sets completely.
    Thanks,
    -Cory

    Do note,
    These performance counters only cover one week and not more, so it will be about 3-5 GB. If you can't spare this on your C: disk, you have not implemented any best-practice Exchange design. Especially with today's JBOD SATA designs, your C: disk will be 1 TB+. Review your designs to match MS advice.
    MCTS-MCITP exchange 2010 | MCTS-MCITP Exchange: 2007 | MCSA Messaging: 2003 | MCP windows 2000
    Hello Martin,
    Many of us are implementing Exchange in a virtual environment, where disk is at a premium. Spending money on a 1 TB system drive is not a reasonable expectation. Even if that is best practice, there should be a way to manage the location and size of these log files.
    The inability to manage the location of log/performance data, for centralization or other reasons, is one more example of the disconnect between MS and their customers' use cases in regards to 2013.
    These are the requirements for 2013 from MS:
    At least 30 GB on the drive on which you install Exchange
    An additional 500 MB of available disk space for each Unified Messaging (UM) language pack that you plan to install
    200 MB of available disk space on the system drive
    A hard disk that stores the message queue database with at least 500 MB of free space.
    So your statement is invalid, as 30 GB is not even close to enough to deal with the amount of log and performance data that we cannot modify in any way.

  • RAT - SQL Tuning Set from 9i trace in wrong order

    Hi together
    I want to test the impact of some statements on 9.2.0.8 in 11g with Real Application Testing, but the order in the tuning set seems to be wrong (not the traced order). So I made a small test script that creates a table, inserts data with a loop, and selects all rows.
    Steps :
    * 9i enable trace level 4
    * 9i run the script
    * 9i stop the trace
    * transfer to 11g
    * make an SQL tuning set
    * running in the Performance Analyzer Workflow
    In the first run, Oracle builds performance data from the trace files. In the second run I want to execute the statements from the tuning set in the 11g database. The traced schema from 9i exists.
    In the report after the second run I see the failure that the insert and select statements are not allowed. When I take a look in the SQL Tuning Set, I see that the select and insert are replayed before the create table! The same order appears in the view DBA_SQLSET_STATEMENTS.
    Is there a way to get the correct order in the SQL Tuning Set, matching the statement order (create before insert/select)? Or can Oracle not handle DDL statements in the trace?
    Regards Martin
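
    For reference, a minimal sketch for inspecting the captured statements through the DBA_SQLSET_STATEMENTS view mentioned above (the tuning set name 'MY_9I_STS' is hypothetical). Note that a tuning set is essentially an unordered collection of statements, which may be why the replay order differs from the traced order:
    -- Sketch only: substitute your own tuning set name.
    SELECT sql_id, sql_text
    FROM dba_sqlset_statements
    WHERE sqlset_name = 'MY_9I_STS';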


  • Error while scheduling data collector: Error adding routine 'SAPTOOLS.DBH_FWK_CLEANUP' to task scheduler

    Error Detail
    Exception CX_DBA_ADBC in program RAGS_SISE_ACTIVITY_JOB line 0
    Kernel Error ID:
    WP ID: 23
    WP PID: 28977
    SYSID: CR5
    SY-SUBRC: 0
    SQL statement:
    Database: CR5
    caused by
    Exception CX_SQL_EXCEPTION in class CL_SQL_STATEMENT
    Kernel Error ID:
    DB Error: Yes
    SQL Code: 444-
    SQL Message: SQL0444N Routine "*TASK_ADD" (specific name "SQL140704080729140") is implemented with code in library or path "...ib/function/SYSPROC.ADMIN_TASK_ADD", function "*" which cannot be accessed. Reason code: "4". SQLSTATE=42724 row=1
    DB Object Exists: No
    Duplicated Key: No
    Internal Error: 1
    Invalid Cursor: No
    Unknown Connection: No
    Connection Closed: No
    System Detail:
    Solman 7.1
    ST    710    0010    SAPKITL710    SAP Solution Manager Tool
    SAP_BASIS    702    0013    SAPKB70213    SAP Basis Component
    Managed system:
    SAP CRM ABAP 7.0
    SAP_BASIS    701    0005    SAPKB70105    SAP Basis Component
    SAP_ABA    701    0005    SAPKA70105    Cross-Application Component
    PI_BASIS    701    0005    SAPK-70105INPIBASIS    Basis Plug-In
    ST-PI    2008_1_700    0008    SAPKITLRD8    SAP Solution Tools Plug-In
    CRMLOY    700    0005    SAPK-70005INCRMLOY    CRM Loyalty Management 700
    SAP_BS_FND    701    0005    SAPK-70105INSAPBSFND    SAP Business Suite Foundation
    SAP_BW    701    0005    SAPKW70105    SAP Business Warehouse
    LCAPPS    2005_700    0007    SAPKIBHD07    LC Applications (LCAPPS) 2005_700
    Database DB2
    db2level
    DB21085I  Instance "db2cr5" uses "64" bits and DB2 code release "SQL09016" with
    level identifier "01070107".
    Informational tokens are "DB2 v9.1.0.6", "s081007", "U817474", and Fix Pack
    "6".
    Activity detail:
    We are performing the managed system configuration for CRM in Solution Manager using the solman_setup transaction. While performing the Database Extractor Setup, at step 8 we observed the above error.
    Actions taken at our end:
    1. Cleaned the LMDB and restarted the configuration - no luck
    2. Upgraded the host agent on the CRM system - no luck
    3. Updated the SLD entry for CRM - no luck
    4. Implemented the following SAP notes:
    875986    Note Assistant: Important notes for SAP_BASIS up to 702
    1246964    Note Assistant: Master language of notes incorrect
    1262653    SPAU: New object is deleted after note is reset
    1309424    DB6: DBA Cockpit Correction Collection SAP Basis 7.01 / 7.11
    1335017    DB6:"Remove Redundant Restrictions" can return wrong results
    1349277    Note Assistant: Method cannot be implemented
    1365677    Note Assistant: Runtime error MOVE_CAST_ERROR during implmtn
    1372652    DB6: Short dump when viewing diaglog due to NULL bytes
    1373957    DB6: CX_SY_CONVERSION_OVERFLOW in new EXPLAIN
    1376543    DB6: OPTIONS parameters for backup jobs in DB13
    1378499    DB6: CLI error CLI0112E with "REORGCK_ALL" job
    1379260    DB6: Add BW query name as comment to SQL statements
    1379346    DB6: Scheduling of data collectors fails
    1381179    DB6: Incorrect values for 'number of objects in tablespace'
    1382634    DB6: Unable to create view 'SAPTOOLS.DBH_TABCLASS'
    1382996    DB6: Update of DPW Back-End in Monitored DBs does not work
    1384238    DB6: Defect scheduler on DB2 9.1 FP7/FP8 for LUW on Linux
    1387022    DBA Cockpit: Month displayed incorrectly in DB13C
    1387297    DB6: SQL-Fehler 901 during RUNSTATS and REORGCHK
    1397709    Ignore Dynpro element fields AGLT and ADEZ in SNOTE/CWB
    1398258    DB6: Job REORGCK_ALL places load on package cache
    1400843    DB6: Incorrect display of key fields in EXPLAIN
    1412719    SNOTE: error when implementing enhancement implementations
    1413008    DB6: SQL0206N in function module DB6_PM_LOCKSNAP
    1414624    DB6: Performance views if database monitors are deactivated
    1414626    DB6: Incorrect display of file system sizes of containers
    1415680    Note Assistant: Incorrect status in subsequent systems
    1421157    DB6: SQL error 204 when accessing table DBSTATC
    1425487    SE24: Error regarding READ-ONLY for complex attributes
    1426092    DB6: Incorrect DROPPED TABLE clause for tablespaces
    1426480    DB2: Incorrect display of registry values for DPF systems
    1427030    DB6: Container specifications for tablespaces not changeable
    1429082    DB6: No REORG after deactivating compression
    1429687    DB6: SQL cache performance
    1438168    DB6: REORGCHK recommendations for indexes are missing
    1444373    DB6: Loading the package cache with monitor functions
    1449482    DB6: Error message 'Command LIST_DB2DUMP failed'
    1451958    DBA Cockpit: Incorrect start time for jobs
    1452197    DB6: SQL error 100 in job REORGCK_ALL
    1452502    DBA Cockpit: Jobs are missing in central planning calendar
    1455897    DB6: Display of data classes is not updated
    1456379    DB6: No display of indexes in data classes
    1460895    DB6: SQL0104N during creation of WLM threshold
    1462415    DB6: SQL -444 error messages in system log
    1462855    DB6: Incorrect database name in HA environment
    1464800    DB6: SQL Commands executes automatically on system change
    1464858    DB6: COMPUTE_BCD_OVERFLOW during EXPLAIN Test Execute
    1469515    DB6: Runtime error GETWA_NOT_ASSIGNED_RANGE in SAPLSDB6MON
    1485313    DBA Cockpit: Incomplete system entries after SLD import
    1486972    DB6: Parameters for DB/DBM configuration cannot be changed
    1489968    DBA: DBA Cockpit WebDynpro does not care about HTTPURLLOC
    1496515    DB6: SQL error 1428N when starting the DBA Cockpit
    1501130    DB6: SQL error 802 in DB6_DIAG_COUNT_TABLE_ENTRIES
    1508074    RZ20: 'Connection' attribute does not report alerts
    1509121    DBA Cockpit: Endless loop occurs when starting DBA Cockpit
    1511803    DB6: DB_TABLE_DATA_READ does not return data
    1521525    DB6: Table display is not sorted
    1522617    DB6: Availability of BW-specific functions in DBA Cockpit
    1532114    DB6: Too many locks when collecting table history
    1536787    DBA Cockpit: WebDynpro Explain - LOADDATA requires a model
    1542311    DB6: Runtime error BCD_OVERFLOW in auto maintenance display
    1546866    DB6: Runstats_DBSTATC interprets runtime param. incorrectly
    1551729    DB6: Incorrect number of key fields in EXPLAIN
    1552812    DB6: Use of db2sap functions
    1559699    DB6: Missing data in SQL cache display
    1559967    DB6: SQL error 206 when collecting the table history
    1563327    DB6: SQL error SQL0551N when accessing SYSSTAT.TABLES
    1568800    DB6: Error when deleting alert messages
    1569592    DB6: SQL error SQL0433N in EXPLAIN
    1569669    DB6: Incomplete history for performance data
    1571365    DB6: SQL error SQL0443N in alert monitoring of DPF system
    1576094    DB6: Database error SQL1751N in partitioning wizard
    1597281    DB6: Incorrect compression displays for tables
    1599764    DB6: SQL error 1428 when calling transaction SM50
    1602403    DB6: No VOLATILE attribute after RUNSTATS or REORG job
    1613270    DB6: Runtime error DYNPRO_FIELD_CONVERSION in DBA cockpit
    1615698    DBA Cockpit: Incorrect date selection in DB13C
    1619084    DBA Cockpit: Runtime err MESSAGE_TYPE_X when alert displayed
    1619636    DBA Cockpit: Daily scheduling is not deleted
    1624436    DBA Cockpit: Errors when accessing SHM area CL_DBA_SHM_AREA
    1639631    DBA Cockpit: Failed schedulings in DB13
    1720495    Invalid deimplementation of obsolete notes by Snote tool
    Kindly suggest the correct solution to fix the issue.
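    One way to narrow this down might be to check how the scheduler routine is cataloged and to call it directly, outside the DBA Cockpit; SQL0444N with reason code 4 indicates that the library behind the cataloged routine cannot be found or accessed. A minimal sketch follows; the task name, schedule, and remarks are made up for illustration:
    -- show which library/path the scheduler routines are cataloged against
    SELECT ROUTINESCHEMA, ROUTINENAME, IMPLEMENTATION
      FROM SYSCAT.ROUTINES
     WHERE ROUTINENAME LIKE 'ADMIN_TASK%';
    -- try to reproduce the error with a direct call to the task scheduler
    CALL SYSPROC.ADMIN_TASK_ADD(
           'DBH_TEST_TASK',         -- task name (illustrative)
           NULL, NULL, NULL,        -- begin timestamp, end timestamp, max invocations
           '0 0 * * *',             -- cron-style schedule: daily at midnight (illustrative)
           'SAPTOOLS',              -- schema of the procedure to schedule
           'DBH_FWK_CLEANUP',       -- the routine from the error message above
           NULL, NULL,              -- procedure input, options
           'scheduler smoke test'); -- remarks (illustrative)
    If the direct CALL fails with the same SQL0444N, the catalog entry for the routine points at a library that is missing or unreadable on disk, which is what note 978319 addresses; recataloging the routine or moving to a fix pack level where the library exists would then be the direction to investigate.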

    Dear Deepak
    Thanks for the quick response.
    Note 1462415 - DB6: SQL -444 error messages in system log (SAP_BW 701 SP7 suggested)
    This is already implemented, as mentioned in my first message.
    Note 978319 - DB6: Incorrectly cataloged table functions
    This note cannot be implemented due to a version issue.
    Kindly suggest some more hints on top of what I have already tried.
    Regards
    Bipin
