Performance Tuning of View

Hi All,
I have 2 tables: Parts and Part_Property.
Parts
Part ID  Status
1        Complete
2        Pending
3        Complete
Part_Property
Part ID  Status    Part Type          Part String
1        Complete  Active             True
1        Complete  Data_Status        Raw
1        Complete  Temp_Verification  Valid_Test
1        Complete  Name               Screw
2        Complete  Active             False
2        Complete  Data_Status        Raw
2        Complete  Temp_Verification  Valid_Test
2        Complete  Name               Hooks
3        Complete  Active             True
3        Complete  Data_Status        Raw
3        Complete  Temp_Verification  Valid_Test
3        Complete  Name               Bolt
The above rows are a small sample of the data. Our previous code, built on LEFT OUTER JOINs, has a huge Scan Count (> 15000) and Logical Reads (> 40000), and users always complain that it is very slow.
 SQL Server Execution Times:
   CPU time = 313 ms,  elapsed time = 318 ms.
SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 0 ms.
Is there any way to overcome these performance bottlenecks?
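One common rewrite for this pattern collapses the per-property LEFT OUTER JOINs into a single pass with conditional aggregation. A minimal sketch, assuming the view pivots Part_Property by Part Type (the original view definition is not shown, so the names and shape here are guesses):
-- Sketch: one scan of each table instead of one join per property.
CREATE VIEW Part_Summary
AS
SELECT p.part_id,
       p.status,
       MAX(CASE WHEN pp.part_type = 'Active' THEN pp.part_string END) AS active_flag,
       MAX(CASE WHEN pp.part_type = 'Data_Status' THEN pp.part_string END) AS data_status,
       MAX(CASE WHEN pp.part_type = 'Name' THEN pp.part_string END) AS part_name
  FROM Parts AS p
  LEFT OUTER JOIN Part_Property AS pp
    ON pp.part_id = p.part_id
 GROUP BY p.part_id, p.status;
Comparing SET STATISTICS IO ON output before and after should show the Scan Count on Part_Property drop to one per query.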

You violated ISO-11179 naming standards. You violated First Normal Form (1NF).
You have committed EAV (Entity-Attribute-Value), a huge design flaw. We never, never, never mix data and metadata in a table. Using "VW_" in the names of VIEWs is called "Volkswagen programming", another noob error.
This is a pretty simple and very clean EAV example. In practice you will see one table, descriptively named something like "DATA", all NULL-able columns, no attempt at keys, and worse. Much worse.
CREATE TABLE Attributes_Values 
(attribute_name VARCHAR (10) NOT NULL,
 attribute_value VARCHAR (50) NOT NULL,
 PRIMARY KEY (attribute_name, attribute_value));
INSERT INTO Attributes_Values 
VALUES ('LOCATION', 'bedroom'),
       ('LOCATION', 'dining room'),
       ('LOCATION', 'bathroom'),
       ('LOCATION', 'courtyard'),
       ('EVENT', 'verbal aggression'),
       ('EVENT', 'peer'),
       ('EVENT', 'bad behavior'),
       ('EVENT', 'other');
CREATE TABLE Entities 
(physical_row_locator INTEGER IDENTITY (1,1) NOT NULL,
 generic_entity_id INTEGER, 
 attribute_name VARCHAR (10) NOT NULL, 
 attribute_value VARCHAR (50) NOT NULL,
  FOREIGN KEY (attribute_name, attribute_value)
   REFERENCES Attributes_Values (attribute_name, attribute_value));
INSERT INTO Entities 
VALUES (1, 'LOCATION', 'bedroom'),
       (1, 'EVENT', 'other'),
       (1, 'EVENT', 'bad behavior'),
       (2, 'LOCATION', 'bedroom'),
       (2, 'EVENT', 'other'),
       (2, 'EVENT', 'verbal aggression'),
       (3, 'LOCATION', 'courtyard'),
       (3, 'EVENT', 'other'),
       (3, 'EVENT', 'peer');
Please notice that there is nothing to prevent me from inserting a row (2, 'AUTHORITY', 'police') or (42, 'SHOESIZE', '10') that may or may not make sense. No two generic_entity_id's are required to have the same structure when you join their attributes together. There is no restriction on the attribute names or values; every typo is a new attribute or value.
The poster wanted a simple (Location, Event, COUNT(*) ) report. That is about as basic as you can get. Here is one shot at it. 
WITH Locations (generic_entity_id, location_name) -- notice aliasing!
AS
(SELECT generic_entity_id, attribute_value
   FROM Entities AS L0
  WHERE attribute_name = 'LOCATION'), 
Events (generic_entity_id, incident_type) -- notice aliasing!
AS
(SELECT generic_entity_id, attribute_value
   FROM Entities AS E0
  WHERE attribute_name = 'EVENT'),
Incidents (incident_nbr, location_name, incident_type) -- notice aliasing!
AS
(SELECT L1.generic_entity_id, L1.location_name, E1.incident_type
   FROM Locations AS L1,
        Events AS E1
  WHERE L1.generic_entity_id = E1.generic_entity_id)
SELECT location_name, incident_type, COUNT(*) AS incident_cnt
  FROM Incidents 
 GROUP BY location_name, incident_type;
This is a general pattern for EAV queries. Each column is extracted from the Attribute-Value table. A query with (n) columns becomes an n-way self-join under the covers, and each of those working tables will need (n-1) joins back to the Entities table. This gives us something like a table, but it has no data integrity, no guarantee of a key, no constraints, and no numeric or temporal data types.
The use of CTEs is simply to make the query easier to read. It does not help performance. You also have to give particular names to the generic data as you extract it.  How do you get everyone to agree on those names? 
Did you want to add a 'FINE' attribute to the table? Well, the values can hold only character data, so you now have the overhead of CAST() calls. In fact, since you cannot predict what will go into a column, you have to use the most general data type you can find to cast to anything else -- NVARCHAR(MAX). But you will probably use a VARCHAR(<big number here>) instead.
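To make the cast overhead concrete, here is the hypothetical FINE attribute pulled back out of the value column; TRY_CAST is SQL Server 2012 and later, and older versions are stuck with a plain CAST() that throws on the first stray typo:
SELECT generic_entity_id,
       TRY_CAST(attribute_value AS DECIMAL(8,2)) AS fine_amt
  FROM Entities
 WHERE attribute_name = 'FINE';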
I worked for a software company that used an EAV model for a package to compute insurance salesmen's commissions based on an elaborate tiered scheme. They got paid based on the performance of their personal sales, their team's sales, their district's sales and finally the company as a whole. This is a fairly common way to compute commissions.
The reason given for the EAV model was that the commission algorithm could be easily changed by end users on the fly. The bad news was that it was changed by end users on the fly. Orphan rows could not be removed for fear of breaking something, even if you could figure out the chains of GUIDs used to link things together. Servers filled with junk data and the system locked up within a few months.
The bigger problem is that EAV has no data integrity. Consider the constraints you need to add to the simple Attributes_Values table to make it almost work:
CREATE TABLE Attributes_Values 
(attribute_name VARCHAR (10) NOT NULL,
 attribute_value VARCHAR (50) NOT NULL,
 PRIMARY KEY (attribute_name, attribute_value),
 CONSTRAINT Valid_Attribute_Names
 CHECK (attribute_name IN ('LOCATION', 'EVENT')),
 CONSTRAINT Generic_DRI
 CHECK (CASE WHEN attribute_name = 'LOCATION'
              AND attribute_value 
               IN ('bedroom', 'dining room', 'bathroom', 'courtyard')
             THEN 'T'
             WHEN attribute_name = 'EVENT' 
              AND attribute_value 
               IN ('verbal aggression', 'peer', 'bad behavior', 'other')
             THEN 'T' ELSE 'F' END = 'T'));
INSERT INTO Attributes_Values 
VALUES ('LOCATION', 'bedroom'),
       ('LOCATION', 'dining room'),
       ('LOCATION', 'bathroom'),
       ('LOCATION', 'courtyard'),
       ('EVENT', 'verbal aggression'),
       ('EVENT', 'peer'),
       ('EVENT', 'bad behavior'),
       ('EVENT', 'other');
Now add a FINE attribute and constrain it to non-negative money amounts. Now add a constraint that the amount of the fine cannot be over $5.00 if the event was 'verbal aggression' in a bedroom. 
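Even the first half of that challenge bloats the generic CHECK(). A sketch, assuming the hypothetical FINE attribute and SQL Server's TRY_CAST; the $5.00 rule cannot be a CHECK() at all, since it spans several EAV rows:
-- Sketch only: every new attribute means re-writing the whole constraint,
-- and Valid_Attribute_Names needs the same surgery.
ALTER TABLE Attributes_Values DROP CONSTRAINT Generic_DRI;
ALTER TABLE Attributes_Values ADD CONSTRAINT Generic_DRI
CHECK (CASE WHEN attribute_name = 'LOCATION'
             AND attribute_value
              IN ('bedroom', 'dining room', 'bathroom', 'courtyard')
            THEN 'T'
            WHEN attribute_name = 'EVENT'
             AND attribute_value
              IN ('verbal aggression', 'peer', 'bad behavior', 'other')
            THEN 'T'
            WHEN attribute_name = 'FINE'  -- hypothetical money attribute
             AND TRY_CAST(attribute_value AS DECIMAL(8,2)) >= 0.00
            THEN 'T' ELSE 'F' END = 'T');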
Try to write a single DEFAULT clause for all the entities crammed into one column. Impossible, unless they all happen to use NULL. 
The same thing done properly would start with a sane schema design. There should be separate referenced tables or CHECK() constraints for Locations and Events, since they are attributes of something.
CREATE TABLE Incident_Reports
(incident_report_nbr CHAR(12) NOT NULL PRIMARY KEY, 
 location_code VARCHAR(15) NOT NULL
   REFERENCES Locations (location_code)
   ON DELETE CASCADE
   ON UPDATE CASCADE,
 incident_type VARCHAR(20) NOT NULL
   REFERENCES Incident_Types (incident_type)
   ON UPDATE CASCADE,
 etc); 
Entities, attributes and values are where they belong, so the query is now trivial: 
SELECT location_code, incident_type, COUNT(*)
  FROM Incident_Reports
 GROUP BY location_code, incident_type;
Then I could get a fancier report with a simple change to the GROUP BY clause:
SELECT location_code, incident_type, COUNT(*)
  FROM Incident_Reports
 GROUP BY ROLLUP (location_code, incident_type);
The EAV version is left as an exercise for the reader. 
There is such a thing as "too" generic.  "To be is to be something in particular; to be nothing in particular or everything in general is to be nothing at all." --Law of Identity, Parmenides the Eleatic (circa BCE 490) 
References
For those who are interested, here are a couple of links to articles on EAV that I found on the net: 
Generic Design of Web-Based Clinical Databases 
http://www.jmir.org/2003/4/e27/ 
The Attributes_Values/CR Model of Data Representation 
http://ycmi.med.yale.edu/nadkarni/eav_CR_contents.htm 
An Introduction to Entity-Attribute-Value Design for Generic Clinical Study Data Management Systems 
http://ycmi.med.yale.edu/nadkarni/Introduction%20to%20EAV%20systems.htm 
Data Extraction and Ad Hoc Query of an Entity-Attribute-Value Database 
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubme... 
Exploring Performance Issues for a Clinical Database Organized Using an Entity-Attribute-Value Representation 
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubme... 
A really good horror story about this kind of disaster is at:
http://www.simple-talk.com/opinion/opinion-pieces/bad-carma/
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

Similar Messages

  • [ADF-11.1.2] Proof of view performance tuning in oracle adf

    Hello,
    Take an example of : http://www.gebs.ro/blog/oracle/adf-view-object-performance-tuning-analysis/
    It tells me perfectly how to tune VO to achieve performance, but how to see it working ?
    For example: I Set Fetch size of 25, 'in Batch of' set to 1 or 26 I see following SQL Statement in Log
    [1028] SELECT Company.COMPANY_ID, Company.CREATED_DATE, Company.CREATED_BY, Company.LAST_MODIFY_DATE, Company.LAST_MODIFY_BY, Company.NAME FROM COMPANY Company
    as if it is fetching all the records from the table at once, no matter what the batch size is. If I am seeing 50 records on the UI at a time, then I would expect at least 2 SELECT statements fetching 26 records each if I set Batch Size to 26... or at least 50 SELECT statements for Batch Size set to '1'.
    Please tell me how to see view performance tuning working? How can one say that setting batch size = '1' is bad for performance?

    Anandsagar,
    why don't you just read up on http://download.oracle.com/docs/cd/E21764_01/core.1111/e10108/adf.htm#CIHHGADG
    there are more factors influencing performance than just query. Btw, indexing your queries also helps to tune performance
    Frank

  • Idoc views updation, Workflow, Performance tuning techniques!

    Hello,
    Greetings for the Day!
    Currently my client is facing the following issues and seeks help/attention on them. Following is the client's current landscape.
    Sector – Mining
    SAP NW MDM 7.1 SP 09
    SAP ECC EHP 5
    SAP PI 7.0
    List of Issues:
    Classification (CLFMAS idoc) and Quality (MATQM idoc) views try to update before the MATMAS idoc updates and creates the material in the ECC table.
    At workflow level, how to assign incoming record approval request, put them in mask like functionality and approve them as bulk records.
    Performance tuning techniques.
    Issue description:
    Classification (CLFMAS idoc) and Quality (MATQM idoc) views try to update before the MATMAS idoc updates and creates the material in a table.
    Currently, the client's MATMAS idoc updates Basic data1 and Basic data2 along with other views and the material gets updated in the ECC table; but whenever a record has classification and quality views to update via the CLFMAS and MATQM idocs, these 2 idocs try to search the material ECC table before the respective MATMAS has updated the table. As the basic data has not been created for the material, the entire idoc fails. Kindly suggest a solution for how we can align the process so that the classification and quality views get updated only after the basic data views have been updated in the material master. Is there any way we can make the views update sequentially?
    At workflow level, how to assign incoming record approval request, put them in mask like functionality and approve them as bulk records.
    Currently, super users are configured within the system; they have 2 roles assigned to their IDs: 1. custodian and 2. steward. In the custodian role the user assigns the MDM material number, checks other relevant assignments on the record creation request, and approves the material request, and the request then goes to the steward role. As 1 user has 2 roles, the same user need not check everything again in the steward role; hence the user wants that, for whatever requests come into the steward user's inbox, he shall be able to create one single group for those 20-30 records and, with one single click, all the materials shall be approved and disappear from his workflow level. Is there any way by which this can be achieved?
    Performance tuning techniques.
    Currently, the client's MDM system response time is very, very slow; after a single click of an action it takes a long time for the action to reflect within MDM. The material database is around 2.5 lakh (250,000) records, a standard structure has been used, and it is not a complex landscape structure. Both the ECC and MDM servers are on a single piece of hardware, with only logically separate DBs. Kindly suggest performance techniques, if any.
    Kindly suggest !
    Regards,
    Neil

    Hi Neil,
    Kindly try the below options
    -> Performance tuning techniques.
    SAP's recommendation is to put the application, server and database in different boxes. I am not sure how you managed to install both MDM and ECC in the same box, but that is a big NO NO.
    Make sure there is enough hardware support for a separate MDM box.
    -> Classification (CLFMAS idoc) and Quality (MATQM idoc) views try to update before the MATMAS idoc updates and creates the material in a table.
    MDM only sends out an XML file, so you definitely need a middleware (PI) to do the conversion.
    You can use PI logic (ccBPM) to send the IDocs in the necessary sequence.
    Else you can maintain this logic in the processing code of the ECC system.
    PS: The PI option is more recommended.
    Regards,
    Vag VIgnesh Shenoy

  • Performance tuning from Basis point of View ?

    Hi,
    Can anybody help me in doing performance tuning from a Basis point of view?
    What parameters are involved in it, what values need to be initially assigned, and what factors need to be kept in mind?
    Thanks in advance.

    wrong forum??
    not a security related question??

  • Performance tuning in XI, (SAP Note 857530 )

    Could anyone please tell me where to find SAP notes?
    I am looking for "SAP Note 857530"
    Integration process performance (in SAP XI),
    or how can I view the performance of the integration process? Or exactly how is performance tuning done?
    pls help,
    Best regards,
    verma.

    Hi,
    SAP Note:
    Symptom
    Performance bottlenecks when executing integration processes.
    Other terms
    ccBPM
    BPE
    Performance
    Integration Processes
    Solution
    This note refers to all notes that are concerned with improving the performance of the ccBPM runtime.
    This note will be continually updated as improvements are made.
    Also read the document "Checklist: Making Correct Use of Integration Processes" in the SAP Library documentation, on SAP Service Marketplace, and in SDN; it contains information about performance issues to bear in mind when you model integration processes.
    Refer to the appended notes and maintain the default code changes by using SNOTE, or by importing the relevant service packs. Note that some performance improvements cannot be implemented by using SNOTE and are instead only available in service packs.
    Regards
    vijaya

  • Performance Tuning in IR

    Hello All,
    We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have over 3-4 crores (30-40 million) rows each. We created the .oce connection file using the 'Oracle Net' option. The Oracle client version is 10g. We had earlier created pivots, charts and reports in those .bqy files but had to delete those wherever possible to decrease the processing time for generating the reports.
    But deleting those from the file and retaining just the results section (the bare minimum part of the file) has still not fully solved the performance issue. Even now, for some reports, the system gives the error message 'Out of Memory' at the time of processing. The memory of the client PCs from which the reports are generated is 1 - 1.5 GB. For some reports it even takes 1-2 hours to save the results after processing. In some cases the PCs hang during processing. When we extract the query of those reports into SQL and run them in TOAD/SQL*Plus, they do not take nearly as much time as in IR.
    Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. with respect to performance tuning for IR. All replies would be highly appreciated.
    Regards,
    Raj

    SQL*Plus & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
    When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply the sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed less efficiently than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
    To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate the DAS on your server - it can only transfer 2 GB before it dies and restarts, and your requested document hangs. You can replicate the DAS multiple times and should do so to make sure there are enough resources available for any concurrent users to make the necessary requests and have data delivered to them.
    To tune your bqy documents...
    1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to be put on your new staging table.
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
    3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on the table that forces 0 rows in the table until the report is invoked (see the sketch after this list). Hyperion will paginate every report BEFORE it even tells the user it has results, so this will prevent the user from waiting an hour while 1000s of pages are paginated across multiple reports.
    4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
    5) Save compressed documents.
    6) Unless this document can be run as a job, there should be NO results stored with the document; but if you do save results with the document, store the calculations too so you at least don't have to wait for them to pass again.
    7) Remove all duplicate images and keep the image file size small.
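    For item 3, the dummy filter can be as blunt as a predicate that is never true; a hypothetical request-line fragment (table and column names are made up):
    SELECT report_col_1, report_col_2
      FROM report_staging
     WHERE 1 = 0  -- swapped out programmatically when the user invokes the report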
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very, very, very small bits of file size and, as long as there are no excessively large images, the same is true for Reports, Pivots and Charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report, because by then it has already been delivered to them and cached (in Workspace and in the web client).
    Edited by: user10899957 on Feb 10, 2009 6:07 AM

  • Performance Tuning in CRM

    Hi All,
    Can anyone help me with important OSS Notes on CRM performance tuning, CRM IC WebClient performance, and web client channel performance?
    Thanks & Regards,
    Sandeep

    Hi Sandeep,
    Please take a look at the following SAP NOTES for performance tuning in CRM 2007 Webclient:
    Note 1048388 - General Performance improvements of BSP transactions
    Note 1228076 - CRM Web UI: Frontend performance DDLB and input changes
    Note 1246144 - Advanced Search right side hidden when resolution: 1024X768
    Note 1242599 - IE7: Message # shines through the "More" dropbox
    Note 1240769 - Native DDLB value not set when its disabled
    Note 1237437 - Config Mode: FireFox support and Visual enhancements
    Note 1230443 - Scrollbar missing or not showing up on some CRM views
    Best Regards,
    Gabriel

  • Performance Tuning 10g

    Hi All,
    I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
    I have the following queries in this respect:
    - How should I start... Should I use tkprof or AWR.
    - How to enable these tools.
    - How to view its reports
    - What should I check in these reports
    - Will just increasing RAM improve performance, or should we also increase the hard disk?
    - What is CPU Cost and I/O?
    Please help.
    Thanks & Regards.

    dbdan wrote:
    Hi All,
    I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
    I have the following queries in this respect:
    - How should I start... Should I use tkprof or AWR.
    - How to enable these tools.
    - How to view its reports
    - What should I check in these reports
    - Will just increasing RAM improve performance, or should we also increase the hard disk?
    - What is CPU Cost and I/O?
    Please help.
    Thanks & Regards.
    Here is something you might try as a starting point:
    Capture the output of the following (to a table, send to Excel, or spool to a file):
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$OSSTAT
    ORDER BY
      STAT_NAME;
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SYS_TIME_MODEL
    ORDER BY
      STAT_NAME;
    SELECT
      EVENT,
      TOTAL_WAITS,
      TOTAL_TIMEOUTS,
      TIME_WAITED
    FROM
      V$SYSTEM_EVENT
    WHERE
      WAIT_CLASS != 'Idle'
    ORDER BY
      EVENT;
    Wait a known amount of time (5 minutes or 10 minutes).
    Execute the above SQL statements again.
    Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
    To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
    You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
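    A hypothetical way to mechanize the subtraction described above is to snapshot one of those views into a scratch table and diff it after the wait; a sketch against V$SYS_TIME_MODEL (the table name and aliases are made up):
    -- First snapshot.
    CREATE TABLE time_model_snap AS
    SELECT 1 AS snap_id, stat_name, value FROM v$sys_time_model;
    -- Wait the known 5-10 minutes, then take the second snapshot.
    INSERT INTO time_model_snap
    SELECT 2, stat_name, value FROM v$sys_time_model;
    -- Differences greater than 0, largest first.
    SELECT s2.stat_name, s2.value - s1.value AS delta
      FROM time_model_snap s1, time_model_snap s2
     WHERE s1.stat_name = s2.stat_name
       AND s1.snap_id = 1 AND s2.snap_id = 2
       AND s2.value - s1.value > 0
     ORDER BY delta DESC;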
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • LDAP Performance Tuning In Large Deployments - numconnect parameter

    LDAP Performance Tuning In Large Deployments - numconnect parameter
    Tuning the LDAP connections
    (numconnect parameter)
    This parameter translates directly into the number of unidas processes that will
    be launched when Calendar Server is started. A process takes time to load, uses
    RAM, and when active, CPU cycles. And, unidas maintains an LDAP client
    connection to a Directory Server which can only support a fixed number of these
    connections. Since a calendar client does not require constant directory access,
    having a matching number of unidas processes (to match uniengd "client"
    processes) is not a good configuration.
    Basically, a calendar client will make many requests for LDAP information, even
    if the event information being retrieved is not currently viewable. For example,
    if the calendar client is displaying a week view with 20 events and each event
    has 5 attendees, that will translate into at least 100 separate ldap search
    requests for the given name and surname of each attendee. What this means is
    that an "active" calendar user will require the services of a calendar server
    unidas connection quite often.
    The recommendation is that you increase the number of unidas connections
    to match the number of "active" calendar users. Our experience is that
    at least 20% of the number of configured users (lck_users from the
    /users/unison/misc/unison.ini file) are actually logged in, and 10% of
    those calendar users are active. For example, if have 3000 configured
    calendar users, 600 configured are logged in and 10% of the logged in
    are active, which would translate into at least 60 unidas connections.
    Keep in mind that configured vs logged in vs active might be different at each
    customer site, so please adjust your number of unidas connections
    accordingly. To set this up, edit the /users/unison/log/unison.ini file and add
    the numconnect parameter to the section noted (where "hostname" is the name of
    your local host):
    [LCK]
    lck_users = 600
    [hostname,unidas]
    numconnect = 60
    The calendar server will need to be restarted after making changes
    to the /users/unison/log/unison.ini file, before those changes will
    take effect.
    Note: Due to some architectural changes in the Calendar Server 4.x, the total
    number of DAS connections should never be set higher than 250.
    The recommendation for numconnect would be a maximum of 5% of logged-on users.
    However, keep in mind that 250 DAS connections is a very high number.
    Example:
    [LCK]
    lck_users = 5000
    [hostname,unidas]
    numconnect = 250

    Thank you very much. I am now looking for a good performance tuning book written by Jonathan Lewis. I don't think Jonathan can come to Spain and give lessons... Anyway, I will email him...
    But could you please clarify 2 points for me:
    1- Should I manually modify memory parameters like buffer cache, shared pool, large pool etc. if those areas are spotted as small and as causes of performance problems in the AWR, ADDM or ASH reports, even if the memory is automatically managed?
    If yes, why did Oracle name it "automatic memory management" if I have to set some memory values manually?
    2- When the ADDM report suggests that I increase the SGA size, where does ADDM get this recommendation from? I mean, is the recommendation based on statistics collected from both Oracle and the OS? I am asking because, in a report I ran 3 weeks ago, ADDM suggested I increase the SGA to 10GB (total memory of the server is 16GB). I made the change and from that moment the server has been swapping... and now the ADDM report suggests that I increase the SGA again, to 12GB.
    Best regards

  • Performance Tuning Question

    Greetings,
    I did a few searches for any topics related to this and
    haven't found anything relevant - if I'm missing something obvious
    I apologize.
    We are doing some performance tuning of a CFMX 7.0.2 system
    running on Solaris. Of the many things we're doing one is to run
    truss on the cfmx processes to find out what in fact it's doing.
    The following is an excerpt from one of the truss outputs:
    stat64("/opt/coldfusionmx7/runtime/../lib/macromedia/jdbc/sqlserver/SQLServerURLParser.class\0",
    0x254FB860, 0x254FB9BC) = -1 Err#2
    stat64("/opt/coldfusionmx7/runtime/../gateway/lib/macromedia/jdbc/oracle/OracleURLParser.class\0",
    0x254FB860, 0x254FB9BC) = -1 Err#2
    Err #2 means "File not found" in essence
    These lines show up quite a bit in the output - and we're
    curious as to why it's trying to find those particular classes at
    all and why in those locations - there are other entries where it
    looks like it's going through a series of paths that it knows about
    trying to find these entries. As we use Oracle as our database what
    could be the reason it's looking for SQLServer? Finally does anyone
    know of a way to stop the attempt to find these classes and save
    the system processing time to give us back those cycles for real
    work?
    Regards,
    Scott

    Please try:
    Create View View3 
    As
    SELECT a.Col1, a.Col2 
    From dbo.TableA A 
    WHERE NOT Exists (SELECT 1 From dbo.TableB B With(NoLock) WHERE A.Col1 = B.Col1)
    UNION 
    SELECT Col1, Col2 From dbo.TableB
    Also, please make sure that the INDEXes on Col1 on both tables are NOT fragmented and that your STATISTICS are up to date.
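    A quick way to check both (a sketch; dbo.TableB is from the example above, repeat for dbo.TableA):
    -- Report fragmentation for each index on the table ...
    SELECT i.name AS index_name,
           ps.avg_fragmentation_in_percent
      FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TableB'),
                                          NULL, NULL, 'LIMITED') AS ps
      JOIN sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
    -- ... then refresh the statistics.
    UPDATE STATISTICS dbo.TableB;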
    Best Wishes, Arbi; Please vote if you find this posting was helpful or Mark it as answered.

  • Performance tuning through OEM

    Hi,
    I am unable to run the performance tuning pack through OEM. Initially, when I was not connected to any production database, it worked. Now whenever I click Performance Manager, Performance Manager Overview, etc., it simply hangs, whereas diagnostic packs such as Oracle Expert all work. I have increased the java_pool_size from 24M to 50M both on the remote database machine and on the server, but I am still unable to run it against the remote database machine. I have configured RMAN on the remote database and am taking full backups and logical backups through it, but the performance pack hangs whenever I run it. I am new to DBA work. The statistical report collected through statspack gives me no idea of what to tune. Please help me with what to do in this regard to tune the database.
    What are the vital points one has to tune, and how does one tune them?

    Brain heart,
    As you have mentioned that you are new to tuning, I shall say the first thing is to understand what exactly we are hunting for. Please read the Performance Tuning guide from the Oracle docs.
    You have not mentioned any version for your db, so I am assuming 10g.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
    Also there is an excellent guide in the 9i docs, called Performance Planning; read that one too.
    And to understand tuning and its various know-hows, get these books:
    Optimizing Oracle Performance -- Cary Millsap
    Oracle Wait Interface -- Richmond Shee
    Forecasting Oracle Performance -- Craig Shallahamer
    These will help you understand a lot of things, which will help for sure.
    Aman....

  • Performance Tuning for OBIEE Reports

    Hi Experts,
    I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
    The key point is that the dimension table is used in most of the OOTB reports,
    so all the reports use the other three snowflake tables in their join conditions, due to which the reports take longer than ever, like 10 minutes.
    Can anyone suggest good performance tuning tips to tune the reports?
    I created some indexes on the materialized view columns and on the dimension table columns.
    I created the materialized views with cache enabled and refreshing only once in 24 hours, etc.
    Is there anything I can do to improve performance, or do I have to consider re-designing the Physical layer without the snowflake?
    Please provide valuable suggestions and comments.
    Thank You
    Kumar

    Kumar,
    Most of the performance tuning should be done at the back end, so calculate all the aggregates in the Repository itself and create a fast refresh for the MV. You can also do one thing: schedule an iBot to run the report every hour or so, so that the report data will be cached and, when the user runs the report, the BI Server extracts the data from the cache.
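    A minimal fast-refresh sketch (all names hypothetical; fast refresh also requires materialized view logs on the base tables, and an aggregate MV needs COUNT(*) plus a COUNT per SUM to be fast-refreshable):
    -- Log on the base fact table, capturing the columns the MV uses.
    CREATE MATERIALIZED VIEW LOG ON sales_fact
      WITH SEQUENCE, ROWID (prod_key, amount) INCLUDING NEW VALUES;
    -- Aggregate MV that Oracle can fast-refresh on commit.
    CREATE MATERIALIZED VIEW mv_sales_by_prod
      REFRESH FAST ON COMMIT
    AS
    SELECT prod_key,
           SUM(amount) AS total_amount,
           COUNT(amount) AS amount_cnt,
           COUNT(*) AS row_cnt
      FROM sales_fact
     GROUP BY prod_key;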
    Hope that helps
    ~Srix

  • Performance Tuning Tips

    Dear All,
    In our project we are facing a lot of problems with performance; users are complaining about the poor performance of a few reports and so on. We are in the process of fine-tuning the reports by following all the methods/suggestions provided by SAP (like removing SELECT queries from loops, FOR ALL ENTRIES, binary search etc.)
    But still I want to know from you people what we can check from a BASIS perspective (all the settings) and also from an ABAP perspective to improve the performance.
    And I also have one more query: what is "table statistics", and what is the use of it...
    Please give your valuable suggestions on improving the performance.
    Thanks in Advance !

    Hi
    <b>Ways of Performance Tuning</b>
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select over more than one internal table
    <b>Selection Criteria</b>
    1.     Restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code using CHECK statement. 
    2.     Select with selection list.
    <b>Points # 1/2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    <b>Select Statements   Select Queries</b>
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    <b>Point # 1</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    <b>Point # 2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    <b>Point # 3</b>
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    <b>Point # 4</b>
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    <b>Point # 5</b>
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    <b>Select Statements           contd..  SQL Interface</b>
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    <b>Point # 1</b>
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    <b>Point # 2</b>
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    <b>Point # 3</b>
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    <b>Select Statements       contd…           Aggregate Functions</b>
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    <b>Select Statements    contd…For All Entries</b>
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    <u>Points that must be considered for FOR ALL ENTRIES</u>
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    <b>Select Statements    contd…  Select Over more than one Internal table</b>
    1.     It is better to use a view instead of nested Select statements.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    <b>Point # 1</b>
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    <b>Point # 2</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    <b>Point # 3</b>
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    <b>Internal Tables</b>
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2.." accelerates the task of updating a line of an internal table.
    <b>Point # 2</b>
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    <b>Point # 3</b>
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    <b>Point # 5</b>
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    <b>Point # 6</b>
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to "LOOP-APPEND-ENDLOOP".
    10.   "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably as compared to "READ-LOOP-DELETE-ENDLOOP".
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to "DO-DELETE-ENDDO".
    <b>Point # 7</b>
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 8</b>
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    <b>Point # 9</b>
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 10</b>
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 11</b>
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using "ITAB2[ ] = ITAB1[ ]" is faster as compared to "LOOP-APPEND-ENDLOOP".
    13.   Specify the sort key as restrictively as possible to make the program run faster.
    <b>Point # 12</b>
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 13</b>
    "SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
    <b>Internal Tables         contd…  Hashed and Sorted tables</b>
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.     For partial sequential access sorted tables are more optimized as compared to hashed tables.
    <b>Point # 1</b>
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    <b>Point # 2</b>
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    <b>Reward if useful</b>

  • Performance tuning techniques

    I am looking to compile a list of the major performance tuning techniques that can be implemented in an ABAP program. 
    Appreciate any feedback
    J

    Hi,
    check these:
    http://www.erpgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Performance tuning for Data Selection Statement 
    For all entries
    The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of 
    entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the 
    length of the WHERE clause. 
    The plus
    Large amount of data 
    Mixing processing and reading of data 
    Fast internal reprocessing of data 
    Fast 
    The Minus
    Difficult to program/understand 
    Memory could be critical (use FREE or PACKAGE size) 
    Some steps that might make FOR ALL ENTRIES more efficient:
    Removing duplicates from the driver table 
    Sorting the driver table 
    If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
    FOR ALL ENTRIES IN i_tab
      WHERE mykey >= i_tab-low and
            mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data 
    Mixing processing and reading of data 
    Easy to code - and understand 
    The minus:
    Large amount of data 
    when mixed processing isn't needed 
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data 
    Similar to Nested selects - when the accesses are planned by the programmer 
    In some cases the fastest 
    Not so memory critical 
    The minus
    Very difficult to program/understand 
    Mixing processing and reading of data not possible 
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    The runtime analysis (SE30)
    SQL Trace (ST05)
    Tips and Tricks tool
    The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves the performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
    Select DISTINCT 
    ORDER BY / GROUP BY / HAVING clause 
    Any WHERE clause that contains a subquery or IS NULL expression 
    JOINs 
    A SELECT... FOR UPDATE 
    If you want to explicitly bypass the buffer, use the BYPASS BUFFER addition to the SELECT clause.
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause, it could be better to avoid using SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
    Regds
    Anver
    if this helped, please mark points

  • Planning to start the performance tuning but....

    Friends,
    Database OS: RHEL AS 3.0
    Database: Oracle Release 9.2.0.4.0
    Number of Tables: 503
    TableSpace size - 1.8GB out of 3GB
    Max. records in a table - 1 million and it's increasing...
    Our DB Optimizer mode is - CHOOSE (is it RBO?)
    We are not using Enterprise Manager and have not installed any tuning scripts like statspack etc....
    Currently we are taking user-managed backups without any problem, so we have been continuing the same from 2004 onwards.
    Now we want to tune our database. (We have never tuned our database.)
    We would like to change our optimizer from RBO to CBO.
    Can anybody tell me the first step for performance tuning?
    Please don't suggest the Oracle doc, I'm already studying it... it's taking time...
    In the meantime......
    Step 1: Should I use ANALYZE TABLE or the dbms_stats package?
    We have not used analyze or dbms_stats at all. So can I start with either of the above, or do you have any other suggestions for the 1st step?
    Thanks

    our manager feels that if we tune our db the performance will be more than compared to the current one.
    You have a mystic manager then; ask him what kind of "feelings" he has about my database ;) There is no place for feelings in this game. This is the life cycle to be successful: testing -> reporting -> analyzing -> take needed actions -> re-testing -> reporting -> analyzing..
    so while you are surely reading the documentation;
    Oracle9i Database Performance Planning Release 2 (9.2)
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96532/toc.htm
    Oracle9i Database Performance Tuning Guide and Reference Release 2 (9.2)
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm
    first thing you have to do is to set up an appropriate test environment with the same OS and Oracle releases and parameters;
    -- some of them to check
    SELECT NAME, VALUE
      FROM v$system_parameter a
    WHERE a.NAME IN
           ('compatible', 'optimizer_features_enable',
            'optimizer_mode', 'pga_aggregate_target', 'workarea_size_policy',
            'db_file_multiblock_read_count', .. )
    and of course the schema set and data amount. Then you run your application under load and take statspack snapshots, and do the same after collecting statistics;
    -- customize for your configuration, schema level object statistics
    exec dbms_stats.gather_schema_stats( ownname =>'YOUR_SCHEMA', degree=>16, options=>'GATHER AUTO', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity=>'ALL');
    -- check your system stats, with sys account
    SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
    After you have the base report and the report after the change, compare the top 5 waits, the top queries which have dramatic logical I/O changes, etc. At this point you go into session-based tuning in order to understand why a specific query performs worse with CBO compared to RBO. You need to be able to create and read execution plans and I/O statistics at least. Here are some quick introductions;
    http://www.bhatipoglu.com/entry/17/oracle-performance-analysis-tracing-and-performance-evaluation
    http://psoug.org/reference/explain_plan.html
    http://coskan.wordpress.com/2007/03/04/viewing-explain-plan/
    and the last words again go to your manager: how does he "feel" about a 10gR2 migration? With Grid Control, AWR, ADDM and ASH, performance tuning has evolved a lot. An important note here: after 10g, RBO is dead (unsupported).
    Best Regards,
    H.Tonguç YILMAZ
    http://tonguc.yilmaz.googlepages.com/
    Message was edited by:
    TongucY
