Poorly Performing SQL Queries and AWR

Version: 10.2.0.4 on OEL 5.
Snapshot duration is 30 minutes. It's long, but unfortunately that is what is available right now.
I have three queries in my database that are performing quite slowly. I took AWR reports for these queries, and they are as follows:
Query 1:
======
    Plan Hash           Total Elapsed                 1st Capture   Last Capture
#   Value                    Time(ms)    Executions       Snap ID        Snap ID
1   3714475000                113,449             9         60539          60540
-> % Total DB Time is the Elapsed Time of the SQL statement divided  into the Total Database Time multiplied by 100
Stat Name                                Statement   Per Execution % Snap
Elapsed Time (ms)                           113,449       12,605.4     3.9
CPU Time (ms)                               108,620       12,068.9     4.0
Executions                                        9            N/A     N/A
Buffer Gets                                4.25E+07    4,722,689.0    11.7
Disk Reads                                        0            0.0     0.0
Parse Calls                                       9            1.0     0.0
Rows                                             20            2.2     N/A
User I/O Wait Time (ms)                           0            N/A     N/A
Cluster Wait Time (ms)                            0            N/A     N/A
Application Wait Time (ms)                        0            N/A     N/A
Concurrency Wait Time (ms)                        0            N/A     N/A
Invalidations                                     0            N/A     N/A
Version Count                                     2            N/A     N/A
Sharable Mem(KB)                                252            N/A     N/A

Query 2:
======
    Plan Hash           Total Elapsed                 1st Capture   Last Capture
#   Value                    Time(ms)    Executions       Snap ID        Snap ID
1   4197000940              1,344,458             3         60539          60540
-> % Total DB Time is the Elapsed Time of the SQL statement divided   into the Total Database Time multiplied by 100
Stat Name                                Statement   Per Execution % Snap
Elapsed Time (ms)                         1,344,458      448,152.7    46.5
CPU Time (ms)                             1,353,670      451,223.3    49.7
Executions                                        3            N/A     N/A
Buffer Gets                                3.42E+07   11,383,856.7     9.4
Disk Reads                                        0            0.0     0.0
Parse Calls                                       3            1.0     0.0
Rows                                             48           16.0     N/A
User I/O Wait Time (ms)                           0            N/A     N/A
Cluster Wait Time (ms)                            0            N/A     N/A
Application Wait Time (ms)                        0            N/A     N/A
Concurrency Wait Time (ms)                        0            N/A     N/A
Invalidations                                     0            N/A     N/A
Version Count                                     2            N/A     N/A
Sharable Mem(KB)                                270            N/A     N/A

Query 3:
======
    Plan Hash           Total Elapsed                 1st Capture   Last Capture
#   Value                    Time(ms)    Executions       Snap ID        Snap ID
1   2000299266                104,060             7         60539          60540
-> % Total DB Time is the Elapsed Time of the SQL statement divided   into the Total Database Time multiplied by 100
Stat Name                                Statement   Per Execution % Snap
Elapsed Time (ms)                           104,060       14,865.7     3.6
CPU Time (ms)                               106,150       15,164.3     3.9
Executions                                        7            N/A     N/A
Buffer Gets                                4.38E+07    6,256,828.1    12.1
Disk Reads                                        0            0.0     0.0
Parse Calls                                       7            1.0     0.0
Rows                                             79           11.3     N/A
User I/O Wait Time (ms)                           0            N/A     N/A
Cluster Wait Time (ms)                            0            N/A     N/A
Application Wait Time (ms)                        0            N/A     N/A
Concurrency Wait Time (ms)                        0            N/A     N/A
Invalidations                                     0            N/A     N/A
Version Count                                     2            N/A     N/A
Sharable Mem(KB)                                748            N/A     N/A

Any ideas as to what is wrong with the above statistics? And what should I do next with it?
Thanks.
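Since the snapshots are already in the repository, the captured plans and statement text for each offender can be pulled straight out of AWR. A minimal sketch (the SQL_ID is whatever the AWR SQL report lists for the statement):

```sql
-- Show all captured plans for one statement from the AWR repository
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));
```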

Here is the plan for one of the queries:
| Id  | Operation                            | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                     |                         |       |       |  9628 (100)|          |
|   1 |  VIEW                                |                         |    73 | 58546 |  9628   (1)| 00:01:56 |
|   2 |   WINDOW SORT PUSHED RANK            |                         |    73 | 22630 |  9628   (1)| 00:01:56 |
|   3 |    FILTER                            |                         |       |       |            |          |
|   4 |     NESTED LOOPS                     |                         |    73 | 22630 |  9627   (1)| 00:01:56 |
|   5 |      NESTED LOOPS                    |                         |    73 | 20586 |  9554   (1)| 00:01:55 |
|   6 |       NESTED LOOPS OUTER             |                         |    72 | 15552 |  9482   (1)| 00:01:54 |
|   7 |        NESTED LOOPS                  |                         |    72 | 13320 |  9410   (1)| 00:01:53 |
|   8 |         NESTED LOOPS                 |                         |    72 | 12168 |  9338   (1)| 00:01:53 |
|   9 |          NESTED LOOPS                |                         |  4370 |   277K|    29   (0)| 00:00:01 |
|  10 |           TABLE ACCESS BY INDEX ROWID| test_ORG                |     1 |    34 |     2   (0)| 00:00:01 |
|  11 |            INDEX UNIQUE SCAN         | test_ORG_PK             |     1 |       |     1   (0)| 00:00:01 |
|  12 |           TABLE ACCESS FULL          | test_USER               |  4370 |   132K|    27   (0)| 00:00:01 |
|  13 |          TABLE ACCESS BY INDEX ROWID | REF_CLIENT_FOO_ACCT     |     1 |   104 |     7   (0)| 00:00:01 |
|  14 |           INDEX RANGE SCAN           | RCFA_test_ORG_IDX       |   165 |       |     2   (0)| 00:00:01 |
|  15 |         TABLE ACCESS BY INDEX ROWID  | test_ACCOUNT            |     1 |    16 |     1   (0)| 00:00:01 |
|  16 |          INDEX UNIQUE SCAN           | test_CUSTODY_ACCOUNT_PK |     1 |       |     0   (0)|          |
|  17 |        TABLE ACCESS BY INDEX ROWID   | test_USER               |     1 |    31 |     1   (0)| 00:00:01 |
|  18 |         INDEX UNIQUE SCAN            | test_USER_PK_IDX        |     1 |       |     0   (0)|          |
|  19 |       TABLE ACCESS BY INDEX ROWID    | REF_FOO                 |     1 |    66 |     1   (0)| 00:00:01 |
|  20 |        INDEX UNIQUE SCAN             | REF_FOO_PK              |     1 |       |     0   (0)|          |
|  21 |      TABLE ACCESS BY INDEX ROWID     | REF_FOO_FAMILY          |     1 |    28 |     1   (0)| 00:00:01 |
|  22 |       INDEX UNIQUE SCAN              | REF_FOO_FAMILY_PK       |     1 |       |     0   (0)|          |
40 rows selected.
SQL>
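The AWR rows show millions of buffer gets per execution with zero disk reads, so the time is CPU spent on logical I/O; in the plan, the full scan of test_USER at step 12 sits under a nested loop, so it may be repeated once per driving row. One way to confirm where the gets actually go (a sketch; the `gather_plan_statistics` approach works on 10.2) is to re-run the statement with row-source statistics enabled:

```sql
-- Enable runtime row-source statistics for this session
ALTER SESSION SET statistics_level = ALL;

SELECT /*+ gather_plan_statistics */ ...;   -- the slow query goes here

-- Report actual rows, starts and buffer gets per plan step
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing the A-Rows and Buffers columns against the estimates above should show which join step is doing the work.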

Similar Messages

  • Performing sql queries in java without using java libraries

I wonder whether it's possible to perform SQL queries, from creating tables to update queries, without using the java.sql library.
Has anyone written such code?

You could use JNI to talk to a native driver like the Oracle OCI driver. Doing this is either exciting or asking for trouble, depending on your attitude to lots of low-level bugs.

  • Generating XML from SQL queries and saving to an xml file?

    Hi there,
    I was wondering if somebody could help with regards to the following:
Generating XML from SQL queries and saving to an XML file?
    We want to have a procedure(PL/SQL) that accepts an order number as an input parameter(the procedure
    is accessed by our software on the client machine).
    Using this order number we do a couple of SQL queries.
    My first question: What would be our best option to convert the result of the
    queries to xml?
    Second Question: Once the XML has been generated, how do we save that XML to a file?
    (The XML file is going to be saved on the file system of the server that
    the database is running on.)
Now our procedure will also have an output parameter which returns the filename to us, e.g. Order1001.xml.
    Our software on the client machine will then ftp this XML file(based on the output parameter[filename]) to
    the client hard drive.
    Any information would be greatly appreciated.
    Thanking you,
    Francois

    If you are using 9iR2 you do not need to do any of this..
    You can create an XML as an XMLType using the new SQL/XML operators. You can insert this XML into the XML DB repository using DBMS_XDB.createResource. You can then access the document from the resource. You can also return the XMLType containing the XML directly from the PL/SQL Procedure.
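For illustration, a minimal sketch of both steps under 9iR2 (the ORDERS table, its columns, and the repository path are all hypothetical):

```sql
DECLARE
  v_xml XMLTYPE;
  v_ok  BOOLEAN;
BEGIN
  -- Build the XML with the SQL/XML operators (hypothetical ORDERS table)
  SELECT XMLElement("Order",
           XMLAttributes(o.order_no AS "number"),
           XMLForest(o.customer_name AS "Customer", o.total AS "Total"))
    INTO v_xml
    FROM orders o
   WHERE o.order_no = 1001;

  -- Store it in the XML DB repository; the file name could then be
  -- returned through the procedure's OUT parameter
  v_ok := DBMS_XDB.createResource('/public/Order1001.xml', v_xml);
  COMMIT;
END;
/
```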

  • Generating XML from SQL queries and saving to a xml file?

    Hi there,
    I was wondering if somebody could help with regards to the following:
Generating XML from SQL queries and saving to an XML file?
    We want to have a stored procedure(PL/SQL) that accepts an order number as an input parameter(the procedure
    is accessed by our software on the client machine).
    Using this order number we do a couple of SQL queries.
    My first question: What would be our best option to convert the result of the
    queries to xml?
    Second Question: Once the XML has been generated, how do we save that XML to a file?
    (The XML file is going to be saved on the file system of the server that
    the database is running on.)
Now our procedure will also have an output parameter which returns the filename to us, e.g. Order1001.xml.
    Our software on the client machine will then ftp this XML file(based on the output parameter[filename]) to
    the client hard drive.
    Any information would be greatly appreciated.
    Thanking you,
    Francois

Hi
Here is an example of some code that I am using on Oracle 8.1.7.
The create_file procedure is the one that creates the file.
The other procedures are utility procedures that can be used with any XML file.
    PROCEDURE create_file_with_root(po_xmldoc OUT xmldom.DOMDocument,
    pi_root_tag IN VARCHAR2,
                                            po_root_element OUT xmldom.domelement,
                                            po_root_node OUT xmldom.domnode,
                                            pi_doctype_url IN VARCHAR2) IS
    xmldoc xmldom.DOMDocument;
    root xmldom.domnode;
    root_node xmldom.domnode;
    root_element xmldom.domelement;
    record_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    xmldom.setDoctype(xmldoc, pi_root_tag, pi_doctype_url,'');
    -- Create the root --
    root := xmldom.makeNode(xmldoc);
    -- Create the root element in the file --
    create_element_and_append(xmldoc, pi_root_tag, root, root_element, root_node);
    po_xmldoc := xmldoc;
    po_root_node := root_node;
    po_root_element := root_element;
    END create_file_with_root;
    PROCEDURE create_element_and_append(pi_xmldoc IN OUT xmldom.DOMDocument,
    pi_element_name IN VARCHAR2,
                                            pi_parent_node IN xmldom.domnode,
                                            po_new_element OUT xmldom.domelement,
                                            po_new_node OUT xmldom.domnode) IS
    element xmldom.domelement;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    element := xmldom.createElement(pi_xmldoc, pi_element_name);
    child_node := xmldom.makeNode(element);
    -- Append the new node to the parent --
    newelenode := xmldom.appendchild(pi_parent_node, child_node);
    po_new_node := child_node;
    po_new_element := element;
    END create_element_and_append;
FUNCTION create_text_element(pio_xmldoc IN OUT xmldom.DOMDocument, pi_element_name IN VARCHAR2,
pi_element_data IN VARCHAR2, pi_parent_node IN xmldom.domnode) RETURN xmldom.domnode IS
parent_node xmldom.domnode;
child_node xmldom.domnode;
child_element xmldom.domelement;
textele xmldom.DOMText;
compnode xmldom.DOMNode;
BEGIN
create_element_and_append(pio_xmldoc, pi_element_name, pi_parent_node, child_element, child_node);
parent_node := child_node;
-- Create a text node --
textele := xmldom.createTextNode(pio_xmldoc, pi_element_data);
child_node := xmldom.makeNode(textele);
-- Link the text node to the new element --
compnode := xmldom.appendChild(parent_node, child_node);
-- Return the new element node --
RETURN parent_node;
END create_text_element;
    PROCEDURE create_file IS
    xmldoc xmldom.DOMDocument;
    root_node xmldom.domnode;
    xml_doctype xmldom.DOMDocumentType;
    root_element xmldom.domelement;
    record_element xmldom.domelement;
    record_node xmldom.domnode;
    parent_node xmldom.domnode;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    textele xmldom.DOMText;
    compnode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    create_file_with_root(xmldoc, 'root', root_element, root_node, 'test.dtd');
    xmldom.setAttribute(root_element, 'interface_type', 'EXCHANGE_RATES');
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mr', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'Joe', record_node);
    parent_node := create_text_element(xmldoc,'surname', 'Blogs', record_node);
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mrs', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'A', record_node);
    parent_node := create_text_element(xmldoc, 'surname', 'B', record_node);
-- Write the newly created DOM document to a file on the server --
xmldom.writeTofile(xmldoc, 'c:\laiki\willow_data\test.xml');
    EXCEPTION
    WHEN xmldom.INDEX_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Index Size error');
    WHEN xmldom.DOMSTRING_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'String Size error');
    WHEN xmldom.HIERARCHY_REQUEST_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Hierarchy request error');
    WHEN xmldom.WRONG_DOCUMENT_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Wrong doc error');
    WHEN xmldom.INVALID_CHARACTER_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Invalid Char error');
WHEN xmldom.NO_DATA_ALLOWED_ERR THEN
RAISE_APPLICATION_ERROR(-20120, 'No data allowed error');
    WHEN xmldom.NO_MODIFICATION_ALLOWED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'No mod allowed error');
    WHEN xmldom.NOT_FOUND_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not found error');
    WHEN xmldom.NOT_SUPPORTED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not supported error');
    WHEN xmldom.INUSE_ATTRIBUTE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'In use attr error');
    WHEN OTHERS THEN
    dbms_output.put_line('exception occured' || SQLCODE || SUBSTR(SQLERRM, 1, 100));
    END create_file;

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend
and WebI and Xcelsius as frontend. As part of this we are experiencing
    very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing ok performance during selection of data
    and traditional WebI filtering - however when using the BW hierarchy
    for navigation within WebI, response times are significantly increasing.
    The general solution setup are as follows:
    1) Business Content version of the personnel administration
infoprovider - 0PA_C01. The infoprovider contains 30,000 records
    2) Multiprovider to act as semantic Data Mart layer in BW.
    3) Bex Query to act as Data Mart Query and metadata exchange for BOE.
    All key figure restrictions and calculations are done in this Data Mart
    Query.
4) Traditional BO OLAP universe 1:1 mapped to the Bex Data Mart query. No
    calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
As we are aware that performance is a very subjective issue, we have
created several test scenarios with different dataset sizes, various
filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance
    tuning techniques including aggregates, physical partitioning and pre-
    calculation - all without any luck (pre-calculation doesn't seem to
    work at all as WebI apparently isn't using the BW OLAP cache).
In general the best result we can get is with a completely stripped WebI report without any variables etc.
and a total dataset of 1000 records transferred to WebI. Even in this scenario we can't get
each navigational step (when using drill-down on the Organizational Unit
hierarchy - 0ORGUNIT) to perform faster than 15-20 seconds per
navigational step.
That is, each navigational step takes 15-20 seconds
with only 1000 records in the WebI cache when using drill-down on the org.
unit hierarchy!
Running the same Bex query from Bex Analyzer with a full dataset of
30,000 records at the lowest level of detail returns in 1-2
seconds per navigational step, thus ruling out a BW
modeling issue.
As our productive scenario obviously involves a far larger dataset, as
well as separate data from CATS and PT infoproviders, we are very
worried about whether we will ever be able to utilize hierarchy
drill-down from WebI.
The question is therefore whether there are any known performance issues
related to the use of BW hierarchy drill-down from WebI and, if so,
whether there are any ways to get around them.
    As an alternative we are currently considering changing our reporting
    strategy by creating several higher aggregated reports to avoid
    hierarchy navigation at all. However we still need to support specific
    division and their need to navigate the WebI dataset without
    limitations which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done it.
    enable query stripping in WebI: Done it.
upgrade your BW to SP09: Does SP09 have some improvements in relation to this point?
    use more runtime query filters. : Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB)
Uncheck preliminary Hierarchy presentation in Query; only selected.
Check "Use query drill" in WebI properties.
Sorry for the mixed message, but while I was answering I tried what you suggested regarding suppressing unassigned nodes, and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • ? in SQL Queries and not using prepared statements

    Using EclipseLink 1.1.1
    Prepared Statements are disabled
    In our production server something went wrong and one of our Read Queries started erroring. A DatabaseException was thrown and we log the "getQuery.getSQLString()" statement. We found this query in the logs:
SELECT t1.ID, t1.NAME, t1.DESCRIPTION, t1.EXTREFID, t1.STYLESHEET, t1.DDSVNVERSION, t1.FIRSTNAME, t1.LASTNAME, t1.EMAILADDR, t1.PHONENUMBER, t1.ADDRESS, t1.ADDRESS2, t1.CITY, t1.STATE, t1.POSTALCODE, t1.COUNTRY, t1.ADMINACCTNAME, t1.HASDOCUMENTS, t1.HASTIMEDNOTIFICATIONS, t1.STATUS, t1.ENTRYDATE, t1.EVALEXPDATE, t1.LASTREMINDDATE, t1.FULLUSERS, t1.LIMUSERS, t1.REQUSERS, t1.ISENTERPRISE, t1.EXPDATE, t1.ISDISABLED, t1.DISABLEDDATE, t1.NEEDLICENSEAGREEMENT, t1.ISWARNINGDISABLED, t1.LOCALE, t1.TIMEZONE, t1.CURRENCY, t1.DOMAIN, t1.DOCUMENTSIZE, t1.EXTRADOCUMENTSTORAGE, t1.ONDEMANDOPTIONS, t1.SSOTYPE, t1.RESELLERID, t1.ACCOUNTREPID, t1.LASTUSAGEREPORTDATE, t1.NEXTUSAGEREPORTDATE, t1.USAGEREPORTATTEMPTS FROM T_SSOOPTIONS t0, T_CUSTOMERS t1 WHERE ((((t0.SSOENABLED = ?) AND (t1.SSOTYPE IN (?, ?))) AND (UPPER(t1.DOMAIN) = ?)) AND (t0.CUSTOMERID = t1.ID))
Notice the values weren't entered into the where clause. We had to bounce the application to fix the problem. I've never seen this before. I've added more debugging statements to the code, so if this happens again in the future I'll have more information to report. In the meantime I'm wondering if anyone else has ever seen a problem of this nature.

    Database error due to invalid SQL statement.
    I don't have a stack, we were catching the exception and not printing the stack :(
    Like I mentioned in my first post, I added more debugging code (e.printStackTrace()). I understand this is hard to track down without more information. I was just hoping you guys had seen something like this before and had any insight. Like I mentioned before: this is on our production server. I've never seen this type of error before. That particular server (we run in a cluster mode) had been up for several days and then started generating that error. IT bounced the node and everything went back to normal. We have been using toplink for about 5 years now and have never seen this problem, until August 3rd 2009. The only thing that has changed recently is our migration from toplink 10 to EclipseLink. I was wondering if anyone knows if anything had changed in EclipseLink/toplink 11 with the generation of SQL queries.
    I'll keep looking. There is more debugging code in there now. Since the error was "Database error due to invalid SQL statement" this implies the SQL was generated, exited that part of the code and was sent to the db where it failed. I'm afraid the printStackTrace won't help if this error happens again.

  • Reg : SQL Queries and Global warming -

    Hi Experts,
    I got a doubt while I was going through the Mar/Apr-2013 edition of Oracle Magazine. http://www.oracle.com/technetwork/issue-archive/2013/13-mar/o23peer-1906471.html
    Article - <tt>'Peer-To-Peer'</tt> on Oracle ACE - Satyendra Kumar Pasalapudi where he says :
    >
    Q - What green practices do you use in your work?
    A - I always avoid excessive disk I/Os by optimizing queries and creating indexes, which in turn saves power by avoiding unnecessary disk spindles and excessive heat dissemination in the data centers. And I use my Kindle extensively, to save paper.
    >
    This sounds really interesting.
What I'm not getting is the concept of <tt>'Disk Spindles'</tt>. What is that exactly, and how does it cause heat dissemination?
    Help much appreciated!
    Thanks,
    Ranit

    rp0428 wrote:
    >
    Not everything written on the 'Net is true.
    >
    Now you are confusing us!
    I just read your statement above on the net but if not everything written on the 'Net' is true then your statement might be false.
    But if your statement is false, then everything written on the 'Net' is true.
    Which means your statement must be true!
My head is spinning! :D
Guy: Where did you hear that?
    Girl: On the internet
    Guy: And you believed it?
    Girl: Sure. They can't put anything on the internet that isn't true
    Guy: Where did you hear that?
    Girl: On the internet. Oh, here comes my date. He's a French Model.
    Guy 2 (with sly wink) : Bonjour

  • SQL queries and projection

    Hi,
    we need to use the sql-query functionality in kodo for some of our
    queries. When doing this, we would like to fetch the information for the
    candidate object out together with the primary key as in:
    select id, name
    from mytable
    where some-condition
mapping to an object X having the name as a field and id as the primary key. The
benefit here is that the name is then stored inside the
persistence-capable object and is also stored in Kodo's cache.
    In a few queries, we furthermore need to get extra information from the
    query. Is this possible?? For instance:
    select id, name, count(*)
    from mytable
    where some-condition
    group by xxxx
I know that I can map this to a custom result class, but is it possible to
get it out as a persistence-capable object X together with the count? And if so,
how?

    Henning-
    Unfortunately, you can't mix in a persistent class with other data in a
single query using the ResultClass mechanism. You'll need to either
use a custom ResultObjectProvider to perform the extra interpretation,
or else execute separate queries.
--
    Marc Prud'hommeaux
    SolarMetric Inc.

  • Poor performance when Distinct and Order By Used

    Hello,
I am getting a slow response when I add Distinct and Order By to the query:
Without Distinct and Order By it takes 3.57 seconds; with Distinct and Order By it takes 28.15 seconds, which is too much for our app.
    The query is:
    select distinct CC.acceso, CC.ext_acceso, TIT.TITULO_SALIDA
    from (((Ocurrencias CT01 inner join
    palabras p0 on (CT01.cod_palabra = p0.cod_palabra and p0.palabra like 'VENEZUELA%' AND p0.campo = 'AUTOR')) INNER JOIN
    CENTRAL CC ON (CT01.ACCESO = CC.ACCESO AND CT01.EXT_ACCESO = CC.EXT_ACCESO))) inner join
    codtit ctt on (CC.acceso = ctt.acceso and CC.ext_acceso = ctt.ext_acceso) inner join
    titulos tit on (ctt.cod_titulo = tit.cod_titulo and ctt.portada = '1')
    where CC.nivel_reg <> 's'
    ORDER BY 3 ASC;
    The query plan for the query WITH Distinct and Order By is:
    Elapsed: 00:00:28.15
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=301 Card=47 Bytes=12220)
    1 0 SORT (ORDER BY) (Cost=301 Card=47 Bytes=12220)
    2 1 SORT (UNIQUE) (Cost=300 Card=47 Bytes=12220)
    3 2 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
    4 3 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
    5 4 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
    6 5 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
    7 6 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
    8 7 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
    9 6 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
    10 5 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
    11 10 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
    12 4 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
    13 12 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
    14 3 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
    15 14 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
    Statistics
    154 recursive calls
    0 db block gets
    32070 consistent gets
    1622 physical reads
    0 redo size
    305785 bytes sent via SQL*Net to client
    2807 bytes received via SQL*Net from client
    212 SQL*Net roundtrips to/from client
    10 sorts (memory)
    0 sorts (disk)
    3149 rows processed
    The query plan for the query WITHOUT Distinct and Order By is:
    Elapsed: 00:00:03.57
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=299 Card=47 Bytes=12220)
    1 0 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
    2 1 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
    3 2 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
    4 3 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
    5 4 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
    6 5 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
    7 4 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
    8 3 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
    9 8 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
    10 2 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
    11 10 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
    12 1 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
    13 12 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
    Statistics
    3376 recursive calls
    0 db block gets
    33443 consistent gets
    1061 physical reads
    0 redo size
    313751 bytes sent via SQL*Net to client
    2807 bytes received via SQL*Net from client
    422 SQL*Net roundtrips to/from client
    90 sorts (memory)
    0 sorts (disk)
    3149 rows processed
I would appreciate it a lot if somebody could tell me how to improve the performance of the query with Distinct and Order By.
    Thank you very much,
    Icaro Alzuru C.
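The slow plan pays for a SORT (UNIQUE) followed by a SORT (ORDER BY) on wide rows (Bytes=12220 vs 4165 one join earlier, mostly from the 175-byte TITULOS rows). One approach worth trying, as an untested sketch built only from the columns shown in the post, is to deduplicate before joining TITULOS so the sort works on fewer and narrower rows. Note it is not strictly equivalent: it could return extra rows if two cod_titulo values share the same titulo_salida.

```sql
-- Untested sketch: push DISTINCT into an inline view before the
-- join to TITULOS, so the SORT UNIQUE sees narrow (acceso,
-- ext_acceso, cod_titulo) rows instead of wide title rows
SELECT dt.acceso, dt.ext_acceso, tit.titulo_salida
  FROM (SELECT DISTINCT cc.acceso, cc.ext_acceso, ctt.cod_titulo
          FROM ocurrencias ct01
          JOIN palabras p0
            ON ct01.cod_palabra = p0.cod_palabra
           AND p0.palabra LIKE 'VENEZUELA%'
           AND p0.campo = 'AUTOR'
          JOIN central cc
            ON ct01.acceso = cc.acceso
           AND ct01.ext_acceso = cc.ext_acceso
          JOIN codtit ctt
            ON cc.acceso = ctt.acceso
           AND cc.ext_acceso = ctt.ext_acceso
         WHERE cc.nivel_reg <> 's'
           AND ctt.portada = '1') dt
  JOIN titulos tit
    ON dt.cod_titulo = tit.cod_titulo
 ORDER BY 3 ASC;
```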

    12 1 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
    13 12 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
    Statistics
    3376 recursive calls
    0 db block gets
    33443 consistent gets
    1061 physical reads
    0 redo size
    313751 bytes sent via SQL*Net to client
    2807 bytes received via SQL*Net from client
    422 SQL*Net roundtrips to/from client
    90 sorts (memory)
    0 sorts (disk)
    3149 rows processed
    I would appreciate it a lot if somebody could tell me how to improve the performance of the query with Distinct and Order By.
    Thank you very much,
    Icaro Alzuru C.
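    [Editor's note: not an authoritative fix, but one rewrite worth trying, sketched against the schema shown in the question: do the DISTINCT on the narrow key columns inside an inline view, then join to TITULOS and sort. The SORT (UNIQUE) then operates on short key rows instead of 175-byte TITULOS rows. This assumes cod_titulo determines TITULO_SALIDA; if two cod_titulo values for the same acceso/ext_acceso share a title, the result could differ from the original DISTINCT.]

    ```sql
    -- Sketch only: deduplicate on the narrow keys first,
    -- then join to TITULOS and sort the (smaller) result.
    select dv.acceso, dv.ext_acceso, tit.titulo_salida
    from (
        select distinct cc.acceso, cc.ext_acceso, ctt.cod_titulo
        from ocurrencias ct01
        inner join palabras p0
                on ct01.cod_palabra = p0.cod_palabra
               and p0.palabra like 'VENEZUELA%'
               and p0.campo = 'AUTOR'
        inner join central cc
                on ct01.acceso = cc.acceso
               and ct01.ext_acceso = cc.ext_acceso
        inner join codtit ctt
                on cc.acceso = ctt.acceso
               and cc.ext_acceso = ctt.ext_acceso
               and ctt.portada = '1'
        where cc.nivel_reg <> 's'
    ) dv
    inner join titulos tit
            on dv.cod_titulo = tit.cod_titulo
    order by 3 asc;
    ```

    Given that both runs report 0 disk sorts and similar consistent gets, it would also be worth re-running with SQL trace enabled to confirm where the extra time actually goes before restructuring the query.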

  • Poor performance by Matlab and Windows benchmark tests

    Hello
    I have a Lenovo ThinkPad W520 with Windows 7 64-bit installed. Compared to other notebooks with comparable hardware, my Lenovo is very slow. To confirm this, I ran a MATLAB (64-bit) benchmark test and the Windows 7 benchmark test. In both tests, my Lenovo did worse than the other notebooks. During the tests I set the Power Manager to performance and activated the Lenovo turbo boost.
    Now I want to ask: are there any settings, perhaps in the BIOS, to speed the laptop up? Or why is the performance so bad even though the hardware is very good?
    Thanks for your help

    Thank you very much for your reply!
    I updated the BIOS from version 1.27 to 1.32. Now the benchmark tests all look great!!
    Thanks

  • Poor performance - QT Player and Safari

    Ok, so I've been ignoring this issue on my computer for the longest time and I think it's time I got to the bottom of it.
    I'm using a Quicksilver G4, 733MHz, with 896MB RAM.
    Now, previously, prior to upgrading to Tiger, using Panther and QT 6.5 something, Quicktime worked pretty much flawlessly. I was so happy with it. After upgrading to Tiger such a long time ago, I noticed QT 7 wasn't nearly up to par and its performance was very bad.
    The real issue here is this: Whenever I watch a streaming movie downloaded from the internet, in Safari, it runs choppy. Choppy as in the framerate stutters considerably, the video will pause while the audio still plays, and it basically renders the video unwatchable. The workaround I've been doing all this time, is: downloading the video onto my desktop and playing it through VLC. (Which it does absolutely flawlessly, mind you.)
    Oh but it gets better - when I play these EXACT same video files in Quicktime Player - I get "improved" performance, but it still does the same thing: choppy video playback. Now, when I play these same streaming files in Firefox - they work, smooth as a baby's bottom. Quicktime's streaming plug-in works great. Doesn't make sense...
    On top of ALL that: After this problem was already engraved in my head after trying numerous troubleshooting (reinstalling latest versions of Quicktime, clearing caches, making a test user account, A&I, etc....) I eventually upgraded my computer, and put another internal 250GB HD in it. So I was basically starting from scratch - a complete erase and install of the HD.
    Lo and behold, Quicktime STILL did it. - same exact problem. So I gave up for a while and have been using VLC ever since. Or Firefox.
    So that's my problem guys - any suggestions would be nice, or if you'd like, just tell me to keep using VLC and/or Firefox as workarounds and don't even try to fix it. ¬_¬
    Quicksilver Powermac G4 733Mhz   Mac OS X (10.4.8)   894MB RAM, 250GB & 40GB HDs

    I do not use Safari. However, I found this troubleshooting technique in the MacAddict Magazine December 06 issue:
    To fix a QT related problem w/Safari, quit Safari & look in the /Library/Internet Plug-Ins folder (this is the Library folder at the root level of your HD, not the one inside your Users/user name folder).
    Try removing the VLC Plugin.plugin (if you have it) & QT Plugin.plugin files from this folder, & then relaunch Safari. If the problem isn’t fixed, a different plug-in may be to blame. Keep removing files from this folder (start with any third-party plug-ins that are present) until the problem disappears.
    Replace the plug-ins that don’t cause any problems.

  • SQL Inserting a Set Of Data To Run SQL Queries and Produce a Data Table

    Hello,
    I am an absolute newbie to SQL.
    I have purchased the book "Oracle 10g The Complete Reference" by Kevin Loney.
    I have read through the introductory chapters regarding relational databases and am attempting to copy and paste the following data and run an SQL search from the preset data.
    However, when I attempt to start my database and enter the data by copying and pasting it into the Windows command prompt (C:\Windows\system32>) - everything from rem ******************* onwards -
    I receive the following message: drop table Newspaper;
    'drop' is not recognised as an external or internal command, operable programme or batch file.
    Any idea how one would overcome this initial problem?
    Many thanks for any solutions to this problem.
    rem *******************
    rem The NEWSPAPER Table
    rem *******************
    drop table NEWSPAPER;
    create table NEWSPAPER (
    Feature VARCHAR2(15) not null,
    Section CHAR(1),
    Page NUMBER
    );
    insert into NEWSPAPER values ('National News', 'A', 1);
    insert into NEWSPAPER values ('Sports', 'D', 1);
    insert into NEWSPAPER values ('Editorials', 'A', 12);
    insert into NEWSPAPER values ('Business', 'E', 1);
    insert into NEWSPAPER values ('Weather', 'C', 2);
    insert into NEWSPAPER values ('Television', 'B', 7);
    insert into NEWSPAPER values ('Births', 'F', 7);
    insert into NEWSPAPER values ('Classified', 'F', 8);
    insert into NEWSPAPER values ('Modern Life', 'B', 1);
    insert into NEWSPAPER values ('Comics', 'C', 4);
    insert into NEWSPAPER values ('Movies', 'B', 4);
    insert into NEWSPAPER values ('Bridge', 'B', 2);
    insert into NEWSPAPER values ('Obituaries', 'F', 6);
    insert into NEWSPAPER values ('Doctor Is In', 'F', 6);

    You need to be in SQL*Plus logged in as a user.
    This page which I created for my Oracle students may be of some help:
    http://www.morganslibrary.org/reference/setup.html
    But to logon using "/ as sysdba" you must be in SQL*Plus (sqlplus.exe).
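    [Editor's note: to make the workflow concrete, here is a minimal sketch of the intended session. The user name, password, and script path below are placeholders, not values from the book. Save the script as, say, newspaper.sql, start SQL*Plus from the command prompt, then run the script from the SQL> prompt:]

    ```sql
    -- At the Windows command prompt (NOT inside cmd.exe pasting SQL directly):
    --   C:\> sqlplus your_user/your_password
    --
    -- Then, at the SQL> prompt, run the saved script and query the table:
    @C:\sqlscripts\newspaper.sql

    select Feature, Section, Page
      from NEWSPAPER
     where Section = 'B'
     order by Page;
    ```

    The key point is that commands like drop table and insert are SQL, understood only by the database client, not by the Windows shell - which is why cmd.exe reported 'drop' is not recognised.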

  • Poor performance with Yosemite and early 2009 Mac Pro

    I have an early 2009 Mac Pro with the following specs:
    - 2.66 GHz Quad Core Intel Xeon
    - 10 GB of 1066 MHz RAM
    - NVidia GeForce GT 120 512 MB
    - 256 GB solid state drive for my system partition
    - Two monitors connected, each at 1680x1050 resolution
    Back when I was running OS X 10.7 or 10.8 I found that for everyday tasks the performance of my computer was adequate. However, starting around 10.9, and even worse since upgrading to 10.10, things have gotten painfully slow. To give an example, activating Mission Control can take upwards of four seconds, with the animation being very choppy. Changing tabs on a Finder window can take two seconds for the switch to happen. Just switching between different windows, it can take several seconds for a window to activate. It's gotten to the point where I'm having difficulty working. So I'm thinking of upgrading some of my hardware.
    Given my specs the weakest link seems to be my graphics card, and all of these issues do seem to be related to graphics. So my questions are:
    - Do you think upgrading my graphics card will substantially improve things, and is there anything else I should upgrade?
    - Is this slowness just the result of the computer being nearly six years old, and no upgrades will really improve things that much?
    Thanks in advance!

    Between Setup Assistant, and your existing system "untouched" (or use CCC if say you want to use an existing SSD for the system) there is no reason it should be a lot of work setting up. Have you ever used Migration or Setup? it has also gotten better.
    Also, having 10.9.5 on another drive and running DU - and TRIM _now_ would be helpful.
    Looking at just what gremlins you have running around inside your current system is not bad but.... sometimes the "long road" turns a shortcut into a dead end, and avoiding doing what seems the long road and hardest gets you where you want to go: a solid stable system.
    Less is more. Most systems have more than needed and they get in the way and can cause trouble. Even handy "widgets" and those things that monitor system functions, even disk status. Which is why I like seeing a separate small system maintenance volume just for the weekly checkup. 30GB is more than enough so just slice out a partition somewhere - on another drive/device.
    Those things matter more, and cost a lot less, than a new GPU. If your SSD is two years old, the Samsung 840 EVO is down under $120 for 250GB; or use one for Lightroom / Aperture / iPhoto or scratch.
    One person was complaining about a sluggish-window issue and thought it was the driver. It turned out it happened in ONE APP, not everywhere - very telling - and the app in question needed an update. Adobe updated CC (for Windows) last month to finally support dual Dxx and some of the newer AMD GPUs - can the Mac be far behind?
    10GB RAM? that would not be 3 x 4GB or any combination using triple channel memory.

  • Poor performance after format and OS install.

    GT70 0ne
    3610qm
    680m
    24g Kingston ram
    Win7 64
    Yesterday I formatted my drive and installed win7 64, after installing all the (old) drivers on the MSI website (link below), in the correct order, then waiting a good 8 hours for all the windows updates to install, I booted up WoW and found horrible FPS. I ran 3dmarks 11 and scored a dismal 2600 with my graphic and combined scores around 2500 and the physics at the normal 8500.
    Searched online for some solutions, put the 680m as the default for WoW and 3dmarks but with no difference. I went ahead and got the latest drivers for the 680m off the Nvidia website and saw no improvement.
    I did not install any utilities from the MSI website, are any of them needed for the proper function of the laptop or are they all fluff?
    I did not install any firmware either. Is it needed?
    Should I download the latest drivers for all the components or stick with MSI's?
    The laptop itself is extremely quick when not needing graphical power.
    Thank you.

    The following factors will affect the performance score:
    - Using AC or DC.
    - Power plans (Windows). High-performance or Balanced mode.
    - Numbers of background processes.
    - Wi-Fi/BT on or off.
    - Improvements from newer BIOS, EC, or Video BIOS versions.
    - Screen brightness (slightly affected).
    Your score is indeed a little lower. Check the points above to improve it.

  • Poor performance in iMovie and iDVD in the UK. Anyone noticed?

    Hello everyone,
    I've made several very successful DVDs using stills for UK TVs with the PAL format, 1280 x 720 pixels. I'm running iLife 06, latest everything for Tiger.
    I use Keynote - export to Quicktime, custom settings, HDV 720p, Apple intermediate codec, 25 fps, for graphical introduction pieces.
    I also use Photo 2 Movie for the stills with pan & scan etc., again Pal format, exported for iMovie.
    These elements are then dropped into an iMovie project for the relevant subject and audio added. This project is then saved as a Full Quality QuickTime movie and dropped into a final assembly iMovie project, which is in turn sent to iDVD. This keeps the visual timeline manageably short.
    On my last projects, this gave a stunning result. 4 Quicktime upgrades later and using the same procedure, motion now seems slightly jerky even though all the frame rates for PAL are correct. Before I complete the project, has anyone else noticed this drop in quality? I don't want to waste time if I should be doing something differently.
    I've also noticed that once again, you can't use purchased music in an iMovie project - it freezes the video. Apple had this bug before and fixed it, now it's back. I don't know whether the two are connected.
    Can anyone give me any advice? Thanks.

    Why are you making a HD project if you then burn it as standard definition DVD via iDVD?
    iDVD has to downsample the HD to SD. I haven't experimented how well iDVD manages to do that.
    Have you tried to do a standard definition iMovie PAL project and feed that to a PAL iDVD project? Yes, I strongly recommend using Photo To Movie's High Quality PAL DV export for the slideshows because iMovie's slideshows are, erm, suboptimal in quality.
