Performance of SQL in 2 cases

I am confused about the performance of the SQL below in two cases; both are meant to get the total number of records from the innermost select.
In the first case I am using the analytic "COUNT(*) OVER () cnt" function:
SELECT b.lastname,
       b.firstname,
       b.department,
       b.org_relationship,
       b.enterprise_name,
       b.cnt
FROM  (SELECT a.*,
              rownum rn
       FROM  (SELECT p.lastname,
                     p.firstname,
                     porg.department,
                     porg.org_relationship,
                     porg.enterprise_name,
                     COUNT(*) OVER () cnt          -- the analytic COUNT(*) OVER () function
              FROM   t_person p,
                     t_contact c1,
                     t_o_person porg
              WHERE  p.person_id = c1.ref_id(+)
              AND    p.person_id = porg.o_person_id
              ORDER BY upper(p.lastname), upper(p.firstname)
             ) a
       WHERE rownum <= <<max row requested>>
      ) b
WHERE rn > <<min row requested>>
In the second case I am using an inner SELECT (scalar subquery) to get the total count:
SELECT b.lastname,
       b.firstname,
       b.department,
       b.org_relationship,
       b.enterprise_name,
       b.cnt
FROM  (SELECT a.*,
              rownum rn
       FROM  (SELECT p.lastname,
                     p.firstname,
                     porg.department,
                     porg.org_relationship,
                     porg.enterprise_name,
                     (SELECT count(*)              -- inner SELECT to get the total count
                      FROM   t_person p,
                             t_contact c1,
                             t_o_person porg
                      WHERE  p.person_id = c1.ref_id(+)
                      AND    p.person_id = porg.o_person_id) AS cnt
              FROM   t_person p,
                     t_contact c1,
                     t_o_person porg
              WHERE  p.person_id = c1.ref_id(+)
              AND    p.person_id = porg.o_person_id
              ORDER BY upper(p.lastname), upper(p.firstname)
             ) a
       WHERE rownum <= <<max row requested>>
      ) b
WHERE rn > <<min row requested>>
So I wanted to know which option would perform better in the above case: the separate inner SELECT to get the count, or COUNT(*) OVER ()?
Edited by: [email protected] on Mar 10, 2009 12:41 PM

Hi, thanks for the inputs.
Even if I put a filter on the main query, it still shows the total record count.
SELECT *
FROM  (SELECT beta.*,
              rownum AS alpha
       FROM  (SELECT p.lastname,
                     p.firstname,
                     porg.department,
                     porg.org_relationship,
                     porg.enterprise_name,
                     (SELECT count(*)
                      FROM   tmp_person p,
                             tmp_contact c1,
                             tmp_o_person porg
                      WHERE  p.clm_id = '1862'
                      AND    p.person_id = c1.ref_id(+)
                      AND    p.person_id = porg.o_person_id
                      AND    porg.o_org_id = '1862') AS cnt
              FROM   tmp_person p,
                     tmp_contact c1,
                     tmp_o_person porg
              WHERE  p.clm_id = '1862'
              AND    p.person_id = c1.ref_id(+)
              AND    p.person_id = porg.o_person_id
              AND    porg.o_org_id = '1862'
             ) beta
       WHERE rownum <= 100
      )
WHERE alpha >= 1
Plan hash value: 132875926
| Id  | Operation                          | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                   |                    |   100 |   840K|       | 21433   (1)| 00:04:18 |
|*  1 |  VIEW                              |                    |   100 |   840K|       | 21433   (1)| 00:04:18 |
|*  2 |   COUNT STOPKEY                    |                    |       |       |       |            |          |
|   3 |    VIEW                            |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
|*  4 |     SORT ORDER BY STOPKEY          |                    | 22858 |  6875K|    14M| 21433   (1)| 00:04:18 |
|   5 |      MERGE JOIN OUTER              |                    | 22858 |  6875K|       | 18304   (1)| 00:03:40 |
|   6 |       MERGE JOIN                   |                    | 22858 |  4397K|       | 11337   (1)| 00:02:17 |
|   7 |        SORT JOIN                   |                    | 22858 |  3013K|  7192K|  5148   (1)| 00:01:02 |
|*  8 |         TABLE ACCESS FULL          | TMP_PERSON         | 22858 |  3013K|       |  3716   (1)| 00:00:45 |
|*  9 |        SORT JOIN                   |                    | 24133 |  1461K|  3800K|  6189   (1)| 00:01:15 |
|  10 |         TABLE ACCESS BY INDEX ROWID| TMP_ORG_PERSON     | 24133 |  1461K|       |  5535   (1)| 00:01:07 |
|* 11 |          INDEX RANGE SCAN          | TMP_ORG_PERSON_FK1 | 24133 |       |       |   102   (1)| 00:00:02 |
|* 12 |       SORT JOIN                    |                    | 68472 |  7422K|    15M|  6968   (1)| 00:01:24 |
|  13 |        TABLE ACCESS FULL           | TMP_CONTACT        | 68472 |  7422K|       |  2895   (1)| 00:00:35 |
Query Block Name / Object Alias (identified by operation id):
   1 - SEL$2 / from$_subquery$_001@SEL$1
   2 - SEL$2
   3 - SEL$3 / BETA@SEL$2
   4 - SEL$3
   8 - SEL$3 / P@SEL$3
  10 - SEL$3 / PORG@SEL$3
  11 - SEL$3 / PORG@SEL$3
  13 - SEL$3 / C1@SEL$3
Predicate Information (identified by operation id):
   1 - filter("ALPHA">=1)
   2 - filter(ROWNUM<=100)
   4 - filter(ROWNUM<=100)
   8 - filter("P"."CLM_ID"='1862')
   9 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
       filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
  11 - access("PORG"."O_ORG_ID"='1862')
  12 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
       filter("P"."PERSON_ID"="C1"."REF_ID"(+))
Column Projection Information (identified by operation id):
   1 - "from$_subquery$_001"."LASTNAME"[NVARCHAR2,100],
       "from$_subquery$_001"."FIRSTNAME"[NVARCHAR2,100], "from$_subquery$_001"."PERSON_ID"[VARCHAR2,30],
       "from$_subquery$_001"."MIDDLENAME"[NVARCHAR2,100], "from$_subquery$_001"."SOCSECNUMBER"[NVARCHAR2,22],
       "from$_subquery$_001"."BIRTHDAY"[VARCHAR2,10], "from$_subquery$_001"."U_NAME"[NVARCHAR2,100],
       "from$_subquery$_001"."U_ID"[VARCHAR2,30], "from$_subquery$_001"."PERSON_XML_DATA"[VARCHAR2,4000],
       "from$_subquery$_001"."BUSPHONE"[VARCHAR2,4000], "from$_subquery$_001"."EMLNAME"[VARCHAR2,4000],
       "from$_subquery$_001"."ORG_NAME"[VARCHAR2,4000], "from$_subquery$_001"."EMPID"[NVARCHAR2,150],
       "from$_subquery$_001"."EMPSTATUS"[NVARCHAR2,40], "from$_subquery$_001"."DEPARTMENT"[NVARCHAR2,200],
       "from$_subquery$_001"."ORG_RELATIONSHIP"[NVARCHAR2,120],
       "from$_subquery$_001"."ENTERPRISE_NAME"[VARCHAR2,100], "from$_subquery$_001"."TOTAL_RESULTS"[NUMBER,22],
       "ALPHA"[NUMBER,22]
   2 - "BETA"."LASTNAME"[NVARCHAR2,100], "BETA"."FIRSTNAME"[NVARCHAR2,100],
       "BETA"."PERSON_ID"[VARCHAR2,30], "BETA"."MIDDLENAME"[NVARCHAR2,100],
       "BETA"."SOCSECNUMBER"[NVARCHAR2,22], "BETA"."BIRTHDAY"[VARCHAR2,10], "BETA"."U_NAME"[NVARCHAR2,100],
       "BETA"."U_ID"[VARCHAR2,30], "BETA"."PERSON_XML_DATA"[VARCHAR2,4000], "BETA"."BUSPHONE"[VARCHAR2,4000],
       "BETA"."EMLNAME"[VARCHAR2,4000], "BETA"."ORG_NAME"[VARCHAR2,4000], "BETA"."EMPID"[NVARCHAR2,150],
       "BETA"."EMPSTATUS"[NVARCHAR2,40], "BETA"."DEPARTMENT"[NVARCHAR2,200],
       "BETA"."ORG_RELATIONSHIP"[NVARCHAR2,120], "BETA"."ENTERPRISE_NAME"[VARCHAR2,100],
       "BETA"."TOTAL_RESULTS"[NUMBER,22], ROWNUM[4]
   3 - "BETA"."LASTNAME"[NVARCHAR2,100], "BETA"."FIRSTNAME"[NVARCHAR2,100],
       "BETA"."PERSON_ID"[VARCHAR2,30], "BETA"."MIDDLENAME"[NVARCHAR2,100],
       "BETA"."SOCSECNUMBER"[NVARCHAR2,22], "BETA"."BIRTHDAY"[VARCHAR2,10], "BETA"."U_NAME"[NVARCHAR2,100],
       "BETA"."U_ID"[VARCHAR2,30], "BETA"."PERSON_XML_DATA"[VARCHAR2,4000], "BETA"."BUSPHONE"[VARCHAR2,4000],
       "BETA"."EMLNAME"[VARCHAR2,4000], "BETA"."ORG_NAME"[VARCHAR2,4000], "BETA"."EMPID"[NVARCHAR2,150],
       "BETA"."EMPSTATUS"[NVARCHAR2,40], "BETA"."DEPARTMENT"[NVARCHAR2,200],
       "BETA"."ORG_RELATIONSHIP"[NVARCHAR2,120], "BETA"."ENTERPRISE_NAME"[VARCHAR2,100],
       "BETA"."TOTAL_RESULTS"[NUMBER,22]
   4 - (#keys=2) UPPER("P"."LASTNAME")[100], UPPER("P"."FIRSTNAME")[100], "P"."LASTNAME"[NVARCHAR2,100],
       "P"."FIRSTNAME"[NVARCHAR2,100], "P"."PERSON_ID"[VARCHAR2,30], "P"."MIDDLENAME"[NVARCHAR2,100],
       "P"."SOCSECNUMBER"[NVARCHAR2,22], TO_CHAR(INTERNAL_FUNCTION("P"."BIRTHDAY"),'mm-dd-yyyy')[10],
       "P"."USERNAME"[NVARCHAR2,100], "P"."CLM_ID"[VARCHAR2,30],
       "XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(SYS_MAKEXML("P"."SYS_NC00008$"),'/'))[4000],
       "XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(SYS_MAKEXML("C1"."SYS_NC00005$"),'//phone[1]/number/text()')
       )[4000], "XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(SYS_MAKEXML("C1"."SYS_NC00005$"),'//email[2]/addres
       s/text()'))[4000], "XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(SYS_MAKEXML("C1"."SYS_NC00005$"),'//compa
       ny/text()'))[4000], "PORG"."EMPLID"[NVARCHAR2,150], "PORG"."EMPL_STATUS"[NVARCHAR2,40],
       "PORG"."DEPARTMENT"[NVARCHAR2,200], "PORG"."ORG_RELATIONSHIP"[NVARCHAR2,120],
       "PORG"."ENTERPRISE_NAME"[VARCHAR2,100],  (SELECT /*+ */ COUNT(*) FROM "CLM_ORG_PERSON"
       "PORG","CLM_CONTACT" "C1","CLM_PERSON" "P" WHERE "P"."PERSON_ID"="PORG"."O_PERSON_ID" AND
       "P"."CLM_ID"='1862' AND "P"."PERSON_ID"="C1"."REF_ID"(+) AND "PORG"."O_ORG_ID"='1862')[22]
   5 - (#keys=0) "P"."PERSON_ID"[VARCHAR2,30], "PORG"."ENTERPRISE_NAME"[VARCHAR2,100],
       "P"."CLM_ID"[VARCHAR2,30], "P"."FIRSTNAME"[NVARCHAR2,100], "P"."LASTNAME"[NVARCHAR2,100],
       "P"."MIDDLENAME"[NVARCHAR2,100], "P"."SOCSECNUMBER"[NVARCHAR2,22], "P"."BIRTHDAY"[DATE,7],
       "P"."SYS_NC00008$"[LOB,4000], "P"."USERNAME"[NVARCHAR2,100], "PORG"."ORG_RELATIONSHIP"[NVARCHAR2,120],
       "PORG"."EMPLID"[NVARCHAR2,150], "PORG"."DEPARTMENT"[NVARCHAR2,200], "PORG"."EMPL_STATUS"[NVARCHAR2,40],
       "C1"."SYS_NC00005$"[LOB,4000]
   6 - (#keys=0) "P"."PERSON_ID"[VARCHAR2,30], "P"."CLM_ID"[VARCHAR2,30],
       "P"."FIRSTNAME"[NVARCHAR2,100], "P"."LASTNAME"[NVARCHAR2,100], "P"."MIDDLENAME"[NVARCHAR2,100],
       "P"."SOCSECNUMBER"[NVARCHAR2,22], "P"."BIRTHDAY"[DATE,7], "P"."SYS_NC00008$"[LOB,4000],
       "P"."USERNAME"[NVARCHAR2,100], "PORG"."ORG_RELATIONSHIP"[NVARCHAR2,120], "PORG"."EMPLID"[NVARCHAR2,150],
       "PORG"."DEPARTMENT"[NVARCHAR2,200], "PORG"."EMPL_STATUS"[NVARCHAR2,40],
       "PORG"."ENTERPRISE_NAME"[VARCHAR2,100]
   7 - (#keys=1) "P"."PERSON_ID"[VARCHAR2,30], "P"."CLM_ID"[VARCHAR2,30],
       "P"."FIRSTNAME"[NVARCHAR2,100], "P"."LASTNAME"[NVARCHAR2,100], "P"."MIDDLENAME"[NVARCHAR2,100],
       "P"."SOCSECNUMBER"[NVARCHAR2,22], "P"."BIRTHDAY"[DATE,7], "P"."SYS_NC00008$"[LOB,4000],
       "P"."USERNAME"[NVARCHAR2,100]
   8 - "P"."PERSON_ID"[VARCHAR2,30], "P"."FIRSTNAME"[NVARCHAR2,100], "P"."LASTNAME"[NVARCHAR2,100],
       "P"."MIDDLENAME"[NVARCHAR2,100], "P"."SOCSECNUMBER"[NVARCHAR2,22], "P"."BIRTHDAY"[DATE,7],
       "P"."SYS_NC00008$"[LOB,4000], "P"."USERNAME"[NVARCHAR2,100], "P"."CLM_ID"[VARCHAR2,30]
   9 - (#keys=1) "PORG"."O_PERSON_ID"[VARCHAR2,30], "PORG"."ORG_RELATIONSHIP"[NVARCHAR2,120],
       "PORG"."EMPLID"[NVARCHAR2,150], "PORG"."DEPARTMENT"[NVARCHAR2,200], "PORG"."EMPL_STATUS"[NVARCHAR2,40],
       "PORG"."ENTERPRISE_NAME"[VARCHAR2,100]
  10 - "PORG"."O_PERSON_ID"[VARCHAR2,30], "PORG"."EMPLID"[NVARCHAR2,150],
       "PORG"."DEPARTMENT"[NVARCHAR2,200], "PORG"."EMPL_STATUS"[NVARCHAR2,40],
       "PORG"."ENTERPRISE_NAME"[VARCHAR2,100], "PORG"."ORG_RELATIONSHIP"[NVARCHAR2,120]
  11 - "PORG".ROWID[ROWID,10]
  12 - (#keys=1) "C1"."REF_ID"[VARCHAR2,50], "C1"."SYS_NC00005$"[LOB,4000]
  13 - "C1"."REF_ID"[VARCHAR2,50], "C1"."SYS_NC00005$"[LOB,4000]

Similar Messages

  • How to measure the performance of sql query?

    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of an SQL query?
    What measures are available for an SQL query?
    How can I tell whether I am writing an optimal query?
    I am using Oracle 9i...
    It will help me write efficient queries.
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How to measure the performance, efficiency and cpu cost of a sql query?
    What are all the measures available for an sql query?
    How to identify i am writing optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
    Note, however, that there are more things to consider, as you've already mentioned: the CPU usage is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is only reflected in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk.
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
         LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
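    For readers on 10g or later, the single call mentioned above looks roughly like this (just a sketch: the GATHER_PLAN_STATISTICS hint is one way to collect rowsource statistics without setting statistics_level=all, and 'ALLSTATS LAST' is the usual format option):
    -- run the statement with rowsource statistics collection enabled
    SELECT /*+ GATHER_PLAN_STATISTICS */ ... <your query here>
    -- then show estimated vs. actual rows for the last execution of that cursor
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));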
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Which is better for performance Azure SQL Database or SQL Server in Azure VM?

    Hi,
    We are building an ASP.NET app that will be running on Microsoft Cloud which I think is the new name for Windows Azure. We're expecting this app to have many simultaneous users and want to make sure that we provide excellent performance to end users.
    Here are our main concerns/desires:
    Performance is paramount. Fast response times are very very important
    We want to have as little to do with platform maintenance as possible e.g. managing OS or SQL Server updates, etc.
    We are trying to use "out-of-the-box" standard features.
    With that said, which option would give us the best possible database performance: a SQL Server instance running in a VM on Azure, or Azure SQL Database as a fully managed service?
    Thanks, Sam

    hello,
    SQL Database uses shared resources in the Microsoft data centre. Microsoft balances the resource usage of SQL Database so that no one application continuously dominates any resource. You can try the Premium Preview for Windows Azure SQL Database, which offers better performance by guaranteeing a fixed amount of dedicated resources for a database.
    If you use a SQL Server instance running in a VM, you control the operating system and database configuration, and the performance of the database depends on many factors such as the size of the virtual machine and the configuration of the data disks.
    Reference:
    Choosing between SQL Server in Windows Azure VM & Windows Azure SQL Database
    Regards,
    Fanny Liu
    If you have any feedback on our support, please click here. 
    Fanny Liu
    TechNet Community Support

  • Performance will go down in case of Dynamic UI element Creation.

    Performance will go down in case of Dynamic UI element Creation

    suryar wrote:
    Performance will go down in case of Dynamic UI element Creation
    hi,
    is this an information or a question?
    Please be more specific so that your queries can be answered quickly
    Regards,
    Sahai.S

  • Entry-SQL syntax error: CASE not allowed

    Hello all. When I use SAP NetWeaver Developer Studio to develop Web Dynpro applications, I keep running into this JDBC error:
    when using INNER JOIN, LEFT JOIN, or CASE WHEN in the SQL, it pops up an Entry-SQL syntax error.
    But I have run the SQL successfully in Microsoft SQL Server Management Studio.
    The SQL statement "UPDATE SAPNWDDB.Z_SERIAL SET LASTSERIAL = CASE WHEN ENDWITH IS NOT NULL THEN CASE WHEN LASTSERIAL + 1 > ENDWITH THEN ISNULL(STARTWITH, 0) ELSE LASTSERIAL + 1 END ELSE LASTSERIAL + 1 END WHERE SERIALNO = ?" contains the syntax error[s]: - 1:43 - Entry-SQL syntax error: CASE not allowed
    - 1:78 - Entry-SQL syntax error: CASE not allowed
    - 1:124 - SQL syntax error: the token "(" was not expected here
    Can someone help me? Thank you.
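    If it is the driver's entry-level SQL grammar that rejects the nested CASE, one thing that may be worth trying (a sketch only, not verified against that driver) is flattening the nested CASE into a single one:
    UPDATE SAPNWDDB.Z_SERIAL
    SET LASTSERIAL = CASE
                       WHEN ENDWITH IS NOT NULL AND LASTSERIAL + 1 > ENDWITH
                         THEN ISNULL(STARTWITH, 0)
                       ELSE LASTSERIAL + 1
                     END
    WHERE SERIALNO = ?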

    Hi Arun Jaiswal ,
    Thank you for your answer. I have tried the query in the SQL editor and it does work there. Besides the CASE syntax, it seems INNER JOIN and LEFT JOIN are not supported in Web Dynpro either; I cannot even query the DB views.
    It seems impossible that JDBC would not support such simple syntax. I wonder whether there is any configuration that controls the SQL compatibility level.
    I have developed a Java program to test it, and it works with no error. But the same statement pops up an error in the Java Web Dynpro application; I don't know why.
    I am an entry-level Web Dynpro developer.
    Hopefully you can help me.
    Thank you.
    Edited by: zegunlee330 on Sep 3, 2010 4:18 AM

  • How to use dynamic SQL in this case for best performance

    I have the table with following columns
    ID NUMBER,
    DATA LONG,
    TAG VARCHAR2(255)
    Records in this table will be like following
    1 this is an abstract ABSTRACT
    1 this is author AUTHOR
    1 100 PRICE
    2 this is an abstract ABSTRACT
    2 this is author AUTHOR
    3 contract is this CONTRACT
    Basically, all the records with the same number constitute one record for another table. TAG in the above table indicates which column it is and DATA holds the actual data for that column. I need to populate the second table based on the above table, but I will not get the same number of TAGs every time. I need to insert values only for the columns provided in the TAG field. How do I accomplish this with dynamic SQL? Do I create a loop, build two strings (one with columns and one with values), then combine them and use EXECUTE IMMEDIATE to insert into the table? Is there an easier way to do this?
    Please respond quickly.
    Thanks
    Bhawna
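    For illustration, a minimal PL/SQL sketch of the loop-and-concatenate approach described above (SOURCE_TABLE and TARGET_TABLE are placeholder names, and it assumes each DATA value fits in a VARCHAR2):
    DECLARE
      l_cols VARCHAR2(4000);
      l_vals VARCHAR2(4000);
    BEGIN
      FOR r IN (SELECT tag, data FROM source_table WHERE id = 1) LOOP
        l_cols := l_cols || CASE WHEN l_cols IS NOT NULL THEN ', ' END || r.tag;
        l_vals := l_vals || CASE WHEN l_vals IS NOT NULL THEN ', ' END
                         || '''' || r.data || '''';
      END LOOP;
      -- build and run the INSERT only for the tags that were actually present
      EXECUTE IMMEDIATE
        'INSERT INTO target_table (' || l_cols || ') VALUES (' || l_vals || ')';
    END;
    /
    In real code you would validate the TAG values against the data dictionary and bind the data values instead of concatenating them, to avoid quoting and injection problems.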

    > so which collection should i use to perform it..
    so that performance is best......
    Program to interfaces. That way, you can switch out implementations and test for yourself which performance is best in an actual production context. But first, write your program so that it works. Worry about refactoring for performance once your program is written and it works.
    > plz send me the logic....
    Give it a shot on your own first; we can help if you get stuck.
    ~

  • Performance between SQL Statement and Dynamic SQL

    SELECT emp_id
    INTO   id_val
    FROM   emp
    WHERE  emp_id = 100;
    EXECUTE IMMEDIATE
      'SELECT ' || t_emp_id ||
      ' FROM emp' ||
      ' WHERE emp_id = 100'
      INTO id_val;
    Will there be more impact on performance while using dynamic SQL?

    CP wrote:
    Will there be more impact on performance while using dynamic SQL?
    All SQLs are parsed and executed as SQL cursors.
    The two SQLs (dynamic and static) result in the exact same SQL cursor, so both methods will use an identical cursor. There are therefore no performance differences in terms of how fast that SQL cursor will be.
    If an identical SQL cursor already exists in the shared pool it is reused (a soft parse); if not, the SQL engine needs to compile the supplied SQL source code into a new SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns fewer CPU cycles and is therefore better. However, no parsing at all is the best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop
    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor
    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the cursor handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach - unless you use DBMS_SQL (a complex cursor interface). With static SQL, the PL/SQL optimiser can kick in: it can optimise access to the cursors your code creates and minimise parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL code can now only be checked at execution time and not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post problems here relating to dynamic SQL are using dynamic SQL unnecessarily, for no justified and logical reason, creating unstable, insecure and non-performing code.
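    To make the contrast concrete, here is a minimal PL/SQL sketch (the table EMP_COPY is a placeholder): the static INSERT is parsed once by the PL/SQL engine and its cursor handle is reused across iterations, while the EXECUTE IMMEDIATE version is re-submitted (at least soft parsed) on every pass.
    BEGIN
      FOR i IN 1 .. 1000 LOOP
        -- static SQL: PL/SQL caches and reuses the cursor handle
        INSERT INTO emp_copy (emp_id) VALUES (i);
        -- dynamic SQL: the statement is re-submitted (soft parsed) each iteration
        EXECUTE IMMEDIATE 'INSERT INTO emp_copy (emp_id) VALUES (:1)' USING i;
      END LOOP;
      COMMIT;
    END;
    /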

  • How to Improve the Performance of SQL Server and/or the hardware it resides on?

    There's a particular stored procedure I call from my ASP.NET 4.0 Web Forms app that generates the data for a report.  Using SQL Server Management Studio, I did some benchmarking today and found some interesting results:
    FYI SQL Server Express 2014 and the same DB reside on both computers involved with the test:
    My laptop is a 3 year old i7 computer with 8GB of RAM.  It's fine but one would no longer consider it a "speed demon" compared to what's available today.  The query consistently took 30 - 33 seconds.
    My client's server has an Intel Xeon 5670 Processor and 12GB of RAM.  That seems like pretty good specs.  However, the query consistently took between 120 - 135 seconds to complete ... about 4 times what my laptop did!
    I was very surprised by how slow the server was.  Considering that it's also set to host IIS to run my web app, this is a major concern for me.   
    If you were in my shoes, what would be the top 3 - 5 things you'd recommend looking at on the server and/or SQL Server to try to boost its performance?
    Robert

    What else runs on the server besides IIS and SQL? Is it used for anything other than the database and IIS?
    Is IIS causing a lot of I/O or CPU usage?
    Is there a max limit set for memory usage on SQL Server? There SHOULD be, and since you're using IIS too you need to keep more memory free for that as well.
    How is the memory pressure? Check the PLE counter and post the results:
    SELECT [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
    Check the error log and the event viewer; maybe there is something bad there.
    Check the indexes for fragmentation (see the sketch below) and see if the statistics are up to date (enable trace flag 2371 if you have large tables with more than 1 million rows).
    Is there an antivirus present on the server? Do you have the SQL processes/services/directories as exceptions?
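    As a rough sketch of the fragmentation check mentioned above (LIMITED mode over the whole current database; adjust the filters as needed):
    SELECT OBJECT_NAME(ips.object_id)  AS table_name,
           i.name                      AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN   sys.indexes AS i
           ON  i.object_id = ips.object_id
           AND i.index_id  = ips.index_id
    WHERE  ips.page_count > 1000          -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;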
    There are a lot of unknowns; you should at least run Profiler and post the results to see what goes on while you're having slow responses.
    "If there's nothing wrong with me, maybe there's something wrong with the universe!"

  • Performance: Open SQL vs. Native SQL (Oracle)

    Hi everybody,
    I have an interesting issue here. For a DB selection I use an Open SQL query from a table view into an internal table. It works fine, but the performance is not very good. The SELECT uses LIKE and wildcards (%) to search for customer master data (names and address fields).
    Because of the bad performance I ran some tests in transaction DB02 with native SQL, using exactly the same SELECT structure. It looks like this:
      SELECT *
        FROM zzrm_cust_s_hlp
       WHERE client = 100
         AND mc_name1      LIKE '<name>%'
         AND mc_name2      LIKE '<name>%'
         AND valid_from    <= <timestamp>
         AND valid_to      >= <timestamp>
    OK, now I tried exactly the same SELECT statement with the same search data (<name> and <timestamp>) with Open SQL and Native SQL. The difference is quite surprising: the Native SQL query is about 5-10 times faster (around 1 sec) than the Open SQL query (around 5-10 sec), even with the LIKE keywords and the wildcards.
    Any ideas what could be the problem with the Open SQL query?
    And: what can I do to achieve the same performance as with the Native SQL query?
    Kind regards and thanks in advance for any help,
    Matthias

    OK, here is the SQL explain plan from the DB02 query:
    SELECT STATEMENT ( Estimated Costs = 194 , Estimated #Rows = 1 )
           9 COUNT STOPKEY
             Filter Predicates
               8 NESTED LOOPS
                 ( Estim. Costs = 193 , Estim. #Rows = 1 )
                 Estim. CPU-Costs = 1,665,938 Estim. IO-Costs = 193
                   5 NESTED LOOPS
                     ( Estim. Costs = 144 , Estim. #Rows = 98 )
                     Estim. CPU-Costs = 1,162,148 Estim. IO-Costs = 144
                       2 TABLE ACCESS BY INDEX ROWID BUT000
                         ( Estim. Costs = 51 , Estim. #Rows = 93 )
                         Estim. CPU-Costs = 468,764 Estim. IO-Costs = 51
                         Filter Predicates
                           1 INDEX SKIP SCAN BUT000~NAM
                             ( Estim. Costs = 6 , Estim. #Rows = 93 )
                             Search Columns: 1
                             Estim. CPU-Costs = 59,542 Estim. IO-Costs = 6
                             Access Predicates Filter Predicates
                       4 TABLE ACCESS BY INDEX ROWID BUT020
                         ( Estim. Costs = 1 , Estim. #Rows = 1 )
                         Estim. CPU-Costs = 7,456 Estim. IO-Costs = 1
                         Filter Predicates
                           3 INDEX RANGE SCAN BUT020~0
                             ( Estim. Costs = 1 , Estim. #Rows = 1 )
                             Search Columns: 2
                             Estim. CPU-Costs = 3,661 Estim. IO-Costs = 1
                             Access Predicates
                   7 TABLE ACCESS BY INDEX ROWID ADRC
                     ( Estim. Costs = 1 , Estim. #Rows = 1 )
                     Estim. CPU-Costs = 5,141 Estim. IO-Costs = 1
                     Filter Predicates
                       6 INDEX UNIQUE SCAN ADRC~0
                         Search Columns: 4
                         Estim. CPU-Costs = 525 Estim. IO-Costs = 0
                         Access Predicates
    And this is the one from the Open SQL query in ABAP:
    SELECT STATEMENT ( Estimated Costs = 15,711 , Estimated #Rows = 29 )
           7 NESTED LOOPS
             ( Estim. Costs = 15,710 , Estim. #Rows = 29 )
             Estim. CPU-Costs = 3,021,708,117 Estim. IO-Costs = 15,482
               4 NESTED LOOPS
                 ( Estim. Costs = 15,411 , Estim. #Rows = 598 )
                 Estim. CPU-Costs = 3,018,711,707 Estim. IO-Costs = 15,183
                   1 TABLE ACCESS FULL BUT020
                     ( Estim. Costs = 9,431 , Estim. #Rows = 11,951 )
                     Estim. CPU-Costs = 2,959,067,612 Estim. IO-Costs = 9,207
                     Filter Predicates
                   3 TABLE ACCESS BY INDEX ROWID ADRC
                     ( Estim. Costs = 1 , Estim. #Rows = 1 )
                     Estim. CPU-Costs = 4,991 Estim. IO-Costs = 1
                     Filter Predicates
                       2 INDEX UNIQUE SCAN ADRC~0
                         Search Columns: 4
                         Estim. CPU-Costs = 525 Estim. IO-Costs = 0
                         Access Predicates
               6 TABLE ACCESS BY INDEX ROWID BUT000
                 ( Estim. Costs = 1 , Estim. #Rows = 1 )
                 Estim. CPU-Costs = 5,011 Estim. IO-Costs = 1
                 Filter Predicates
                   5 INDEX UNIQUE SCAN BUT000~0
                     Search Columns: 2
                     Estim. CPU-Costs = 525 Estim. IO-Costs = 0
                     Access Predicates
    Of course I can see the difference.
    But since the statements are identical, I don't understand why this difference exists.
    Thanks for your help!
    Kind regards, Matthias

  • Performance manager sql action rule for updating metric table

    Hi, I need to update the metric stop_date using a SQL action rule (Performance Manager "execute SQL" action rule). My problem is that I can't update stop_date in the PM repository database. The SQL action database connection is properly set, but when I set the SQL to execute an update on table ci_probe and schedule the rule, the system doesn't seem to connect to the database (the rule runs successfully, but the table ci_probe is not updated). I don't understand whether the problem is the database connection or wrong SQL code.
    Can Anyone help me with suggestions or sql action rule samples?
    Thanks
    Luigi
    Edited by: Luigi Oliva on Jun 13, 2008 1:32 PM

    Hi, it's working now. The problem was in repeat_interval.
    Thanks,
    I changed
      repeat_interval          => 'FREQ=DAILY;BYSECOND=10',
    to
      repeat_interval          => 'FREQ=SECONDLY;BYSECOND=10',
    Thanks,
    Edited by: NSK2KSN on Jul 26, 2010 11:14 AM
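    For context, repeat_interval here is presumably the DBMS_SCHEDULER calendaring string; a minimal sketch of where it sits in a job definition (the job name and the PL/SQL action are placeholders):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'UPDATE_STOP_DATE_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN update_ci_probe_stop_date; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=SECONDLY;BYSECOND=10',
        enabled         => TRUE);
    END;
    /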

  • Improving CLR performance in SQL Server (redux)

    I have been spending a lot of time trying to eke out the maximum performance from a C# CLR UDF. I have already set IsDeterministic and IsPrecise to true, as well as SystemDataAccessKind.None and DataAccessKind.None.
    I am now experimenting with the overhead of transferring to CLR.  I created a simple CLR UDF that just returns the input value, e.g.,
    [Microsoft.SqlServer.Server.SqlFunction(IsDeterministic=true, IsPrecise=true)]
    public static SqlString MyUDF(SqlString data)
    {
        return data;
    }
    Defined as:
    CREATE FUNCTION dbo.MyUDF(@data nvarchar(4000)) RETURNS nvarchar(4000) WITH EXECUTE AS CALLER
    AS EXTERNAL NAME [MyAssembly].[UserDefinedFunctions].[MyUDF];
    I then use the UDF in a View on the Primary Key (nvarchar) of a table with about 6M rows.
    I know there is a small overhead going through a View versus a Table. However, when I query through the table, it is about 2000% faster than querying through the View with the CLR UDF.  E.g., 3 seconds for the table and 60 seconds for the view! I checked
    the Query Plans for each and they are both using Parallelization.
    I have to assume that all the overhead is in the transition to CLR.  Is that much overhead to be expected?  Is there any way to improve that?
    Incidentally, this is a followup to this question:
    http://stackoverflow.com/questions/24722708/sql-server-clr-udf-parallelism-redux

    Assuming that a way is found to reduce this apparent overhead, what is the intended operation within the function? I ask because the advantages of SqlChars over SqlString might be moot if you will need to operate on the full string all
    at once as opposed to reading it as a stream of characters.
    Also, with regards to why the CLR UDF is so much faster than the T-SQL version, some amount of it certainly could be the ability to participate in a Parallel plan, but also a change was made in SQL Server 2012 that improved performance of deterministic CLR
    functions:
    Behavior Changes to Database Engine Features in SQL Server 2012
          Constant Folding for CLR User-Defined Functions and Methods
          In SQL Server 2012, the following user-defined CLR objects are now foldable:
    Deterministic scalar-valued CLR user-defined functions.
    Deterministic methods of CLR user-defined types.
          This improvement seeks to enhance performance when these functions or methods are called more than once with the same arguments.
    Also, 60 seconds down to 3 seconds is a 95% improvement, not 2000%.  Or you could say that the operation is 20x faster without the UDF.
    Now, outside of that, I recall seeing in another forum that someone was converting their string to VARBINARY using SqlBinary / SqlBytes and then returning VARBINARY and converting it back in T-SQL. Might be worth a test.
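    For what it's worth, the VARBINARY round trip mentioned above would look roughly like this on the T-SQL side (dbo.MyUDF_Bin, dbo.SomeTable and SomeKey are hypothetical names for a UDF variant that takes and returns varbinary):
    SELECT CONVERT(nvarchar(4000),
                   dbo.MyUDF_Bin(CONVERT(varbinary(8000), t.SomeKey))) AS result
    FROM   dbo.SomeTable AS t;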

  • SQL - Select Help - Case When? Return Value from Second Table?

    Hi - next to folks on this board I am probably somewhere between a Beginner and an Intermediate SQL user.
    I've been using a CASE WHEN statement in PL/SQL to find "all those whose status in any program was cancelled during a specific time, but who have become or are still active in a second program".
    So, I'm effectively trying to return a value from a second table in a CASE WHEN, but it's not liking anything other than a literal like 'Yes' or 'No'.
    Here is the select statement - is there another way to do this where I can get the results I need?
    case when pp.party_id in (select pp1.party_id  -- Cancelled clients Active in another program
                                       from asa.program_participation          pp1,
                                            asa.curr_prog_participation_status cpps1
                                      where pp1.program_participation_id = cpps1.program_participation_id
                                        and pp1.party_id = pp.party_id
                                        and cpps1.code_value = 'ACT')
                then 'Yes' else 'No' end  as Active_in_Other_Prg
    So, in place of 'Yes' I basically want the program they are active in (pp1.program_id), else NULL.
    It is possible that the client can be active in more than one program as well.
    Any assistance is greatly appreciated; I explored IFs and DECODEs but I can't get anything to work.
    Batesey

    Sounds like an outer join. See ora doc: Joins
    select p.*
    ,      q.party_id
    ,      q.program_id
    from   table_with_party_id p
    ,    ( select pp1.party_id  -- Cancelled clients Active in another program
           ,      pp1.program_id
           from   asa.program_participation          pp1,
                  asa.curr_prog_participation_status cpps1
           where  pp1.program_participation_id = cpps1.program_participation_id
           and    cpps1.code_value = 'ACT') q
    where p.party_id = q.party_id ( +)
    Note: in the example above there shouldn't be a space between the ( and +), but the forum software automagically converts (+) into a smiley.
    The outer join will show all records from the p table and only the records from q where the party_id matches, i.e. q.party_id and q.program_id will be null if there is no match.
    edit: added program_id

  • Sql 2008 nested case statement

    I have a question about understanding nested case statements in sql server 2008:
    The SQL looks like the following:
     select numberofcases
      from inventory
      where inventory_cnt > 1000
      (when select top 1
        from inventory
         where  inventory_cnt > 750
      then  numberofcases = 750 * 30
      when select top 2
        from inventory
         where  inventory_cnt > 975
      then  numberofcases = 975 * 35
       when select top 3
        from inventory
         where  inventory_cnt > 1025
      then  numberofcases = 1025 / 10
      when select top 4
        from inventory
         where  inventory_cnt > 1050
      then  numberofcases = 1050 / 5) c * 2
       as casesused, select CustomerNumber from inventory
    I would like you to explain the following:
    1. There are 4 WHEN clauses. Will the logic hit each WHEN clause, or will the logic
       stop once the first WHEN clause is true?
    2. Would you explain what the c * 2 means in the SQL listed above?

    Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions and formatting rules. Temporal data should
    use ISO-8601 formats. Code should be in Standard SQL as much as possible and not local dialect. 
    This is minimal polite behavior on SQL forums. 
     SELECT CASE
            WHEN Inventory_cnt > 1050 THEN 1050 / 5
            WHEN Inventory_cnt > 1025 THEN 1025 / 10
            WHEN Inventory_cnt > 975 THEN 975 * 35
            WHEN Inventory_cnt > 750 THEN 750 * 30
            ELSE NULL END AS cases_used
     FROM Inventory;
    I would like you to explain the following:
    >> 1. There are 4 when statements. Will the logic hit each when statement or will the logic stop once the first when statement is true? <<
    This code is garbage, not SQL. CASE is an expression, not a statement. Expressions return a scalar value. You are trying to do control flow! And the answer is that a CASE works this way:
    1) look at the THEN clauses and determine the data type to use
    2) test each WHEN clause and execute the first one that tests TRUE in left to right order. 
    >> 2. Would you explain what the c * 2 means in the SQL listed above? <<
    Syntax error and more garbage code. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • SQL Report - IF/CASE

    Hi guys,
    This is mainly a SQL question related to SCCM 2012. I've got the below query which works fine with x86 VM's:
    SELECT distinct SYS.Netbios_Name0, Gvm.PhysicalHostName0, SYS.User_Name0,
    SYS.Resource_Domain_OR_Workgr0,OPSYS.Caption0 as C054, OPSYS.Version0, ENCL.Manufacturer0,
    CSYS.Model0, Processor.MaxClockSpeed0, MEM.TotalPhysicalMemory0, WSTATUS.LastHWScan
    FROM v_R_System SYS
    LEFT JOIN v_GS_VIRTUAL_MACHINE gvm ON SYS.ResourceID=gvm.ResourceID
    LEFT JOIN v_RA_System_IPAddresses IPAddr on SYS.ResourceID = IPAddr.ResourceID
    LEFT JOIN v_GS_X86_PC_MEMORY MEM on SYS.ResourceID = MEM.ResourceID
    LEFT JOIN v_GS_COMPUTER_SYSTEM CSYS on SYS.ResourceID = CSYS.ResourceID
    LEFT JOIN v_GS_PROCESSOR Processor on Processor.ResourceID = SYS.ResourceID
    LEFT JOIN v_GS_OPERATING_SYSTEM OPSYS on SYS.ResourceID=OPSYS.ResourceID
    LEFT JOIN v_GS_PC_BIOS BIOS on SYS.ResourceID=BIOS.ResourceID
    LEFT JOIN v_GS_SYSTEM_ENCLOSURE ENCL on SYS.ResourceID=ENCL.ResourceID
    LEFT JOIN v_GS_WORKSTATION_STATUS wSTATUS on SYS.ResourceID=WSTATUS.ResourceID
    LEFT JOIN v_R_User USERS on SYS.User_Name0 = USERS.User_Name0
    WHERE OPSYS.Caption0 is not null and CSYS.Model0 = 'Virtual Machine'
    ORDER BY SYS.Netbios_Name0, SYS.Resource_Domain_OR_Workgr0
    If I replace v_GS_VIRTUAL_MACHINE with v_GS_VIRTUAL_MACHINE_64, it'll show x64 VM's and their host but leave the x86 hosts empty as they're not in v_GS_VIRTUAL_MACHINE_64.PhysicalHostName0.
    How can I combine the two queries and check whether v_GS_VIRTUAL_MACHINE_64.PhysicalHostName0 is NULL or not, and in case it is, use the v_GS_VIRTUAL_MACHINE.PhysicalHostName0 value?
    That will also require, I believe, another condition in the LEFT JOIN part?
    Thanks
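    For illustration, one way to express that fallback (a sketch only, reusing the view and column names from the query above) is to LEFT JOIN both views and COALESCE the host name columns:
    SELECT DISTINCT SYS.Netbios_Name0,
           COALESCE(gvm64.PhysicalHostName0, gvm.PhysicalHostName0) AS PhysicalHostName0
    FROM   v_R_System SYS
    LEFT JOIN v_GS_VIRTUAL_MACHINE    gvm   ON SYS.ResourceID = gvm.ResourceID
    LEFT JOIN v_GS_VIRTUAL_MACHINE_64 gvm64 ON SYS.ResourceID = gvm64.ResourceID
    -- ...remaining joins, columns and WHERE clause as in the original query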

    Simone,
    This query may help you. I don't specify a particular version - or do you really need to know the difference between x86 and x64?
    Select SMS_R_System.Name, SMS_G_System_COMPUTER_SYSTEM.Manufacturer, SMS_R_System.SMSAssignedSites, SMS_R_System.IPAddresses, SMS_R_System.IPSubnets, SMS_R_System.OperatingSystemNameandVersion, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.LastLogonUserDomain,
    SMS_R_System.LastLogonUserName, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.NetbiosName from SMS_R_System inner join SMS_G_System_COMPUTER_SYSTEM on SMS_G_System_COMPUTER_SYSTEM.ResourceID = SMS_R_System.ResourceId
    where SMS_G_System_COMPUTER_SYSTEM.Manufacturer in ("VMware, Inc.","Microsoft Corporation")
    Thanks.

  • Performance Appraisal: Automatic updation in case of Manager's Transfer

    Dear All,
    In the performance appraisal, the Manager (n) is the appraiser and the Manager's Manager (n1) is included as a further participant so that he can view the appraisals of all employees who indirectly report to him. Now whenever this higher manager (n1) is transferred, the new manager coming in cannot automatically view the appraisals of his new department. To enable this, we manually update the further-participant list of all the old appraisal forms of his employees. Is there a way to do this automatically, as such cases are quite frequent and doing this manually is very time consuming?
    Thank You.

    I haven't seen any automatic update. In general, if an employee changes manager, the old manager should be replaced in the document by the new manager for the latter to get access to the entire document.
    I would suggest replacing the appraiser in the document, e.g. via transaction PHAP_ADMIN, so that the new manager sees the entire document.
    Such scenarios could also be solved using BAdI HRHAP00_ACC_HEADER. It will enable you to change the appraiser of the created appraisal documents when the manager has been changed.
