CPU Usage by a sql query/insert
Hi,
I want to know in which cases an SQL statement uses a lot of CPU, and what the workarounds are to prevent 90-100% CPU usage.
Thanks
Deepak
Of course it will! If you're running your SELECT statement on production, then you will get an accurate representation of what the CPU does on production!
However, be aware that it is production, and anything you do could cause performance problems for the rest of the database (we once had a developer who caused massive performance problems in prod just from running a single SELECT statement, although I now forget why it caused them *{:-( )
Similar Messages
-
SQL Query / Insert Builder
I am making a Java program to access / update a database via JDBC. I need to make string queries like this:
db.update(
"INSERT INTO sample_table(str_col,num_col) VALUES('Ford', 100)");
The query above is simple, but you can see how assembling a query out of constant string fragments and variables that my program updates can become more complicated. For example, I also need to store a date. Is there a utility, similar to StringBuilder, that can make building these strings easier? I am looking specifically for a function to build SQL queries. It might look something like this:
querybuilder.insert(<table>, <name1>, <value1>, <name2>, <value2>....<nameN>, <valueN>);
Where table is the name of the table I would be inserting into, name is a column name of that table, and value is the value to be stored under that column. Are there SQL functions for this?

I made such a library myself (I called it an SQLFormatter), but I haven't put it on the web because I didn't think there would be general interest in it. Essentially, I can create an INSERT INTO statement like this:
SQLFormatter formatter = SQLFormatter.get(myStatement);
String sql = formatter.createInsertIntoStatement("MyTable", new String[] {"Field1", "Field2", "Field3"}, new Object[] {value1, value2, value3});

I'd be happy to share it with you, but I'm not sure how I can get it to you. Do I need to post it on a web site? -
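A minimal sketch of the kind of builder discussed above, assuming you pair it with PreparedStatement placeholders rather than inlining values into the string (the class and method names here are mine, not part of any library):

```java
import java.util.StringJoiner;

public class InsertBuilder {
    // Builds e.g. "INSERT INTO sample_table (str_col, num_col) VALUES (?, ?)"
    // from a table name and its column names; values are bound later via
    // PreparedStatement, which also handles quoting and dates safely.
    public static String insert(String table, String... columns) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner marks = new StringJoiner(", ");
        for (String c : columns) {
            cols.add(c);
            marks.add("?");
        }
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }

    public static void main(String[] args) {
        System.out.println(insert("sample_table", "str_col", "num_col"));
    }
}
```

You would then call ps.setString / ps.setInt / ps.setDate for each `?`, which avoids the quoting problems that come with concatenating values directly.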
Access SQL Query - INSERT Help!
Hi,
I'm trying to do a simple INSERT statement in an Access Database.
My code is:
public SQLConnect() {
// SQL Connection Part
try {
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); //have tried with .newInstance() as well but same effect
String filename = "C://Maxi//maxi.mdb";
String database = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=";
database += filename.trim() + ";DriverID=22;READONLY=true";
Connection con = DriverManager.getConnection(database, "", "");
PreparedStatement ps = con.prepareStatement("INSERT INTO Student (ID,FamilyName,FirstName,OtherName,Email,Phone,DOB,CourseType,Sex) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)");
ps.setString(1, "071111114");
ps.setString(2, "Crane");
ps.setString(3, "Frasier");
ps.setString(4, "Winslow");
ps.setString(5, "[email protected]");
ps.setString(6, "012345");
ps.setString(7, "01/01/2008");
ps.setString(8, "M");
ps.setString(9, "M");
ps.executeUpdate();
//Statement t = con.createStatement();
//t.execute("SELECT * FROM STUDENT");
//con.close();
} catch (Exception e) {
System.out.println("ERROR!");
e.printStackTrace();
}
I am experiencing weird behaviour with whether the INSERT command is carried out successfully.
The code as above will not INSERT the data in the PreparedStatement.
However, if I use either of the two methods in the commented-out code, the INSERT will be carried out. Why is this happening? Surely it should just INSERT the data with the line ps.executeUpdate(), without having to wait for any further commands.
The reason I don't want to close the connection is that I am attempting to make a class so I can do simple INSERT, UPDATE and SELECT queries without having to go through the initial connection steps each time. I have read that opening and closing a connection for each query takes a lot of overhead, but it seems to be the only way this will properly execute.
I welcome any suggestions.
Cheers.

Ha, it always seems to fix itself as soon as you post about it.
It would appear that if, when creating the connection, you do:
con.setAutoCommit(false);
and then, after the ps.executeUpdate(), commit with:
con.commit();
this actually works.
If you don't change the auto-commit setting, this does not work. I am still stumped as to why I have to do this. Any explanations?
Edited by: adamuk on Mar 15, 2008 10:19 PM -
If a record is inserted during a SQL query, will it appear in the query result?
Hi,
An Oracle 10g SELECT query takes about 10 seconds to execute and contains joins over multiple tables in one database connection.
At the 3rd second of execution, a new record is inserted in another database connection and committed to the database; the new record satisfies the SELECT query's filters.
Will the record inserted at the 3rd second appear in the query results?
Thx.

CharlesRoos wrote:
Hi,
An Oracle 10g SELECT query takes about 10 seconds to execute and contains joins over multiple tables in one database connection.
At the 3rd second of execution, a new record is inserted in another database connection and committed to the database; the new record satisfies the SELECT query's filters.
Will the record inserted at the 3rd second appear in the query results?
Thx.

In general Keith is right: a select will not see changes made and committed in a different session while the select is running. This is called READ CONSISTENCY. It means a select sees the data as of the exact time it was called.
However, with database links in place, read consistency cannot be fully guaranteed.
See this explanation and solution from the 10g docs: http://docs.oracle.com/cd/B19306_01/server.102/b14231/ds_txnman.htm#i1008473 -
What's wrong with this SQL INSERT query? Please tell me.
try {
String sql = "INSERT INTO candidate VALUES (";
sql += candidate.getEnrollmentNumber() + ",";
sql += candidate.getDateOfEnrollment() + ",";
sql += candidate.getName() + ",";
sql += candidate.getContactNumber() + ",";
sql += candidate.getGender() + ",";
sql += candidate.getMailingAddress() + ",";
sql += candidate.getEmailId() +",";
sql += candidate.getClassOfVehicle() + ",";
sql += candidate.getTypeOfVehicle() + ",";
sql += candidate.getDateOfBirth() + ",";
sql += candidate.getBatchTime() +")";
System.out.println("sql: " +sql);
//executeQuery(sql);
executeUpdate(sql);
}catch(SQLException e){
e.printStackTrace();
Giving me an error: "com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: You have an error in your SQL syntax"

If you are inserting a string into your SQL, you need to put it in single quotes (but not numbers).
Example:
sql += " ' " + candidate.getFirstName() + " ', ";
Additionally, if firstName contains a single quote, it needs to be escaped by converting it to two single quotes.
Note you don't have to do any of this if you use preparedStatements.
Note: remove both spaces in " ' " so you get "'" in the above example, otherwise you are inserting spaces into your value.
I used " ' " so you can more easily see what I did.
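The quoting-and-escaping advice above can be sketched as a small helper (a sketch only; the method name is mine, and as the thread says, PreparedStatement makes this unnecessary and is the safer option):

```java
public class SqlQuote {
    // Wraps a string value in single quotes and doubles any embedded
    // single quotes, e.g. O'Brien becomes 'O''Brien', so the value can be
    // concatenated into a SQL literal without a syntax error.
    public static String sqlQuote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        System.out.println(sqlQuote("O'Brien"));
    }
}
```

Note this only fixes the syntax error; it is still string concatenation, so binding values through PreparedStatement parameters remains the right fix for the original code.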
Issue in creation of group in oim database through sql query.
hi guys,
I am trying to create a group in the OIM database through a SQL query:
insert into ugp(ugp_key,ugp_name,ugp_create,ugp_update,ugp_createby,ugp_updateby) values(786,'dbrole','09-jul-12','09-jul-12',1,1);
It inserts the group into the ugp table, but the group does not show up in the admin console.
After that i also tried with this query:
insert into gpp(ugp_key,gpp_ugp_key,gpp_write,gpp_delete,gpp_create,gpp_createby,gpp_update,gpp_updateby)values(786,1,1,1,'09-jul-12',1,'09-jul-12',1);
But still no use.
and i also tried to assign a user to the group through query:
insert into usg(ugp_key,usr_key,usg_priority,usg_create,usg_update,usg_createby,usg_updateby)values(4,81,1,'09-jul-12','09-jul-12',1,1);
But still the same problem: it inserts into the DB but is not listed in the admin console.
thanks,
hanuman.

Hanuman Thota wrote:
hi vladimir,
I didn't find this 'ugp_seq'. Is it a table or a column? Where is it?
It is a sequence.
See here for details on oracle sequences:
http://www.techonthenet.com/oracle/sequences.php
Most of the OIM database schema is created with the following script, located in the RCU distribution:
$RCU_HOME/rcu/integration/oim/sql/xell.sql
there you'll find plenty of sequence creation directives like:
create sequence UGP_SEQ
increment by 1
start with 1
cache 20
to create a sequence, and
INSERT INTO UGP (UGP_KEY, UGP_NAME, UGP_UPDATEBY, UGP_UPDATE, UGP_CREATEBY, UGP_CREATE,UGP_ROWVER, UGP_DATA_LEVEL, UGP_ROLE_CATEGORY_KEY, UGP_ROLE_OWNER_KEY, UGP_DISPLAY_NAME, UGP_ROLENAME, UGP_DESCRIPTION, UGP_NAMESPACE)
VALUES (ugp_seq.nextval,'SYSTEM ADMINISTRATORS', sysadmUsrKey , SYSDATE,sysadmUsrKey , SYSDATE, hextoraw('0000000000000000'), 1, roleCategoryKey, sysadmUsrKey, 'SYSTEM ADMINISTRATORS', 'SYSTEM ADMINISTRATORS', 'System Administrator role for OIM', 'Default');
as a sequence usage example.
Regards,
Vladimir -
How to measure the performance of sql query?
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How do I identify whether I am writing an optimal query?
I am using Oracle 9i...
It'll be useful to help me write efficient queries....
Thanks & Regards

psram wrote:
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How do I identify whether I am writing an optimal query?
I am using Oracle 9i...

You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
This gives you an indication of the effectiveness of your statement, so that can check how many logical I/Os (and physical reads) had to be performed.
Note however that there are more things to consider, as you've already mentioned: the CPU usage is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is only credited in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk.
You can use the following approach to get a deeper understanding of the operations performed by each row source:
alter session set statistics_level=all;
alter session set timed_statistics = true;
select /* findme */ ... <your query here>
SELECT
SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
OBJECT_NAME,
CARDINALITY,
LAST_OUTPUT_ROWS,
LAST_CR_BUFFER_GETS,
LAST_DISK_READS,
LAST_DISK_WRITES
FROM V$SQL_PLAN_STATISTICS_ALL P,
(SELECT *
FROM (SELECT *
FROM V$SQL
WHERE SQL_TEXT LIKE '%findme%'
AND SQL_TEXT NOT LIKE '%V$SQL%'
AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
ORDER BY LAST_LOAD_TIME DESC)
WHERE ROWNUM < 2) S
WHERE S.HASH_VALUE = P.HASH_VALUE
AND S.CHILD_NUMBER = P.CHILD_NUMBER
ORDER BY ID
/

Check the V$SQL_PLAN_STATISTICS_ALL view for more available statistics. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
ORA-02393 Exceeded Call Limit on CPU Usage
I have created a Profile and attached it to a user, in this example:
Create Profile percall
Limit
CPU_PER_CALL 10
IDLE_TIME 5;
I have attached it to one user - USER1
When USER1 runs a SQL Statement -
SELECT COUNT(*) FROM TABLE1 A WHERE A.EFFDT = (SELECT MAX(B.EFFDT) FROM TABLE1 B WHERE B.EMPLID = A.EMPLID AND B.EFFDT <= SYSDATE);
I get an error (Which I want to receive) ORA-02393 Exceeded Call Limit on CPU Usage.
The SQL statement shows up in the DBA_COMMON_AUDIT_TRAIL table, but it is recorded as a success even though the user received the ORA-02393 error.
What I want is a way for a DBA to be able to report on those ORA-02393 errors. I don't see any entries in the Log files, and don't notice any errors in the Oracle Tables.
I would like to be able to show the user (after a week, when they bring up the issue) what the SQL statement was and why it exceeded the CPU limit. Ideally the error would place the SQL statement in a table, or display it in an error log, so the statement that exceeded the CPU usage can be verified.
Thank you
Aaron

Can you modify the procedure in which the SELECT resides?
If so, trap & log the error. -
Send SQL query or call stored procedure, which is best???
hello experts!!
I would like to ask what will be the best implementation for querying a database using Java: send a SQL query (insert into table values...) or call a stored procedure (just pass parameters)? And also, in a stored procedure does "autoCommit/rollback" apply/make sense?

I searched google for this:
Stored Procedures vs. SQL
and found this useful link:
http://www.karlkatzke.com/stored-procedures-vs-sql-calls/
I think you should also read other articles based on such google searches
to get other opinions. Also, read the links provided by that article.
Personally, I think stored procedures should be avoided unless there is a definite
advantage, as specified in the above article. Especially not for CRUD operations.
Also, no matter where you put the SQL, it should be documented. I've seen too many stored procedures that aren't documented. -
Can a single SQL query insert data into 2 different tables?
means I would like to have something like
String sql = "INSERT TABLE1 (value1) AND INSERT TABLE2 (value2)" // only one query
I know it's wrong... but is there any good way? Because I don't want to create 2 SQL queries, 2 statements, etc.
Is it possible?

Why don't you want to create 2 SQL statements?

Because I don't like making 2 statements, 2 statement closes... it just looks bad.
Well, OK, if there is no other compact way then I will do that.

As java-Ang asks, why do you want to do this?

I want to insert into 2 different tables, which means I need 2 SQL queries. I was thinking there might be a shortcut way to achieve this.
thanks for the time -
SQL query against ConfigMgr DB causing 100% cpu usage on SQL server throughout day
Hi, I am hoping someone here may have an idea of what I can do to resolve this as I am not much of a SQL guy. Here is what I know. Throughout the day, monday-friday, my ConfigMgr SQL server sits at 98-100% processor usage.
Using the following query I have narrowed down the culprit of the high cpu usage throughout the day:
SELECT p.spid, p.status, p.hostname, p.loginame, p.cpu, r.start_time, r.command,
p.program_name, text
FROM sys.dm_exec_requests AS r,
master.dbo.sysprocesses AS p
CROSS APPLY sys.dm_exec_sql_text(p.sql_handle)
WHERE p.status NOT IN ('sleeping', 'background')
AND r.session_id = p.spid
Even though my knowledge of SQL is light, I believe the following output clearly identifies the thread responsible for the high CPU usage. This "program_name" is present throughout the day, and as long as it's there, CPU usage remains at or near 100%:
The text output of the above culprit is as follows (sorry this is long):
-- Name : spDrsSummarizeSendHistory
-- Definition : SqlObjs
-- Scope : CAS_OR_PRIMARY_OR_SECONDARY
-- Object : P
-- Dependencies : <Detect>
-- Description : Summarize DrsSendHistory information into DrsSendHistorySummary
CREATE PROCEDURE spDrsSummarizeSendHistory
AS
BEGIN
SET NOCOUNT ON;
DECLARE @CurrentTime DateTime;
DECLARE @SiteCode nvarchar(3);
DECLARE @LastSummarizationTime DateTime;
DECLARE @NextSummarizationTime DateTime;
DECLARE @Interval INT; -- summarize interval in minutes
DECLARE @ReplicationID INT
DECLARE @iMaxID INT
DECLARE @iCurrentID INT
DECLARE @iBatchSize INT = 50
DECLARE @ReplicationPattern NVARCHAR(255)
DECLARE @TargetSite NVARCHAR(3);
DECLARE @TimeTable TABLE (TargetSite NVARCHAR(3), StartTime DATETIME, EndTime DATETIME, PRIMARY KEY (TargetSite, StartTime, EndTime))
DECLARE @TargetSites TABLE (SiteCode NVARCHAR(3) PRIMARY KEY)
DECLARE @ChildTargetSites TABLE (TargetSite NVARCHAR(3), LastSummarizationTime DATETIME, [Interval] INT, PRIMARY KEY (TargetSite))
DECLARE @ReplicationGroup TABLE (ID INT, ReplicationPattern NVARCHAR(255))
IF OBJECT_ID(N'TempDB..#DrsSendHistorySummary') IS NOT NULL
DROP TABLE #DrsSendHistorySummary
CREATE TABLE #DrsSendHistorySummary (ID BIGINT IDENTITY(1, 1) NOT NULL, SourceSite NVARCHAR(3), TargetSite NVARCHAR(3), ReplicationGroupID INT, SyncDataSize BIGINT, CompressedSize BIGINT, UnCompressedSize BIGINT, MessageCount INT, ChangeCount INT, SummarizationTime DATETIME, PRIMARY KEY (ID))
SET @CurrentTime = GETUTCDATE();
SET @SiteCode = dbo.fnGetSiteCode();
INSERT INTO @ReplicationGroup (ID, ReplicationPattern)
SELECT RD.ID, RD.ReplicationPattern
FROM ReplicationData RD
WHILE EXISTS (SELECT * FROM @ReplicationGroup)
BEGIN
SELECT TOP (1) @ReplicationID = ID, @ReplicationPattern = ReplicationPattern FROM @ReplicationGroup
TRUNCATE TABLE #DrsSendHistorySummary
IF (((@ReplicationPattern = N'global' OR @ReplicationPattern=N'site') AND dbo.fnIsPrimary() = 1)
OR (@ReplicationPattern = N'global_proxy' AND dbo.fnIsSecondary() = 1))
BEGIN -- when current site is child
SET @LastSummarizationTime = ISNULL(( SELECT MAX(SummarizationTime) FROM DrsSendHistorySummary WHERE SourceSite = @SiteCode AND ReplicationGroupID = @ReplicationID),
(SELECT MIN(EndTime) FROM DrsSendHistory WHERE ReplicationGroupID = @ReplicationID) );
SET @Interval = ISNULL((SELECT VALUE FROM RCMSQLCONTROLPROPERTY SCP, RCMSQLCONTROL SC
WHERE SCP.ControlID = SC.ID AND SCP.Name = N'Send History Summarize Interval' AND SC.TypeID = 3 and SC.SiteNumber = dbo.fnGetSiteNumber()), 15); -- default 15 minutes
DELETE @TargetSites
INSERT INTO @TargetSites (SiteCode)
SELECT DISTINCT SiteCode FROM DRS_MessageActivity_Send WHERE ReplicationID = @ReplicationID
IF @LastSummarizationTime IS NOT NULL AND EXISTS (SELECT * FROM @TargetSites)
BEGIN
SET @NextSummarizationTime = DATEADD( minute, DATEDIFF(minute, 0, @LastSummarizationTime)/@Interval *@Interval + @Interval, 0);
-- Summarized according to interval
DELETE @TimeTable
WHILE (@CurrentTime >= @NextSummarizationTime )
BEGIN
INSERT INTO @TimeTable (TargetSite, StartTime, EndTime)
SELECT SiteCode, @LastSummarizationTime, @NextSummarizationTime FROM @TargetSites
SET @LastSummarizationTime = @NextSummarizationTime;
SET @NextSummarizationTime = DATEADD(mi, @Interval, @NextSummarizationTime );
END
INSERT INTO #DrsSendHistorySummary (SourceSite, TargetSite, ReplicationGroupID, SyncDataSize, CompressedSize, UnCompressedSize, MessageCount, ChangeCount, SummarizationTime)
SELECT @SiteCode, T.TargetSite, @ReplicationID, SUM(ISNULL(D.SyncDataSize, 0)), SUM(ISNULL(D.CompressedSize, 0)), SUM(ISNULL(D.UnCompressedSize, 0)), SUM(ISNULL(D.MessageCount, 0)), SUM(ISNULL(ChangeCount, 0)), T.EndTime
FROM DrsSendHistory D RIGHT JOIN @TimeTable T ON D.EndTime >= T.StartTime AND D.EndTime < T.EndTime AND ReplicationGroupID = @ReplicationID AND T.TargetSite = D.TargetSite
GROUP BY T.TargetSite, T.StartTime, T.EndTime;
END
END
ELSE IF ((@ReplicationPattern = N'global' AND dbo.fnIsCAS() = 1)
OR (@ReplicationPattern = N'global_proxy' AND dbo.fnIsPrimary() = 1))
BEGIN -- when current site is parent
DELETE @ChildTargetSites
INSERT INTO @ChildTargetSites(TargetSite, LastSummarizationTime, [Interval])
SELECT DISTINCT(DSH.SiteCode),
ISNULL(( SELECT MAX(SummarizationTime) FROM DrsSendHistorySummary WHERE SourceSite = @SiteCode AND ReplicationGroupID = @ReplicationID AND TargetSite=DSH.SiteCode),
(SELECT MIN(EndTime) FROM DrsSendHistory WHERE ReplicationGroupID = @ReplicationID) ) AS LastSummarizeTime,
ISNULL((SELECT VALUE FROM RCMSQLCONTROLPROPERTY SCP, RCMSQLCONTROL SC WHERE SCP.ControlID = SC.ID AND SCP.Name = N'Send History Summarize Interval' AND SC.TypeID = 3 and SC.SiteNumber = dbo.fnGetSiteNumberBySiteCode(DSH.SiteCode)), 15) AS Interval
FROM DRS_MessageActivity_Send DSH WHERE DSH.ReplicationID = @ReplicationID
GROUP BY DSH.SiteCode;
DELETE @TimeTable
WHILE EXISTS (SELECT * FROM @ChildTargetSites)
BEGIN
SELECT TOP (1) @TargetSite = TargetSite, @LastSummarizationTime = LastSummarizationTime, @Interval = Interval FROM @ChildTargetSites
SET @NextSummarizationTime = DATEADD( minute, DATEDIFF(minute, 0, @LastSummarizationTime)/@Interval *@Interval + @Interval, 0);
-- Summarized according to interval
WHILE (@CurrentTime >= @NextSummarizationTime )
BEGIN
INSERT INTO @TimeTable (TargetSite, StartTime, EndTime)
SELECT @TargetSite, @LastSummarizationTime, @NextSummarizationTime
SET @LastSummarizationTime = @NextSummarizationTime;
SET @NextSummarizationTime = DATEADD(mi, @Interval, @NextSummarizationTime );
END
DELETE @ChildTargetSites WHERE TargetSite = @TargetSite
END;
INSERT INTO #DrsSendHistorySummary (SourceSite, TargetSite, ReplicationGroupID, SyncDataSize, CompressedSize, UnCompressedSize, MessageCount, ChangeCount, SummarizationTime)
SELECT @SiteCode, T.TargetSite, @ReplicationID, SUM(ISNULL(D.SyncDataSize, 0)), SUM(ISNULL(D.CompressedSize, 0)), SUM(ISNULL(D.UnCompressedSize, 0)), SUM(ISNULL(D.MessageCount, 0)), SUM(ISNULL(D.ChangeCount, 0)), @NextSummarizationTime
FROM DrsSendHistory D RIGHT JOIN @TimeTable T ON D.EndTime >= T.StartTime AND D.EndTime < T.EndTime AND ReplicationGroupID = @ReplicationID AND T.TargetSite = D.TargetSite
GROUP BY T.TargetSite, T.StartTime, T.EndTime;
END;
SELECT @iMaxID = MAX(ID), @iCurrentID = 0 FROM #DrsSendHistorySummary
-- BATCH INSERT TO REAL TABLE
WHILE (@iCurrentID < @iMaxID)
BEGIN
INSERT INTO DrsSendHistorySummary (SourceSite, TargetSite, ReplicationGroupID, SyncDataSize, MessageCount, ChangeCount, SummarizationTime)
SELECT SourceSite, TargetSite, ReplicationGroupID, SyncDataSize, MessageCount, ChangeCount, SummarizationTime
FROM #DrsSendHistorySummary WHERE ID > @iCurrentID AND ID <= (@iCurrentID + @iBatchSize)
SET @iCurrentID = @iCurrentID + @iBatchSize
END
TRUNCATE TABLE #DrsSendHistorySummary
DELETE @ReplicationGroup WHERE ID = @ReplicationID
END
END
If anyone has any experience with this issue or anything similar and might be able to point me in the right direction, any responses would be greatly appreciated. Thanks for reading!

Normally, this summary job runs every few minutes and only summarizes the last few minutes of sending history. If the job has not run for a long time, then the first time it runs this sproc will summarize a lot more data than normal. So it could fail, which makes things worse and worse over time.
If the sending summary data is not a big concern here, you can manually insert a dummy row with the most recent time as SummarizationTime, so that when the summary job next kicks in it will only summarize a short interval of data.
DrsSendHistory is the table that records every send on every replication group. This table is one of the keys of the replication logic. For your environment, on a primary site it could have up to 24 (hrs) * 60 (mins) / 2 (sync interval) * 36 (secondary sites) records per day for the Secondary_Site_Replication_Configuration group, and 24 * 60 / 5 * 22 records for Secondary Site Data.
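As a sanity check, the per-day record estimate above works out as follows (a quick sketch; the interval and site counts are the ones quoted for this environment):

```java
public class DrsVolume {
    // Records per day = minutes in a day / sync interval, once per site
    public static int recordsPerDay(int syncIntervalMinutes, int siteCount) {
        return 24 * 60 / syncIntervalMinutes * siteCount;
    }

    public static void main(String[] args) {
        // Secondary_Site_Replication_Configuration: 2-minute sync, 36 sites
        System.out.println(recordsPerDay(2, 36)); // 25920
        // Secondary Site Data: 5-minute sync, 22 sites
        System.out.println(recordsPerDay(5, 22)); // 6336
    }
}
```

So roughly 26,000 rows per day accumulate in DrsSendHistory for the first group alone, which is why a long gap between summarization runs leaves the sproc with a large backlog.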
Umair Khan | http://blogs.technet.com/umairkhan
Hi Umair,
Thank you for the reply. My plan was to go ahead with your suggestions but what I tried first was to get my VM guy to give the SQL server access to more CPU resources. We had already configured it with what would have seemed to be plenty, as per Microsoft's
sizing requirements, but I figured it couldn't hurt to rule out - he gave the SQL server access to a huge portion of the host processing power and since then CPU usage on the server has NOT been an issue.
It would appear that the large number of secondary sites we have simply puts a lot of stress on the SQL server with out-of-the-box replication settings. Since the entire VM host is for System Center only, and our other System Center servers use very little resources, this solution has ended up being valid.
Thanks again for everyone's help with this. -
High CPU Usage On client application when upgrade remote SQL database
Hi,
Has anyone here encountered very high CPU usage after upgrading a database from SQL 2005 to SQL 2012? The following describes my issue.
I have a client application running on Windows 7 that accesses a central database (SQL 2005) on Windows Server 2003 using an ODBC connection. This client application basically queries data and performs inserts and updates. There are a number of similar clients connected to the database.
Due to slowness of the database server, the company decided to upgrade the database server to SQL 2012 on new hardware running Windows Server 2012. There is no change to the client application or client hardware, and the client application still uses the same ODBC connection to query, insert and update against the new database server.
The problem I am experiencing now is that on the client PC where my client application runs, CPU usage is very high, almost hitting 100%, when accessing the central database. I have also noticed that the program lsass.exe uses almost 40-50% of the CPU time while the client application is accessing the central database.
Does anyone know why the client PC's CPU usage hits 100% when accessing the upgraded SQL 2012 database? What is the lsass.exe program doing? It only appears when accessing the database server. How can I reduce the CPU usage on the client?
Thanks.
Chee Wee.

Hello,
After upgrading, please rebuild indexes and update statistics on the databases.
http://www.mssqltips.com/sqlservertip/1367/sql-server-script-to-rebuild-all-indexes-for-all-tables-and-all-databases/
http://www.mssqltips.com/sqlservertip/1606/execute-update-statistics-for-all-sql-server-databases/
Configure maxdop on the instance.
http://blogs.msdn.com/b/sqlsakthi/archive/2012/05/24/wow-we-have-maxdop-calculator-for-sql-server-it-makes-my-job-easier.aspx
If the above does not solve the issue, let us know.
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com -
We have a table named "tbl_geodata" which has addresses and latitude/longitude values. We also have a "History" table which has only latitude and longitude values, plus other information. What we need is the following:
We get a set of records based on a query from "History" (lat/long values), say 5000 records.
Now we are using the following formula to calculate the address from "tbl_geodata" for each row (5000 rows):
SELECT TOP 1 geo_street, geo_town, geo_country,
( 3959 * acos( cos( radians(History.lat) ) * cos( radians( gps_latitude ) ) * cos( radians( gps_longitude ) - radians(History.long) ) + sin( radians(History.lat) ) * sin( radians( gps_latitude ) ) ) ) AS distance
FROM tbl_geodata ORDER BY distance
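The SQL above is the spherical-law-of-cosines distance in miles (3959 is the Earth's radius in miles). A sketch of the same arithmetic in application code, which is one way to avoid running acos/cos per row in SQL for all 5000 records:

```java
public class GreatCircle {
    // Spherical law of cosines, matching the SELECT's arithmetic:
    // distance = R * acos(sin a1 * sin a2 + cos a1 * cos a2 * cos(dLon))
    public static double distanceMiles(double lat1, double lon1,
                                       double lat2, double lon2) {
        double d = Math.sin(Math.toRadians(lat1)) * Math.sin(Math.toRadians(lat2))
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.cos(Math.toRadians(lon1) - Math.toRadians(lon2));
        // Clamp to [-1, 1] to guard acos against floating-point drift
        return 3959.0 * Math.acos(Math.max(-1.0, Math.min(1.0, d)));
    }

    public static void main(String[] args) {
        // New York to Los Angeles, roughly 2450 miles
        System.out.println(distanceMiles(40.7128, -74.0060, 34.0522, -118.2437));
    }
}
```

Another common optimization is to pre-filter tbl_geodata with a cheap bounding-box predicate on gps_latitude/gps_longitude before applying the exact formula, so the trigonometry only runs on nearby candidates.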
Currently we are seeing high CPU utilisation and a performance issue. What would be the optimized way to do this?

We are using SQL Server Web Edition. We have a table which has around 120 million records (around 100 insertions every second). It has an AFTER INSERT trigger to update another table. We are facing the following performance issues:
High CPU Usage
Insertion failed (Connection Timeout Expired. The timeout period elapsed during the post-login phase. The connection could have timed out while waiting for server to complete the login process and respond; Or it could have timed out while attempting
to create multiple active connections. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=0; handshake=10046; [Login] initialization=0; authentication=0; [Post-Login] complete=3999;)
Insertion Failed (A severe error occurred on the current command. The results, if any, should be discarded.)
Insertion Failed (Sometimes we get a connection pool error. There is no limit set in the connection string)
I ran the sp_who2 command and found a lot of queries in suspended mode.
Here is the "sys.dm_os_sys_info" result:

cpu_count: 8
hyperthread_ratio: 8
physical_memory_in_bytes: 12853739520
virtual_memory_in_bytes: 8796092891136
Can anyone please suggest the improvement steps... -
Hi,
I have an Oracle 9.2 DB with 2 CPUs. When I took a statspack report I found that CPU time is very high, i.e. more than 81%...
One of the queries is taking about 51% of the CPU usage...
i.e.
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COURSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT_FEE_COL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND
SFCD_FEE_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' ) AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL AND SFCH_NO = :b2 AND NOT EXISTS (SELECT 'X' FROM OM_STUDENT_HEAD WHERE STUD_SRN = SFCH_STUD_SRN_TEMP_NO AND NVL(STUD_CLO_STATUS,0) != 1 AND
NVL(STUD_REG_STATUS,0) != 23 AND STUD_COURSE_CODE != 'CCT' AND CF_COURSE_CODE = STUD_COURSE_CODE ) ORDER BY 1 DESC

Explain Plan for......
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT

| Id | Operation                               | Name                         | Rows  | Bytes | Cost | Pstart | Pstop |
|  0 | SELECT STATEMENT                        |                              |     1 |    69 |   45 |        |       |
|  1 |  SORT UNIQUE                            |                              |     1 |    69 |   27 |        |       |
|  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID    | OT_STUDENT_FEE_COL_DETL      |     1 |    12 |    2 | ROWID  | ROW L |
|  3 |    NESTED LOOPS                         |                              |     1 |    69 |    9 |        |       |
|  4 |     NESTED LOOPS                        |                              |     1 |    57 |    7 |        |       |
|  5 |      TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_HEAD      |     1 |    48 |    5 | ROWID  | ROW L |
|  6 |       INDEX SKIP SCAN                   | OT_STUDENT_FEE_COL_HEAD_UK01 |     1 |       |    4 |        |       |
|  7 |      INLIST ITERATOR                    |                              |       |       |      |        |       |
|  8 |       TABLE ACCESS BY INDEX ROWID       | OM_COURSE_FEE                |     1 |     9 |    2 |        |       |
|  9 |        INDEX RANGE SCAN                 | OCF_CFC_CODE                 |     1 |       |    1 |        |       |
| 10 |     FILTER                              |                              |       |       |      |        |       |
| 11 |      TABLE ACCESS BY GLOBAL INDEX ROWID | OM_STUDENT_HEAD              |     1 |    21 |    4 | ROWID  | ROW L |
| 12 |       INDEX RANGE SCAN                  | IDM_STUD_SRN_COURSE          |     1 |       |    3 |        |       |
| 13 |    INDEX RANGE SCAN                     | IDM_SFCD_FEE_TYPE_CODE       | 34600 |       |    1 |        |       |

Note: cpu costing is off, 'PLAN_TABLE' is old version

21 rows selected.
SQL>

Statspack report
DB Name DB Id Instance Inst Num Release Cluster Host
ai 1372079993 ai11 1 9.2.0.6.0 YES ai1
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 175 12-Dec-08 13:21:33 ####### .0
End Snap: 176 12-Dec-08 13:56:09 ####### .0
Elapsed: 34.60 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 3,264M Std Block Size: 8K
Shared Pool Size: 608M Log Buffer: 977K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 5,727.62 21,658.54
Logical reads: 16,484.89 62,336.32
Block changes: 32.49 122.88
Physical reads: 200.46 758.03
Physical writes: 5.08 19.23
User calls: 97.43 368.44
Parses: 11.66 44.11
Hard parses: 0.39 1.48
Sorts: 3.22 12.19
Logons: 0.02 0.06
Executes: 36.70 138.77
Transactions: 0.26
% Blocks changed per Read: 0.20 Recursive Call %: 28.65
Rollback per transaction %: 20.95 Rows per Sort: 131.16
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.79 In-memory Sort %: 99.99
Library Hit %: 98.92 Soft Parse %: 96.65
Execute to Parse %: 68.21 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 60.50 % Non-Parse CPU: 99.48
Shared Pool Statistics Begin End
Memory Usage %: 90.06 89.79
% SQL with executions>1: 72.46 72.46
% Memory for SQL w/exec>1: 69.42 69.51
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 3,337 81.43
db file sequential read 60,550 300 7.32
global cache cr request 130,852 177 4.33
db file scattered read 72,915 101 2.46
db file parallel read 3,384 75 1.84
Cluster Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Global Cache Service - Workload Characteristics
Ave global cache get time (ms): 1.3
Ave global cache convert time (ms): 2.1
Ave build time for CR block (ms): 0.1
Ave flush time for CR block (ms): 0.3
Ave send time for CR block (ms): 0.3
Ave time to process CR block request (ms): 0.7
Ave receive time for CR block (ms): 4.9
Ave pin time for current block (ms): 0.0
Ave flush time for current block (ms): 0.0
Ave send time for current block (ms): 0.3
Ave time to process current block request (ms): 0.3
Ave receive time for current block (ms): 2.8
Global cache hit ratio: 1.5
Ratio of current block defers: 0.0
% of messages sent for buffer gets: 1.4
% of remote buffer gets: 0.1
Ratio of I/O for coherence: 1.1
Ratio of local vs remote work: 9.7
Ratio of fusion vs physical writes: 0.1
Global Enqueue Service Statistics
Ave global lock get time (ms): 0.8
Ave global lock convert time (ms): 0.0
Ratio of global lock gets vs global lock releases: 1.1
GCS and GES Messaging statistics
Ave message sent queue time (ms): 0.4
Ave message sent queue time on ksxp (ms): 2.7
Ave message received queue time (ms): 0.2
Ave GCS message process time (ms): 0.1
Ave GES message process time (ms): 0.1
% of direct sent messages: 19.4
% of indirect sent messages: 43.5
% of flow controlled messages: 37.1
GES Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Statistic Total per Second per Trans
dynamically allocated gcs resourc 0 0.0 0.0
dynamically allocated gcs shadows 0 0.0 0.0
flow control messages received 0 0.0 0.0
flow control messages sent 0 0.0 0.0
gcs ast xid 0 0.0 0.0
gcs blocked converts 1,231 0.6 2.2
gcs blocked cr converts 2,432 1.2 4.4
gcs compatible basts 0 0.0 0.0
gcs compatible cr basts (global) 658 0.3 1.2
gcs compatible cr basts (local) 57,822 27.9 105.3
gcs cr basts to PIs 0 0.0 0.0
gcs cr serve without current lock 0 0.0 0.0
gcs error msgs 0 0.0 0.0
gcs flush pi msgs 821 0.4 1.5
gcs forward cr to pinged instance 0 0.0 0.0
gcs immediate (compatible) conver 448 0.2 0.8
gcs immediate (null) converts 1,114 0.5 2.0
gcs immediate cr (compatible) con 42,094 20.3 76.7
gcs immediate cr (null) converts 396,284 190.9 721.8
gcs msgs process time(ms) 42,220 20.3 76.9
gcs msgs received 545,133 262.6 993.0
gcs out-of-order msgs 5 0.0 0.0
gcs pings refused 1 0.0 0.0
gcs queued converts 0 0.0 0.0
gcs recovery claim msgs 0 0.0 0.0
gcs refuse xid 0 0.0 0.0
gcs retry convert request 0 0.0 0.0
gcs side channel msgs actual 2,397 1.2 4.4
gcs side channel msgs logical 232,024 111.8 422.6
gcs write notification msgs 15 0.0 0.0
gcs write request msgs 278 0.1 0.5
gcs writes refused 1 0.0 0.0
ges msgs process time(ms) 4,873 2.3 8.9
ges msgs received 39,769 19.2 72.4
global posts dropped 0 0.0 0.0
global posts queue time 0 0.0 0.0
global posts queued 0 0.0 0.0
global posts requested 0 0.0 0.0
global posts sent 0 0.0 0.0
implicit batch messages received 39,098 18.8 71.2
implicit batch messages sent 33,386 16.1 60.8
lmd msg send time(ms) 635 0.3 1.2
lms(s) msg send time(ms) 2 0.0 0.0
messages flow controlled 196,546 94.7 358.0
messages received actual 182,783 88.0 332.9
messages received logical 584,848 281.7 1,065.3
messages sent directly 102,657 49.4 187.0
messages sent indirectly 230,329 110.9 419.5
msgs causing lmd to send msgs 9,169 4.4 16.7
msgs causing lms(s) to send msgs 3,347 1.6 6.1
msgs received queue time (ms) 142,759 68.8 260.0
msgs received queued 584,818 281.7 1,065.2
msgs sent queue time (ms) 99,300 47.8 180.9
msgs sent queue time on ksxp (ms) 608,239 293.0 1,107.9
msgs sent queued 230,391 111.0 419.7
msgs sent queued on ksxp 229,013 110.3 417.1
GES Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Statistic Total per Second per Trans
process batch messages received 65,059 31.3 118.5
process batch messages sent 50,959 24.5 92.8
Wait Events for DB: ai Instance: ai11 Snaps: 175 -176
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 60,550 0 300 5 110.3
global cache cr request 130,852 106 177 1 238.3
db file scattered read 72,915 0 101 1 132.8
db file parallel read 3,384 0 75 22 6.2
latch free 7,253 1,587 52 7 13.2
enqueue 44,947 0 16 0 81.9
log file parallel write 2,140 0 6 3 3.9
db file parallel write 341 0 5 14 0.6
global cache open x 1,134 3 4 4 2.1
CGS wait for IPC msg 166,993 164,390 4 0 304.2
library cache lock 3,169 0 3 1 5.8
log file sync 494 0 3 5 0.9
row cache lock 702 0 3 4 1.3
DFS lock handle 6,900 0 2 0 12.6
control file parallel write 689 0 2 3 1.3
control file sequential read 2,785 0 2 1 5.1
wait for master scn 687 0 2 2 1.3
global cache null to x 699 0 2 2 1.3
global cache s to x 778 5 1 2 1.4
direct path write 148 0 0 3 0.3
SQL*Net more data to client 3,621 0 0 0 6.6
global cache open s 149 0 0 2 0.3
library cache pin 78 0 0 2 0.1
ksxr poll remote instances 3,536 2,422 0 0 6.4
LGWR wait for redo copy 12 6 0 9 0.0
buffer busy waits 23 0 0 5 0.0
direct path read 9 0 0 10 0.0
buffer busy global CR 5 0 0 17 0.0
SQL*Net break/reset to clien 172 0 0 0 0.3
global cache quiesce wait 4 1 0 7 0.0
KJC: Wait for msg sends to c 86 0 0 0 0.2
BFILE get length 67 0 0 0 0.1
global cache null to s 9 0 0 1 0.0
BFILE open 6 0 0 0 0.0
BFILE read 60 0 0 0 0.1
BFILE internal seek 60 0 0 0 0.1
BFILE closure 6 0 0 0 0.0
cr request retry 4 4 0 0 0.0
SQL*Net message from client 201,907 0 180,583 894 367.8
gcs remote message 171,236 43,749 3,975 23 311.9
ges remote message 79,324 40,722 2,017 25 144.5
SQL*Net more data from clien 447 0 9 21 0.8
SQL*Net message to client 201,901 0 0 0 367.8
Background Wait Events for DB: ai Instance: ai11 Snaps: 175 -176
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 44,666 0 16 0 81.4
latch free 4,895 108 6 1 8.9
log file parallel write 2,140 0 6 3 3.9
db file parallel write 341 0 5 14 0.6
CGS wait for IPC msg 166,969 164,366 4 0 304.1
DFS lock handle 6,900 0 2 0 12.6
control file parallel write 689 0 2 3 1.3
control file sequential read 2,733 0 2 1 5.0
wait for master scn 687 0 2 2 1.3
ksxr poll remote instances 3,528 2,419 0 0 6.4
LGWR wait for redo copy 12 6 0 9 0.0
db file sequential read 7 0 0 9 0.0
global cache null to x 26 0 0 2 0.0
global cache open x 16 0 0 1 0.0
global cache cr request 1 0 0 0 0.0
rdbms ipc message 28,636 20,600 16,937 591 52.2
gcs remote message 171,254 43,746 3,974 23 311.9
pmon timer 708 686 2,022 2856 1.3
ges remote message 79,305 40,719 2,017 25 144.5
smon timer 125 0 1,972 15776 0.2
SQL ordered by Gets for DB: ai Instance: ai11 Snaps: 175 -176
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
17,503,305 84 208,372.7 51.1 676.25 1218.80 17574787
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COU
RSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT_
FEE_COL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND SFCD_FE
E_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2',
'CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNA
user00726 wrote:
some of the changes that have been done....
cai11_lmd0_15771.trc.
Thu Oct 2 13:35:48 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 2512 MB, Target size = 3072 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 13:35:50 2008
ALTER SYSTEM SET db_cache_size='3072M' SCOPE=BOTH SID='icai11';
Thu Oct 2 14:04:34 2008
ALTER SYSTEM SET sga_max_size='5244772679680' SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_max_size='5244772679680' SCOPE=SPFILE SID='*';
Thu Oct 2 15:24:14 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 3072 MB, Target size = 2512 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 15:24:20 2008
ALTER SYSTEM SET db_cache_size='2512M' SCOPE=BOTH SID='icai11';
Thu Oct 2 15:32:33 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 2512 MB, Target size = 3072 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 15:32:34 2008
ALTER SYSTEM SET db_cache_size='3072M' SCOPE=BOTH SID='icai11';
Thu Oct 2 15:36:46 2008
ALTER SYSTEM SET shared_pool_size='640M' SCOPE=BOTH SID='icai11';
Thu Oct 2 16:33:52 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 3072 MB, Target size = 2512 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 16:33:56 2008
ALTER SYSTEM SET db_cache_size='2512M' SCOPE=BOTH SID='icai11';
Thu Oct 2 16:39:30 2008
ALTER SYSTEM SET pga_aggregate_target='750M' SCOPE=BOTH SID='icai11';
Just to make certain that I am not missing anything, the above sets (all scaled to GB just for the sake of comparison):
sga_max_size=4885GB
db_cache_size=3GB
shared_pool_size=0.625GB
pga_aggregate_target=0.733GB
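The sga_max_size figure from the alert log can be sanity-checked with a trivial query against DUAL:

```sql
-- Convert the sga_max_size value from the alert log: bytes -> GB.
SELECT ROUND(5244772679680 / 1024 / 1024 / 1024, 1) AS sga_max_gb
  FROM dual;
-- result: SGA_MAX_GB = 4884.6
```

Nearly 5 TB of SGA on a host with a 3 GB buffer cache is almost certainly a mis-typed value (three orders of magnitude too large), and since it was set with SCOPE=SPFILE it will only take effect, or fail, at the next instance restart.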
The SQL statement forces the use of a specific index - is this the best index?
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COURSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT_FEE_COL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND
SFCD_FEE_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' ) AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL AND SFCH_NO = :b2 AND NOT EXISTS (SELECT 'X' FROM OM_STUDENT_HEAD WHERE STUD_SRN = SFCH_STUD_SRN_TEMP_NO AND NVL(STUD_CLO_STATUS,0) != 1 AND
NVL(STUD_REG_STATUS,0) != 23 AND STUD_COURSE_CODE != 'CCT' AND CF_COURSE_CODE= STUD_COURSE_CODE ) ORDER BY 1 DESC
Unfortunately, explain plans, even with dbms_xplan.display, may not show the actual execution plan - this is more of a problem when the SQL statement includes bind variables (capturing a 10046 trace at level 8 or 12 will help). With the information provided, it looks like the problem is with the number of logical reads performed: 17,503,305 in 84 executions = 208,373 logical reads per execution. Something is causing the SQL statement to execute inefficiently, possibly a join problem between tables, possibly the forced use of an index.
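The 10046 trace mentioned above can be enabled at the session level; a sketch for 9i (the tracefile_identifier tag is just a convenience for finding the file):

```sql
-- Enable extended SQL trace: level 8 = waits, level 12 = waits + binds.
ALTER SESSION SET tracefile_identifier = 'course_fee_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... execute the problem SELECT here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
-- Then process the raw trace file from user_dump_dest with tkprof, e.g.:
--   tkprof <tracefile> out.txt sys=no sort=prsela,exeela,fchela
```

Unlike the Statspack summary, the trace shows the actual execution plan, per-call statistics, and the bind values that drove the optimizer's choices.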
From one of my previous posts related to this same SQL statement:
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */ DISTINCT
CF_COURSE_CODE
FROM
OM_COURSE_FEE,
OT_STUDENT_FEE_COL_HEAD,
OT_STUDENT_FEE_COL_DETL
WHERE
SFCH_SYS_ID = SFCD_SFCH_SYS_ID
AND SFCD_FEE_TYPE_CODE = CF_TYPE_CODE
AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' )
AND SFCH_TXN_CODE = :b1
AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' )
AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL
AND SFCH_NO = :b2
AND NOT EXISTS (
SELECT
'X'
FROM
OM_STUDENT_HEAD
WHERE
STUD_SRN = SFCH_STUD_SRN_TEMP_NO
AND NVL(STUD_CLO_STATUS,0) != 1
AND NVL(STUD_REG_STATUS,0) != 23
AND STUD_COURSE_CODE != 'CCT'
AND CF_COURSE_CODE = STUD_COURSE_CODE)
ORDER BY
1 DESC
| Id | Operation                               | Name                        | Rows  | Bytes | Cost | Pstart| Pstop |
|  0 | SELECT STATEMENT                        |                             |     1 |    69 |   34 |       |       |
|  1 |  SORT UNIQUE                            |                             |     1 |    69 |   22 |       |       |
|  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID    | OT_STUDENT_FEE_COL_DETL     |     1 |    12 |    2 | ROWID | ROW L |
|  3 |    NESTED LOOPS                         |                             |     1 |    69 |    9 |       |       |
|  4 |     NESTED LOOPS                        |                             |     1 |    57 |    7 |       |       |
|  5 |      TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_HEAD     |     1 |    48 |    5 | ROWID | ROW L |
|  6 |       INDEX SKIP SCAN                   | OT_STUDENT_FEE_COL_HEAD_UK0 |     1 |     1 |    4 |       |       |
|  7 |      INLIST ITERATOR                    |                             |       |       |      |       |       |
|  8 |       TABLE ACCESS BY INDEX ROWID       | OM_COURSE_FEE               |     1 |     9 |    2 |       |       |
|  9 |        INDEX RANGE SCAN                 | OCF_CFC_CODE                |     1 |       |    1 |       |       |
| 10 |   FILTER                                |                             |       |       |      |       |       |
| 11 |    TABLE ACCESS BY GLOBAL INDEX ROWID   | OM_STUDENT_HEAD             |     1 |    21 |    4 | ROWID | ROW L |
| 12 |     INDEX RANGE SCAN                    | IDM_STUD_SRN_COURSE         |     1 |       |    3 |       |       |
| 13 | INDEX RANGE SCAN | IDM_SFCD_FEE_TYPE_CODE | 34600 | | 1 | | |
It appears, based just on the SQL statement, that there is no direct relationship between OM_COURSE_FEE and OT_STUDENT_FEE_COL_HEAD, yet the plan above indicates that the two tables are being joined together, likely as a result of the index hint. There is the possibility of additional predicates being generated by Oracle which will make this possible without introducing a Cartesian join, but those predicates are not displayed with an explain plan (they will appear with a DBMS_XPLAN). I may not be remembering correctly, but if the optimizer goal is set to first rows, a Cartesian join might appear in a plan as a nested loops join - Jonathan will know for certain if that is the case.
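The generated predicates mentioned above show up in the Predicate Information section that DBMS_XPLAN prints (available from 9.2 onward). A sketch; the statement is abbreviated here, so run it with the full text:

```sql
-- Explain the statement, then display the plan with predicates.
-- EXPLAIN PLAN accepts bind placeholders (their types are unknown).
EXPLAIN PLAN FOR
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */ DISTINCT CF_COURSE_CODE
  FROM OM_COURSE_FEE, OT_STUDENT_FEE_COL_HEAD, OT_STUDENT_FEE_COL_DETL
 WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID
   AND SFCD_FEE_TYPE_CODE = CF_TYPE_CODE
   AND SFCH_TXN_CODE = :b1;        -- abbreviated for the sketch

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The access and filter predicates listed under each plan line will show whether Oracle is transitively generating the join condition between the two seemingly unrelated tables.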
Have you investigated the above as a possible cause? If you know that there is a problem with this one SQL statement, a 10046 trace at level 8 or 12 is much more helpful than a Statspack report.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Sql Server 2012 Integration Services Catalog views - cpu usage history
Hi,
I am new to SQL Server 2012. I have deployed and executed a SSIS package on the Integration Services catalog.
Now, to analyze the performance of the executed package I wish to query the SSISDB catalog views to retrieve the cpu & memory usage history. Please let me know in which catalog view/table I can find this info.
There is a column named "Process_Id" in the "catalog.operations" table. Can we tie this id to the Sql Server pid/kid and then retrieve the cpu
usage history ?
** I am using Sql Server 2012 **
Thanks

Hi All,
Thanks all for your inputs. One final question -
As I mentioned earlier, the SQL Server 2012 catalog views DO NOT capture performance metrics (e.g. CPU usage history, memory usage history) for an executing SSIS package. However, I am aware that this exact information can be retrieved for a SQL Server internally generated SPID (present in [sys].[dm_exec_sessions]).
The approach is therefore to find a way to join/relate the catalog-provided "process_id" to the SQL Server internal SPIDs of the session. Joining these two ids together would let me combine the information available from both sides - the catalog views plus the internal session SPIDs - and thus retrieve the resource usage of the executed SSIS package.
**Please also let me know if the above could be achieved by any other approach.
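One hedged sketch of that join, assuming the package runs under ISServerExec.exe so that catalog.operations.process_id matches the client PID reported in sys.dm_exec_sessions.host_process_id (the @operation_id variable is illustrative):

```sql
-- Relate an SSISDB catalog operation to its live server sessions
-- via the OS process id of ISServerExec.exe.
SELECT o.operation_id,
       o.object_name,
       s.session_id,
       s.cpu_time,          -- CPU ms used by the session so far
       s.memory_usage       -- in 8 KB pages
  FROM SSISDB.catalog.operations AS o
  JOIN sys.dm_exec_sessions      AS s
    ON s.host_process_id = o.process_id
 WHERE o.operation_id = @operation_id;
```

Note the limitation: sys.dm_exec_sessions is a point-in-time view, so this only works while the package is still executing; to get a usage *history* you would have to poll and persist these rows yourself (or capture them with a scheduled job) during the run.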
Any help is appreciated.
Thanks