Slow Query

Hi Experts,
A query on our production box is running slowly (it takes almost 2 minutes to complete). Below is the query:
SELECT a.stage, a.instnum, a.reason1, b.stage AS cur_stage
FROM lot_tbl a, part_tbl b
WHERE a.lid = b.lid
AND (b.partid LIKE 'PH3253%'
OR b.partid LIKE 'PH3561%'
OR b.partid LIKE 'PH3631%'
OR b.partid LIKE 'PH3711%'
OR b.partid LIKE 'PH3721%')
AND b.lottype = 'PR'
AND a.triggertype = 'P'
AND a.action = 'H'
AND a.instnum >= b.curprcdcurinstnum
AND b.stageorder < '7970'
AND a.recflag <> 'D'
ORDER BY stage, reason1;

Explain plan shows nested loops taking the bulk of the time, as shown in this excerpt:

|  24 |     NESTED LOOPS                |                   | 11416 |  1036K|  6059   (1)| 00:01:13 |
|* 25 |      TABLE ACCESS BY INDEX ROWID| MES_TBL_ACTL      |   314 | 19782 |   168   (0)| 00:00:03 |
|* 26 |       INDEX RANGE SCAN          | MES_IDX_ACTL_PART |  1020 |       |     4   (0)| 00:00:01 |
|* 27 |      INDEX RANGE SCAN           | MES_IDX_FUTA_LOT  |    88 |       |     1   (0)| 00:00:01 |

When we analyzed this query using sqltrpt.sql (we have a license for it), there was a SQL profile recommendation with a slightly different explain plan (part of which is shown below):
|  24 |     NESTED LOOPS                |                    |   164 | 15252 |   188   (0)| 00:00:03 |
|* 25 |      TABLE ACCESS BY INDEX ROWID| MES_TBL_ACTL       |   314 | 19782 |    31   (0)| 00:00:01 |
|* 26 |       INDEX SKIP SCAN           | MES_IDX_ACTL_CLASS |     1 |       |    31   (0)| 00:00:01 |
|* 27 |      INDEX RANGE SCAN           | MES_IDX_FUTA_LOT   |     1 |       |     1   (0)| 00:00:01 |

Further details:
select num_rows, last_analyzed from dba_tables where table_name = 'lot_tbl';

  NUM_ROWS LAST_ANALYZ
   6850820 10-NOV-2011

select num_rows, last_analyzed from dba_tables where table_name = 'part_tbl';

  NUM_ROWS LAST_ANALYZ
    265360 10-NOV-2011
    685260 18-OCT-2011

The stats for the schemas are collected every week and there are not many changes to the above objects.
Can you please help in tuning the above query? I have tried passing various hints, but nothing seems to work.
Another observation: the optimizer estimates 26 output rows in the explain plan, whereas around 5,000 rows are fetched in reality. I can post the full explain plans if desired.
Thanks and Regards,
Rajesh K.

Hi Dom,
Please find the explain plan of the query run with the gather_plan_statistics hint:
| Id  | Operation                       | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   1 |  SORT ORDER BY                  |                   |      1 |     26 |   5145 |00:01:40.35 |   25453 |  14770 |   619K|   472K|  550K (0)|
|   2 |   CONCATENATION                 |                   |      1 |        |   5145 |00:00:58.86 |   25453 |  14770 |       |       |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL           |      1 |      1 |    491 |00:00:39.98 |    2529 |   1330 |       |       |          |
|   4 |     NESTED LOOPS                |                   |      1 |      2 |   2609 |00:00:02.30 |     922 |     62 |       |       |          |
|*  5 |      TABLE ACCESS BY INDEX ROWID| PART_TBL          |      1 |     34 |    262 |00:00:00.01 |     386 |      0 |       |       |          |
|*  6 |       INDEX RANGE SCAN          | PART_IDX_PART     |      1 |    111 |    448 |00:00:00.01 |       7 |      0 |       |       |          |
|*  7 |      INDEX RANGE SCAN           | LOT_IDX_LOT       |    262 |     88 |   2346 |00:00:00.60 |     536 |     62 |       |       |          |
|*  8 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL           |      1 |      1 |    592 |00:00:01.61 |    1912 |   1087 |       |       |          |
|   9 |     NESTED LOOPS                |                   |      1 |      2 |   2566 |00:00:00.61 |     549 |     29 |       |       |          |
|* 10 |      TABLE ACCESS BY INDEX ROWID| PART_TBL          |      1 |     34 |    169 |00:00:00.01 |     197 |      0 |       |       |          |
|* 11 |       INDEX RANGE SCAN          | PART_IDX_PART     |      1 |    111 |    246 |00:00:00.01 |       5 |      0 |       |       |          |
|* 12 |      INDEX RANGE SCAN           | LOT_IDX_LOT       |    169 |     88 |   2396 |00:00:00.21 |     352 |     29 |       |       |          |
|* 13 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL           |      1 |      1 |   2573 |00:00:06.50 |   15411 |   9451 |       |       |          |
|  14 |     NESTED LOOPS                |                   |      1 |      2 |  23226 |00:00:05.62 |    3533 |    215 |       |       |          |
|* 15 |      TABLE ACCESS BY INDEX ROWID| PART_TBL          |      1 |     34 |   1114 |00:00:00.01 |    1213 |      0 |       |       |          |
|* 16 |       INDEX RANGE SCAN          | PART_IDX_PART     |      1 |    111 |   1549 |00:00:00.01 |      18 |      0 |       |       |          |
|* 17 |      INDEX RANGE SCAN           | LOT_IDX_LOT       |   1114 |     88 |  22111 |00:00:01.69 |    2320 |    215 |       |       |          |
|* 18 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL           |      1 |      1 |    526 |00:00:00.65 |    1503 |    844 |       |       |          |
|  19 |     NESTED LOOPS                |                   |      1 |      2 |   2458 |00:00:00.60 |     428 |     21 |       |       |          |
|* 20 |      TABLE ACCESS BY INDEX ROWID| PART_TBL          |      1 |     34 |    113 |00:00:00.01 |     191 |      0 |       |       |          |
|* 21 |       INDEX RANGE SCAN          | PART_IDX_PART     |      1 |    111 |    226 |00:00:00.01 |       5 |      0 |       |       |          |
|* 22 |      INDEX RANGE SCAN           | LOT_IDX_LOT       |    113 |     88 |   2344 |00:00:00.19 |     237 |     21 |       |       |          |
|* 23 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL           |      1 |      1 |    963 |00:00:05.06 |    4098 |   2058 |       |       |          |
|  24 |     NESTED LOOPS                |                   |      1 |     18 |   5028 |00:00:03.01 |    1931 |     75 |       |       |          |
|* 25 |      TABLE ACCESS BY INDEX ROWID| PART_TBL          |      1 |    314 |    495 |00:00:00.04 |     911 |      0 |       |       |          |
|* 26 |       INDEX RANGE SCAN          | PART_IDX_PART     |      1 |   1020 |   1088 |00:00:00.01 |      15 |      0 |       |       |          |
|* 27 |      INDEX RANGE SCAN           | LOT_IDX_LOT       |    495 |     88 |   4532 |00:00:00.57 |    1020 |     75 |       |       |          |
Predicate Information (identified by operation id):

   3 - filter(("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM"))
   5 - filter(("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970'))
   6 - access("B"."PARTID" LIKE 'PH3721%')
       filter("B"."PARTID" LIKE 'PH3721%')
   7 - access("A"."LOTID"="B"."LOTID")
   8 - filter(("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM"))
  10 - filter(("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970'))
  11 - access("B"."PARTID" LIKE 'PH3711%')
       filter(("B"."PARTID" LIKE 'PH3711%' AND LNNVL("B"."PARTID" LIKE 'PH3721%')))
  12 - access("A"."LOTID"="B"."LOTID")
  13 - filter(("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM"))
  15 - filter(("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970'))
  16 - access("B"."PARTID" LIKE 'PH3631%')
       filter(("B"."PARTID" LIKE 'PH3631%' AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3721%')))
  17 - access("A"."LOTID"="B"."LOTID")
  18 - filter(("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM"))
  20 - filter(("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970'))
  21 - access("B"."PARTID" LIKE 'PH3561%')
       filter(("B"."PARTID" LIKE 'PH3561%' AND LNNVL("B"."PARTID" LIKE 'PH3631%') AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3721%')))
  22 - access("A"."LOTID"="B"."LOTID")
  23 - filter(("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM"))
  25 - filter(("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970'))
  26 - access("B"."PARTID" LIKE 'PH3253%')
       filter(("B"."PARTID" LIKE 'PH3253%' AND LNNVL("B"."PARTID" LIKE 'PH3561%') AND LNNVL("B"."PARTID" LIKE 'PH3631%') AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3721%')))
  27 - access("A"."LOTID"="B"."LOTID")

Running the SQL Tuning Advisor (sqltrpt.sql) on this query gives a SQL profile recommendation whose plan shows a run time of 12 seconds:
Explain plan of the recommendation:
2- Using SQL Profile
Plan hash value: 1628846902
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                |                |  2106 |   191K|   976   (1)| 00:00:12 |
|   1 |  SORT ORDER BY                  |                |  2106 |   191K|   976   (1)| 00:00:12 |
|   2 |   CONCATENATION                 |                |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL        |     1 |    30 |     1   (0)| 00:00:01 |
|   4 |     NESTED LOOPS                |                |  1336 |   121K|   149   (0)| 00:00:02 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| PART_TBL       |   260 | 16380 |    19   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN          | PART_IDX_PART  |   111 |       |     1   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN           | LOT_IDX_LOT    |     1 |       |     1   (0)| 00:00:01 |
|*  8 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL        |     1 |    30 |     1   (0)| 00:00:01 |
|   9 |     NESTED LOOPS                |                |    68 |  6324 |    84   (0)| 00:00:02 |
|* 10 |      TABLE ACCESS BY INDEX ROWID| PART_TBL       |   130 |  8190 |    19   (0)| 00:00:01 |
|* 11 |       INDEX RANGE SCAN          | PART_IDX_PART  |   111 |       |     1   (0)| 00:00:01 |
|* 12 |      INDEX RANGE SCAN           | LOT_IDX_LOT    |     1 |       |     1   (0)| 00:00:01 |
|* 13 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL        |     1 |    30 |     1   (0)| 00:00:01 |
|  14 |     NESTED LOOPS                |                |   520 | 48360 |   518   (0)| 00:00:07 |
|* 15 |      TABLE ACCESS BY INDEX ROWID| PART_TBL       |   998 | 62874 |    19   (0)| 00:00:01 |
|* 16 |       INDEX RANGE SCAN          | PART_IDX_PART  |   111 |       |     1   (0)| 00:00:01 |
|* 17 |      INDEX RANGE SCAN           | LOT_IDX_LOT    |     1 |       |     1   (0)| 00:00:01 |
|* 18 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL        |     1 |    30 |     1   (0)| 00:00:01 |
|  19 |     NESTED LOOPS                |                |    18 |  1674 |    36   (0)| 00:00:01 |
|* 20 |      TABLE ACCESS BY INDEX ROWID| PART_TBL       |    34 |  2142 |    19   (0)| 00:00:01 |
|* 21 |       INDEX RANGE SCAN          | PART_IDX_PART  |   111 |       |     1   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN           | LOT_IDX_LOT    |     1 |       |     1   (0)| 00:00:01 |
|* 23 |    TABLE ACCESS BY INDEX ROWID  | LOT_TBL        |     1 |    30 |     1   (0)| 00:00:01 |
|  24 |     NESTED LOOPS                |                |   164 | 15252 |   188   (0)| 00:00:03 |
|* 25 |      TABLE ACCESS BY INDEX ROWID| PART_TBL       |   314 | 19782 |    31   (0)| 00:00:01 |
|* 26 |       INDEX SKIP SCAN           | PART_IDX_CLASS |     1 |       |    31   (0)| 00:00:01 |
|* 27 |      INDEX RANGE SCAN           | LOT_IDX_LOT    |     1 |       |     1   (0)| 00:00:01 |
Predicate Information (identified by operation id):

   3 - filter("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM")
   5 - filter("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970')
   6 - access("B"."PARTID" LIKE 'PH3721%')
       filter("B"."PARTID" LIKE 'PH3721%')
   7 - access("A"."LOTID"="B"."LOTID")
   8 - filter("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM")
  10 - filter("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970')
  11 - access("B"."PARTID" LIKE 'PH3711%')
       filter("B"."PARTID" LIKE 'PH3711%' AND LNNVL("B"."PARTID" LIKE 'PH3721%'))
  12 - access("A"."LOTID"="B"."LOTID")
  13 - filter("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM")
  15 - filter("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970')
  16 - access("B"."PARTID" LIKE 'PH3631%')
       filter("B"."PARTID" LIKE 'PH3631%' AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3721%'))
  17 - access("A"."LOTID"="B"."LOTID")
  18 - filter("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM")
  20 - filter("B"."LOTTYPE"='PR' AND "B"."STAGEORDER"<'7970')
  21 - access("B"."PARTID" LIKE 'PH3561%')
       filter("B"."PARTID" LIKE 'PH3561%' AND LNNVL("B"."PARTID" LIKE 'PH3631%') AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3721%'))
  22 - access("A"."LOTID"="B"."LOTID")
  23 - filter("A"."RECFLAG"<>'D' AND "A"."ACTION"='H' AND "A"."TRIGGERTYPE"='P' AND "A"."INSTNUM">="B"."CURPRCDCURINSTNUM")
  25 - filter("B"."PARTID" LIKE 'PH3253%' AND "B"."STAGEORDER"<'7970' AND LNNVL("B"."PARTID" LIKE 'PH3721%') AND LNNVL("B"."PARTID" LIKE 'PH3711%') AND LNNVL("B"."PARTID" LIKE 'PH3631%') AND LNNVL("B"."PARTID" LIKE 'PH3561%'))
  26 - access("B"."LOTTYPE"='PR')
       filter("B"."LOTTYPE"='PR')
  27 - access("A"."LOTID"="B"."LOTID")

Notice that an index skip scan of PART_IDX_CLASS (a composite index on COMCLASS and LOTTYPE) is used, which reduces the nested loops expense. But when I force a skip scan of that index with a hint, the optimizer does not consider the index and produces the same execution plan as before.
Thanks and Regards,
Rajesh K.

Similar Messages

  • SharePoint 2010 Slow query duration when setting metadata on folder

    I'm getting "Slow Query Duration" when I programmatically set a default value for a default field to apply to documents at a specified location on a SP 2010 library.
    It has nothing to do with performance most probably as I'm getting this working with a folder within a library with only a 1 document on a UAT environment. Front-end: AMD Opteron 6174 2.20GHz x 2 + 8gb RAM, Back-end: AMD Opteron 6174 2.20GHz x 2 + 16gb
    RAM.
    The specific line of code causing this is:
    folderMetadata.SetFieldDefault(createdFolder, fieldData.Field.InternalName, thisFieldTextValue);
    What SP says:
    02/17/2014 16:29:03.24 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa42 Monitorable A large block of literal text was sent to sql. This can result in blocking in sql and excessive memory use on the front end. Verify that no binary parameters are
    being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this by reducing the number of fields.
    02/17/2014 16:29:03.24 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa43 High Slow Query Duration: 254.705556153086
    02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High Slow Query StackTrace-Managed: at Microsoft.SharePoint.Utilities.SqlSession.OnPostExecuteCommand(SqlCommand command, SqlQueryData monitoringData) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand
    command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock) at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock) at Microsoft.SharePoint.Library.SPRequestInternalClass.PutFile(String
    bstrUrl, String bstrWebRelativeUrl, Object punkFile, Int32 cbFile, Object punkFFM, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Obje...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...ct varProperties, String bstrCheckinComment, Byte partitionToCheck, Int64 fragmentIdToCheck, String bstrCsvPartitionsToDelete, String bstrLockIdMatch, String bstEtagToMatch,
    Int32 lockType, String lockId, Int32 minutes, Int32 fRefreshLock, Int32 bValidateReqFields, Guid gNewDocId, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage, String& pEtagReturn, Byte& piLevel, Int32& pbIgnoredReqProps) at Microsoft.SharePoint.Library.SPRequest.PutFile(String
    bstrUrl, String bstrWebRelativeUrl, Object punkFile, Int32 cbFile, Object punkFFM, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Object varProperties,
    String bstrCheckinComment, Byte partitionToCheck, Int64 fragmentIdToCheck...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ..., String bstrCsvPartitionsToDelete, String bstrLockIdMatch, String bstEtagToMatch, Int32 lockType, String lockId, Int32 minutes, Int32 fRefreshLock, Int32 bValidateReqFields,
    Guid gNewDocId, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage, String& pEtagReturn, Byte& piLevel, Int32& pbIgnoredReqProps) at Microsoft.SharePoint.SPFile.SaveBinaryStreamInternal(Stream file, String checkInComment, Boolean checkRequiredFields,
    Boolean autoCheckoutOnInvalidData, Boolean bIsMigrate, Boolean bIsPublish, Boolean bForceCreateVersion, String lockIdMatch, SPUser modifiedBy, DateTime timeLastModified, Object varProperties, SPFileFragmentPartition partitionToCheck, SPFileFragmentId fragmentIdToCheck,
    SPFileFragmentPartition[] partitionsToDelete, Stream formatMetadata, String etagToMatch, Boolea...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...n bSyncUpdate, SPLockType lockType, String lockId, TimeSpan lockTimeout, Boolean refreshLock, Boolean requireWebFilePermissions, Boolean failIfRequiredCheckout, Boolean
    validateReqFields, Guid newDocId, SPVirusCheckStatus& virusCheckStatus, String& virusCheckMessage, String& etagReturn, Boolean& ignoredRequiredProps) at Microsoft.SharePoint.SPFile.SaveBinary(Stream file, Boolean checkRequiredFields, Boolean
    createVersion, String etagMatch, String lockIdMatch, Stream fileFormatMetaInfo, Boolean requireWebFilePermissions, String& etagNew) at Microsoft.SharePoint.SPFile.SaveBinary(Byte[] file) at Microsoft.Office.DocumentManagement.MetadataDefaults.Update()
    at TWINSWCFAPI.LibraryManager.CreatePathFromFolderCollection(String fullPathUrl, SPListItem item, SPWeb web, Dictionary2...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ... folderToCreate, Boolean setDefaultValues, Boolean mainFolder) at TWINSWCFAPI.LibraryManager.CreatePathFromFolderCollection(String fullPathUrl, List1 resultDataList,
    SPListItem item, SPWeb web, Boolean setDefaultValues, Boolean mainFolder) at TWINSWCFAPI.LibraryManager.CreateExtraFolders(List1
    pathResultDataList, List1 resultDataList, String fullPathUrl, SPWeb web, SPListItem item, Boolean setDefaultValues) at TWINSWCFAPI.LibraryManager.CreateFolders(SPWeb web, List1
    pathResultDataList, SPListItem item, String path, Boolean setDefaultValues) at TWINSWCFAPI.LibraryManager.MoveFileAfterMetaChange(SPListItem item) at TWINSWCFAPI.DocMetadataChangeEventReceiver.DocMetadataChangeEventReceiver.FileDocument(SPWeb web, SPListItem
    listItem) at TWINSWCFAPI.DocMetadataChang...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...eEventReceiver.DocMetadataChangeEventReceiver.ItemCheckedIn(SPItemEventProperties properties) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiver(SPItemEventReceiver
    receiver, SPUserCodeInfo userCodeInfo, SPItemEventProperties properties, SPEventContext context, String receiverData) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiverHelper(Object receiver, SPUserCodeInfo userCodeInfo, Object properties, SPEventContext
    context, String receiverData) at Microsoft.SharePoint.SPEventManager.<>c__DisplayClassc1.b__6() at Microsoft.SharePoint.SPSecurity.RunAsUser(SPUserToken userToken, Boolean bResetContext, WaitCallback code, Object param) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](SPUserToken
    userToken, Gu...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...id tranLockerId, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](Byte[]
    userTokenBytes, Guid tranLockerId, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.HandleEventCallback[ReceiverType,PropertiesType](Object callbackData) at Microsoft.SharePoint.Utilities.SPThreadPool.WaitCallbackWrapper(Object
    state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext
    execu...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database fa44 High ...tionContext, ContextCallback callback, Object state) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack)
    at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state)
    02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzku High ConnectionString: 'Data Source=PFC-SQLUAT-202;Initial Catalog=TWINSDMS_LondonDivision_Content;Integrated Security=True;Enlist=False;Asynchronous Processing=False;Connect
    Timeout=15' ConnectionState: Open ConnectionTimeout: 15
    02/17/2014 16:29:03.26 w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High SqlCommand: 'DECLARE @@iRet int;BEGIN TRAN EXEC @@iRet = proc_WriteChunkToAllDocStreams @wssp0, @wssp1, @wssp2, @wssp3, @wssp4, @wssp5, @wssp6;IF @@iRet <> 0 GOTO
    done; DECLARE @@S uniqueidentifier; DECLARE @@W uniqueidentifier; DECLARE @@DocId uniqueidentifier; DECLARE @@DoclibRowId int; DECLARE @@Level tinyint; DECLARE @@DocUIVersion int;DECLARE @@IsCurrentVersion bit; DECLARE @DN nvarchar(256); DECLARE @LN nvarchar(128);
    DECLARE @FU nvarchar(260); SET @DN=@wssp7;SET @@iRet=0; ;SET @LN=@wssp8;SET @FU=@wssp9;SET @@S=@wssp10;SET @@W=@wssp11;SET @@DocUIVersion = 512;IF @@iRet <> 0 GOTO done; ;SET @@Level =@wssp12; EXEC @@iRet = proc_UpdateDocument @@S, @@W, @DN, @LN, @wssp13,
    @wssp14, @wssp15, @wssp16, @wssp17, @wssp18, @wssp19, @wssp20, @wssp21, @wssp22, @wssp23, @wssp24, @wssp25, @wssp26,...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ... @wssp27, @wssp28, @wssp29, @wssp30, @wssp31, @wssp32, @wssp33, @wssp34, @wssp35, @wssp36, @wssp37, @wssp38, @wssp39, @wssp40, @wssp41, @wssp42, @wssp43, @wssp44, @wssp45,
    @wssp46, @wssp47, @wssp48, @wssp49, @wssp50, @wssp51, @@DocId OUTPUT, @@Level OUTPUT , @@DoclibRowId OUTPUT,@wssp52 OUTPUT,@wssp53 OUTPUT,@wssp54 OUTPUT,@wssp55 OUTPUT ; IF @@iRet <> 0 GOTO done; EXEC @@iRet = proc_TransferStream @@S, @@DocId, @@Level,
    @wssp56, @wssp57, @wssp58; IF @@iRet <> 0 GOTO done; EXEC proc_AL @@S,@DN,@LN,@@Level,0,N'London/Broking/Documents/E/E _ E Foods Corporation',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,1,N'London/Broking/Documents/E',N'E _ E Foods Corporation',72,85,83,1,N'';EXEC
    proc_AL @@S,@DN,@LN,@@Level,2,N'London/Broking/Documents/E/E _ E Foods Corporation',N'2013',72,85,...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ...83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,3,N'London/Broking/Documents/E/E _ E Foods Corporation/2013',N'QA11G029601',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,4,N'London/Broking/Documents/K',N'Konig
    _ Reeker',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,5,N'London/Broking/Documents/K/Konig _ Reeker',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,6,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'QA12E013201',72,85,83,1,N'';EXEC proc_AL
    @@S,@DN,@LN,@@Level,7,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'A12EL00790',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,8,N'London/Broking/Documents/K/Konig _ Reeker/2012',N'A12DA00720',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,9,N'London/Broking/Documents/K/Konig
    _ Reeker/2012',N'A12DC00800',72,85,83,1,N'';EXEC proc...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ..._AL @@S,@DN,@LN,@@Level,10,N'London/Broking/Documents/A',N'Ace European Group Limited',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,11,N'London/Broking/Documents/A/Ace
    European Group Limited',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,12,N'London/Broking/Documents/A/Ace European Group Limited/2012',N'JXB88435',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,13,N'London/Broking/Documents/A/Ace European Group
    Limited/2012/JXB88435/Closings',N'PRM 1',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,14,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'PRM 2',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,15,N'London/Broking/Documents/C',N'C
    Moore-Gordon',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,16,N'London/Broking/Documents/C/C Moore-Gordo...
    02/17/2014 16:29:03.26* w3wp.exe (0x10D0) 0x0DB0 SharePoint Foundation Database tzkv High ...n',N'2012',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,17,N'London/Broking/Documents/C/C Moore-Gordon/2012',N'QY13P700201',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,18,N'London/Broking/Documents/C/C
    Moore-Gordon',N'2013',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,19,N'London/Broking/Documents/C/C Moore-Gordon/2013',N'Y13PF07010',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,20,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'ARP
    7',72,85,83,1,N'';EXEC proc_AL @@S,@DN,@LN,@@Level,21,N'London/Broking/Documents/A/Ace European Group Limited/2012/JXB88435/Closings',N'ARP 8',72,85,83,1,N'';EXEC proc_AL . . .
    Thanks in advance A

Is SharePoint and SQL Server installed on the same server, or how is the setup?
I would start by enabling the developer dashboard and analyzing its report.
You will see whether any web part, page, or SQL Server query is taking too much time.
http://www.sharepoint-journey.com/developer-dashboard-in-sharepoint-2013.html
Please remember to mark your question as answered and vote helpful if this solves your problem. Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Very Slow Query with CTE inner join

    I have 2 tables (heavily simplified here to show relevant columns):
    CREATE TABLE tblCharge
    (ChargeID int NOT NULL,
    ParentChargeID int NULL,
    ChargeName varchar(200) NULL)
    CREATE TABLE tblChargeShare
    (ChargeShareID int NOT NULL,
    ChargeID int NOT NULL,
    TotalAmount money NOT NULL,
    TaxAmount money NULL,
    DiscountAmount money NULL,
    CustomerID int NOT NULL,
    ChargeShareStatusID int NOT NULL)
    I have a very basic View to Join them:
    CREATE VIEW vwBASEChargeShareRelation as
    Select c.ChargeID, ParentChargeID, s.CustomerID, s.TotalAmount, isnull(s.TaxAmount, 0) as TaxAmount, isnull(s.DiscountAmount, 0) as DiscountAmount
    from tblCharge c inner join tblChargeShare s
on c.ChargeID = s.ChargeID
Where s.ChargeShareStatusID < 3
    GO
    I then have a view containing a CTE to get the children of the Parent Charge:
ALTER VIEW [vwChargeShareSubCharges] AS
WITH RCTE AS
(
    SELECT ParentChargeId, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) AS TotalAmount, ISNULL(TaxAmount, 0) AS TaxAmount,
           ISNULL(DiscountAmount, 0) AS DiscountAmount, CustomerID, ChargeID AS MasterChargeID
    FROM vwBASEChargeShareRelation
    WHERE ParentChargeID IS NULL
    UNION ALL
    SELECT rh.ParentChargeID, rh.ChargeID, Lvl + 1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0), rh.CustomerID,
           rc.MasterChargeID
    FROM vwBASEChargeShareRelation rh
    INNER JOIN RCTE rc ON rh.ParentChargeID = rc.ChargeID AND rh.CustomerID = rc.CustomerID
)
SELECT MasterChargeID AS ChargeID, CustomerID, SUM(TotalAmount) AS TotalCharged, SUM(TaxAmount) AS TotalTax, SUM(DiscountAmount) AS TotalDiscount
FROM RCTE
GROUP BY MasterChargeID, CustomerID
GO
    So far so good, I can query this view and get the total cost for a line item including all children.
    The problem occurs when I join this table. The query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where s.ChargeID = 1291094
    Takes around 30 ms to return a result (tblCharge and Charge Share have around 3.5 million records).
    But the query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where InvoiceID = 1045854
    Takes around 2 minutes to return a result - even though the only charge with that InvoiceID is the same charge as the one used in the previous query.
    The same thing occurs if I do the join in the same query that the CTE is defined in.
    I ran the execution plan for each query. The first (fast) query looks like this:
    The second(slow) query looks like this:
    I am at a loss, and my skills at decoding execution plans to resolve this are lacking.
    I have separate indexes on tblCharge.ChargeID, tblCharge.ParentChargeID, tblChargeShare.ChargeID, tblChargeShare.InvoiceID, tblChargeShare.ChargeShareStatusID
    Any ideas? Tested on SQL 2008R2 and SQL 2012
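One workaround that may be worth trying (a sketch only; it assumes tblChargeShare really has the InvoiceID column implied by the index list above, and one charge per invoice): resolve the ChargeID and CustomerID for the invoice first, then filter the view on those values, which is the shape of the fast query.

-- Sketch, untested: look up the charge for the invoice first,
-- then query the recursive view the same way as the fast query.
DECLARE @ChargeID int, @CustomerID int;
SELECT @ChargeID = s.ChargeID, @CustomerID = s.CustomerID
FROM tblChargeShare s
WHERE s.InvoiceID = 1045854;   -- InvoiceID column assumed from the index list

SELECT t.*
FROM vwChargeShareSubCharges t
WHERE t.ChargeID = @ChargeID   -- the view exposes MasterChargeID as ChargeID
  AND t.CustomerID = @CustomerID;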

    >> The database is linked [sic] to an established app and the column and table names can't be changed. <<
Link? That is a term from pointer chains and network databases, not SQL. I will guess that means the app dates back to the old pre-RDBMS days and you are screwed.
    >> I am not too worried about the money field [sic], this is used for money and money based calculations so the precision and rounding are acceptable at this level. <<
    Field is a COBOL concept; columns are totally different. MONEY is how Sybase mimics the PICTURE clause that puts currency signs, commas, period, etc in a COBOL money field. 
Using more than one operation (multiplication or division) on money columns will produce severe rounding errors. A simple way to visualize money arithmetic is to place a ROUND() function call after every operation. For example,
Amount = (Portion / total_amt) * gross_amt
can be rewritten using money arithmetic as:
Amount = ROUND(ROUND(Portion / total_amt, 4) * gross_amt, 4)
Rounding to four decimal places might not seem an issue, until the numbers you are using are greater than 10,000.
BEGIN
DECLARE @gross_amt MONEY,
        @total_amt MONEY,
        @my_part MONEY,
        @money_result MONEY,
        @float_result FLOAT,
        @all_floats FLOAT;
 SET @gross_amt = 55294.72;
 SET @total_amt = 7328.75;
 SET @my_part = 1793.33;
 SET @money_result = (@my_part / @total_amt) * @gross_amt;
 SET @float_result = (@my_part / @total_amt) * @gross_amt;
 SET @all_floats = (CAST(@my_part AS FLOAT) / CAST(@total_amt AS FLOAT)) * CAST(@gross_amt AS FLOAT);
 SELECT @money_result, @float_result, @all_floats;
END;
@money_result = 13525.09 -- incorrect
@float_result = 13525.0885 -- incorrect
@all_floats = 13530.5038673171 -- correct, with a -5.42 error
    >> The keys are ChargeID(int, identity) and ChargeShareID(int, identity). <<
    Sorry, but IDENTITY is not relational and cannot be a key by definition. But it sure works just like a record number in your old COBOL file system. 
    >> .. these need to be int so that they are assigned by the database and unique. <<
    No, the data type of a key is not determined by physical storage, but by logical design. IDENTITY is the number of a parking space in a garage; a VIN is how you identify the automobile. 
    >> What would you recommend I use as keys? <<
    I do not know. I have no specs and without that, I cannot pull a Kabbalah number from the hardware. Your magic numbers can identify Squids, Automobile or Lady Gaga! I would ask the accounting department how they identify a charge. 
>> Charge_Share_Status_ID links [sic] to another table which contains the name, formatting [sic] and other information [sic] of a charge share's status, so it is both an Id and a status. <<
More pointer chains! Formatting? Unh? In RDBMS, we use a tiered architecture. That means display formatting is in a presentation layer. A properly created table has cohesion: it describes one and only one kind of data element. A status is a state of being that applies to an entity over a period of time (think employment, marriage, etc. status if that is too abstract).
    An identifier is based on the Law of Identity from formal logic “To be is to be something in particular” or “A is A” informally. There is no entity here! The Charge_Share_Status table should have the encoded values for a status and perhaps a description if
    they are unclear. If the list of values is clear, short and static, then use a CHECK() constraint. 
    On a scale from 1 to 10, what color is your favorite letter of the alphabet? Yes, this is literally that silly and wrong. 
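For what the CHECK() constraint suggestion could look like on the table from the question (a sketch; the column name and status values are invented for illustration):

-- Sketch only: encode the status directly with a CHECK() constraint
-- instead of a pointer to a separate status table.
CREATE TABLE tblChargeShare
(ChargeShareID int NOT NULL PRIMARY KEY,
 ChargeID int NOT NULL,
 TotalAmount money NOT NULL,
 ChargeShareStatus varchar(10) NOT NULL
     CHECK (ChargeShareStatus IN ('Open', 'Billed', 'Paid')));  -- made-up values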
    >> I understand what a CTE is; is there a better way to sum all children for a parent hierarchy? <<
    There are many ways to represent a tree or hierarchy in SQL.  This is called an adjacency list model and it looks like this:
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     boss_emp_name CHAR(10) REFERENCES OrgChart(emp_name), 
     salary_amt DECIMAL(6,2) DEFAULT 100.00 NOT NULL,
     << horrible cycle constraints >>);
    OrgChart 
    emp_name  boss_emp_name  salary_amt 
    ==============================
    'Albert'    NULL    1000.00
    'Bert'    'Albert'   900.00
    'Chuck'   'Albert'   900.00
    'Donna'   'Chuck'    800.00
    'Eddie'   'Chuck'    700.00
    'Fred'    'Chuck'    600.00
    This approach will wind up with really ugly code -- CTEs hiding recursive procedures, horrible cycle prevention code, etc.  The root of your problem is not knowing that rows are not records, that SQL uses sets and trying to fake pointer chains with some
    vague, magical non-relational "id".  
    This matches the way we did it in old file systems with pointer chains.  Non-RDBMS programmers are comfortable with it because it looks familiar -- it looks like records and not rows.  
    Another way of representing trees is to show them as nested sets. 
    Since SQL is a set oriented language, this is a better model than the usual adjacency list approach you see in most text books. Let us define a simple OrgChart table like this.
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     lft INTEGER NOT NULL UNIQUE CHECK (lft > 0), 
     rgt INTEGER NOT NULL UNIQUE CHECK (rgt > 1),
      CONSTRAINT order_okay CHECK (lft < rgt));
    OrgChart 
    emp_name         lft rgt 
    ======================
    'Albert'      1   12 
    'Bert'        2    3 
    'Chuck'       4   11 
    'Donna'       5    6 
    'Eddie'       7    8 
    'Fred'        9   10 
    The (lft, rgt) pairs are like tags in a mark-up language, or parens in algebra, BEGIN-END blocks in Algol-family programming languages, etc. -- they bracket a sub-set.  This is a set-oriented approach to trees in a set-oriented language. 
    The organizational chart would look like this as a directed graph:
                Albert (1, 12)
        Bert (2, 3)    Chuck (4, 11)
                       /    |   \
                     /      |     \
                   /        |       \
                 /          |         \
            Donna (5, 6) Eddie (7, 8) Fred (9, 10)
    The adjacency list table is denormalized in several ways. We are modeling both the Personnel and the Organizational chart in one table. But for the sake of saving space, pretend that the names are job titles and that we have another table which describes the
    Personnel that hold those positions.
    Another problem with the adjacency list model is that the boss_emp_name and employee columns are the same kind of thing (i.e. identifiers of personnel), and therefore should be shown in only one column in a normalized table.  To prove that this is not
    normalized, assume that "Chuck" changes his name to "Charles"; you have to change his name in both columns and several places. The defining characteristic of a normalized table is that you have one fact, one place, one time.
The final problem is that the adjacency list model does not model subordination. Authority flows downhill in a hierarchy, but if I fire Chuck, I disconnect all of his subordinates from Albert. There are situations (i.e. water pipes) where this is true, but
    that is not the expected situation in this case.
    To show a tree as nested sets, replace the nodes with ovals, and then nest subordinate ovals inside each other. The root will be the largest oval and will contain every other node.  The leaf nodes will be the innermost ovals with nothing else inside them
    and the nesting will show the hierarchical relationship. The (lft, rgt) columns (I cannot use the reserved words LEFT and RIGHT in SQL) are what show the nesting. This is like XML, HTML or parentheses. 
    At this point, the boss_emp_name column is both redundant and denormalized, so it can be dropped. Also, note that the tree structure can be kept in one table and all the information about a node can be put in a second table and they can be joined on employee
    number for queries.
    To convert the graph into a nested sets model think of a little worm crawling along the tree. The worm starts at the top, the root, makes a complete trip around the tree. When he comes to a node, he puts a number in the cell on the side that he is visiting
    and increments his counter.  Each node will get two numbers, one of the right side and one for the left. Computer Science majors will recognize this as a modified preorder tree traversal algorithm. Finally, drop the unneeded OrgChart.boss_emp_name column
    which used to represent the edges of a graph.
    This has some predictable results that we can use for building queries.  The root is always (left = 1, right = 2 * (SELECT COUNT(*) FROM TreeTable)); leaf nodes always have (left + 1 = right); subtrees are defined by the BETWEEN predicate; etc. Here are
    two common queries which can be used to build others:
    1. An employee and all their Supervisors, no matter how deep the tree.
     SELECT O2.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O1.emp_name = :in_emp_name;
    2. The employee and all their subordinates. There is a nice symmetry here.
     SELECT O1.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O2.emp_name = :in_emp_name;
    3. Add a GROUP BY and aggregate functions to these basic queries and you have hierarchical reports. For example, the total salaries which each employee controls:
     SELECT O2.emp_name, SUM(S1.salary_amt)
       FROM OrgChart AS O1, OrgChart AS O2,
            Salaries AS S1
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND S1.emp_name = O2.emp_name 
       GROUP BY O2.emp_name;
    4. To find the level and the size of the subtree rooted at each emp_name, so you can print the tree as an indented listing. 
    SELECT O1.emp_name, 
       SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt 
       THEN O2.sale_amt ELSE 0.00 END) AS sale_amt_tot,
       SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt 
       THEN 1 ELSE 0 END) AS subtree_size,
       SUM(CASE WHEN O1.lft BETWEEN O2.lft AND O2.rgt
       THEN 1 ELSE 0 END) AS lvl
      FROM OrgChart AS O1, OrgChart AS O2
     GROUP BY O1.emp_name;
5. The nested set model has an implied ordering of siblings which the adjacency list model does not. To insert a new node, G1, under part G, we can insert one node at a time like this:
    BEGIN ATOMIC
    DECLARE rightmost_spread INTEGER;
    SET rightmost_spread 
        = (SELECT rgt 
             FROM Frammis 
            WHERE part = 'G');
    UPDATE Frammis
       SET lft = CASE WHEN lft > rightmost_spread
                      THEN lft + 2
                      ELSE lft END,
           rgt = CASE WHEN rgt >= rightmost_spread
                      THEN rgt + 2
                      ELSE rgt END
     WHERE rgt >= rightmost_spread;
     INSERT INTO Frammis (part, lft, rgt)
     VALUES ('G1', rightmost_spread, (rightmost_spread + 1));
     COMMIT WORK;
    END;
    The idea is to spread the (lft, rgt) numbers after the youngest child of the parent, G in this case, over by two to make room for the new addition, G1.  This procedure will add the new node to the rightmost child position, which helps to preserve the idea
    of an age order among the siblings.
    6. To convert a nested sets model into an adjacency list model:
    SELECT B.emp_name AS boss_emp_name, E.emp_name
      FROM OrgChart AS E
           LEFT OUTER JOIN
           OrgChart AS B
           ON B.lft
              = (SELECT MAX(lft)
                   FROM OrgChart AS S
                  WHERE E.lft > S.lft
                    AND E.lft < S.rgt);
    7. To find the immediate parent of a node: 
    SELECT MAX(P2.lft), MIN(P2.rgt)
      FROM Personnel AS P1, Personnel AS P2
     WHERE P1.lft BETWEEN P2.lft AND P2.rgt 
       AND P1.emp_name = @my_emp_name;
    I have a book on TREES & HIERARCHIES IN SQL which you can get at Amazon.com right now. It has a lot of other programming idioms for nested sets, like levels, structural comparisons, re-arrangement procedures, etc. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Slow query execution time

    Hi,
    I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
    The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
    The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate = date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb

I have an existing clustered index on the SomeDate and SomeString fields.
To check, I replaced the where clause with
WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
No improvements.
    What could be the problem?
    Thank you,
    Lobo

It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time.
One way to speed up execution time inside the RDBMS is to streamline the internal operations of the interpreter. When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement, and these things take time. First, it checks that there are no syntax errors in the SQL statement. Second, it checks that all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time of the three, but they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT/INSERT/UPDATE/DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
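As a concrete illustration of that suggestion (a sketch in T-SQL-style syntax reusing the names from the question; the procedure name is made up, and the syntax will need adjusting for your RDBMS):

-- Sketch only: wrap the report query in a stored procedure so its plan can be reused.
CREATE PROCEDURE GetDailySummary @SomeDate date
AS
BEGIN
    SELECT aaa, bbb, SUM(ccc) AS sum_ccc, SUM(ddd) AS sum_ddd   -- add the remaining aggregates as needed
    FROM MyTable
    WHERE SomeDate = @SomeDate
      AND SomeString IN ('aaa', 'bbb')
    GROUP BY aaa, bbb;
END;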

  • Find a slow query

    Hi all,
    I have two questions about the SQL tuning:
    There are many open sessions for an Oracle database,
    (1) How to find the session that runs a slow query?
    (2) How to locate / find this slow query so that the query can be tuned?
    Thanks a lot.

    Hi,
(1) How to find the session that runs a slow query?
This can be seen from the wait events that the sessions are waiting on, so check V$SESSION_WAIT to see which sessions are waiting for something to happen and have become slow for that reason. You can also take advantage of ASH in 10g to tell you the same thing if you want to drill your search down to the last few minutes.
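A quick way to do that check (a sketch; on 10g the WAIT_CLASS column lets you skip the idle waits):

-- Sketch: sessions currently waiting on non-idle events.
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  wait_class <> 'Idle'
ORDER BY seconds_in_wait DESC;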
(2) How to locate / find this slow query so that the query can be tuned?
If you read Optimizing Oracle Performance, this is the first thing that Cary asks you to address, and to take extreme caution in doing so. Ask the user which business process is slow; it will vary depending upon the business. There is no such thing as "we are slow" and no such thing as "tune it all". We have to tune the main area, the one that gives the maximum benefit. Ask the user which query/report he is running that he wants optimized. You can also take advantage of a Statspack/AWR report to find the particular query based on the wait event. If you know the query, then trace it to see what is happening. I would suggest a 10046 trace for the query, as it is wider and imparts much more info than tkprof output alone, and you can pick out what you want from it.
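For the 10046 trace mentioned above, the usual session-level commands look like this (level 12 captures wait events and bind values):

ALTER SESSION SET tracefile_identifier = 'slow_query';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the slow query here, then:
ALTER SESSION SET EVENTS '10046 trace name context off';
-- format the resulting trace file with tkprof if a summary is wanted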
    HTH
    Aman....

  • Slow Query Using index. Fast with full table Scan.

    Hi;
    (Thanks for the links)
    Here's my question correctly formated.
    The query:
    SELECT count(1)
    from ehgeoconstru  ec
    where ec.TYPE='BAR' 
    AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') )  
    and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 32 seconds!
    Same query, but with one extra where clause:
    SELECT count(1)
    from ehgeoconstru  ec
    where ec.TYPE='BAR' 
    and  ( (ec.contextVersion = 'REALWORLD')     --- ADDED HERE
    AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) ) 
    and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 400 seconds.
    It should return data from one table, given the conditions.
    The version of the database is Oracle9i Release 9.2.0.7.0
    These are the parameters relevant to the optimizer:
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     1
    optimizer_features_enable            string      9.2.0
    optimizer_index_caching              integer     99
    optimizer_index_cost_adj             integer     10
    optimizer_max_permutations           integer     2000
    optimizer_mode                       string      CHOOSE
SQL>

Here is the output of EXPLAIN PLAN for the first fast query:

| Id  | Operation            | Name    | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT     |         |       |       |       |
|   1 |  SORT AGGREGATE      |         |       |       |       |
|*  2 |   TABLE ACCESS FULL  | EHCONS  |       |       |       |

Predicate Information (identified by operation id):

   2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss') AND "EC"."TYPE"='BAR')

Note: rule based optimization

Here is the output of EXPLAIN PLAN for the slow query:
| Id  | Operation                     | Name              | Rows  | Bytes | Cost  |
|   1 |  SORT AGGREGATE               |                   |       |       |       |
|*  2 |   TABLE ACCESS BY INDEX ROWID | ehgeoconstru      |       |       |       |
|*  3 |    INDEX RANGE SCAN           | ehgeoconstru_VSN  |       |       |       |

Predicate Information (identified by operation id):

   2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL AND "EC"."TYPE"='BAR')
   3 - access("EC"."CONTEXTVERSION"='REALWORLD' AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
       filter("EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))

Note: rule based optimization

The TKPROF output for this slow statement is:
    TKPROF: Release 9.2.0.7.0 - Production on Tue Nov 17 14:46:32 2009
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Trace file: gen_ora_3120.trc
    Sort options: prsela  exeela  fchela 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SELECT count(1)
    from ehgeoconstru  ec
    where ec.TYPE='BAR'
    and  ( (ec.contextVersion = 'REALWORLD')
    AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
    and deathdate is null
    and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.00     538.12     162221    1355323          0           1
    total        4      0.00     538.12     162221    1355323          0           1
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 153 
    Rows     Row Source Operation
          1  SORT AGGREGATE
      27747   TABLE ACCESS BY INDEX ROWID OBJ#(73959)
    2134955    INDEX RANGE SCAN OBJ#(73962) (object id 73962)
    alter session set sql_trace=true
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.02          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      0.00       0.02          0          0          0           0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer goal: CHOOSE
    Parsing user id: 153 
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.02          0          0          0           0
    Fetch        2      0.00     538.12     162221    1355323          0           1
    total        5      0.00     538.15     162221    1355323          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: gen_ora_3120.trc
    Trace file compatibility: 9.02.00
    Sort options: prsela  exeela  fchela 
           2  sessions in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
      94  lines in trace file.

Edited by: PauloSMO on 17/Nov/2009 4:21
Edited by: PauloSMO on 17/Nov/2009 7:07
Edited by: PauloSMO on 17/Nov/2009 7:38 - Changed title to be more correct.

    Although your optimizer_mode is choose, it appears that there are no statistics gathered on ehgeoconstru. The lack of cost estimate and estimated row counts from each step of the plan, and the "Note: rule based optimization" at the end of both plans would tend to confirm this.
    Optimizer_mode choose means that if statistics are gathered then it will use the CBO, but if no statistics are present in any of the tables in the query, then the Rule Based Optimizer will be used. The RBO tends to be index happy at the best of times. I'm guessing that the index ehgeoconstru_VSN has contextversion as the leading column and also includes birthdate.
    You can either gather statistics on the table (if all of the other tables have statistics) using dbms_stats.gather_table_stats, or hint the query to use a full scan instead of the index. Another alternative would be to apply a function or operation against the contextversion to preclude the use of the index. something like this:
SELECT COUNT(*)
FROM ehgeoconstru ec
WHERE ec.type = 'BAR' and
      ec.contextVersion||'' = 'REALWORLD' and
      ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') and
      deathdate is null and
      SUBSTR(ec.strgfd, 1, LENGTH('[CIMText')) <> '[CIMText'

or perhaps UPPER(ec.contextVersion) if that would not change the rows returned.
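The statistics option mentioned first would look roughly like this (a sketch; the owner and options are assumed):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'EHGEOCONSTRU',
                                cascade => TRUE);  -- also gathers statistics on the indexes
END;
/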
    John

  • Stumbled on a slow query. Can anyone look into it and say why it is so slow?

I just stumbled on this slow query. Can anyone please guess why this query is so slow? Do I need to change anything in it?
    Pid=32521 Tid=2884070320 03/26/2011 07:54:19.176 - Cursor wm09_2_49107 took 27996 ms elapsed time and 27995 ms db time for 1 fetches. sql string:
    SELECT ALLOC_INVN_DTL.ALLOC_INVN_DTL_ID, ALLOC_INVN_DTL.WHSE, ALLOC_INVN_DTL.SKU_ID, ALLOC_INVN_DTL.INVN_TYPE, ALLOC_INVN_DTL.PROD_STAT, ALLOC_INVN_DTL.BATCH_NBR, ALLOC_INVN_DTL.SKU_ATTR_1, ALLOC_INVN_DTL.SKU_ATTR_2, ALLOC_INVN_DTL.SKU_ATTR_3, ALLOC_INVN_DTL.SKU_ATTR_4, ALLOC_INVN_DTL.SKU_ATTR_5, ALLOC_INVN_DTL.CNTRY_OF_ORGN, ALLOC_INVN_DTL.ALLOC_INVN_CODE, ALLOC_INVN_DTL.CNTR_NBR, ALLOC_INVN_DTL.TRANS_INVN_TYPE, ALLOC_INVN_DTL.PULL_LOCN_ID, ALLOC_INVN_DTL.INVN_NEED_TYPE, ALLOC_INVN_DTL.TASK_TYPE, ALLOC_INVN_DTL.TASK_PRTY, ALLOC_INVN_DTL.TASK_BATCH, ALLOC_INVN_DTL.ALLOC_UOM, ALLOC_INVN_DTL.ALLOC_UOM_QTY, ALLOC_INVN_DTL.QTY_PULLD, ALLOC_INVN_DTL.FULL_CNTR_ALLOCD, ALLOC_INVN_DTL.ORIG_REQMT, ALLOC_INVN_DTL.QTY_ALLOC, ALLOC_INVN_DTL.DEST_LOCN_ID, ALLOC_INVN_DTL.TASK_GENRTN_REF_CODE, ALLOC_INVN_DTL.TASK_GENRTN_REF_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_CODE, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR, ALLOC_INVN_DTL.ERLST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_CMPL_DATE_TIME, ALLOC_INVN_DTL.NEED_ID, ALLOC_INVN_DTL.STAT_CODE, ALLOC_INVN_DTL.CREATE_DATE_TIME, ALLOC_INVN_DTL.MOD_DATE_TIME, ALLOC_INVN_DTL.USER_ID, ALLOC_INVN_DTL.PKT_CTRL_NBR, ALLOC_INVN_DTL.REQD_INVN_TYPE, ALLOC_INVN_DTL.REQD_PROD_STAT, ALLOC_INVN_DTL.REQD_BATCH_NBR, ALLOC_INVN_DTL.REQD_SKU_ATTR_1, ALLOC_INVN_DTL.REQD_SKU_ATTR_2, ALLOC_INVN_DTL.REQD_SKU_ATTR_3, ALLOC_INVN_DTL.REQD_SKU_ATTR_4, ALLOC_INVN_DTL.REQD_SKU_ATTR_5, ALLOC_INVN_DTL.REQD_CNTRY_OF_ORGN, ALLOC_INVN_DTL.PKT_SEQ_NBR, ALLOC_INVN_DTL.CARTON_NBR, ALLOC_INVN_DTL.CARTON_SEQ_NBR, ALLOC_INVN_DTL.PIKR_NBR, ALLOC_INVN_DTL.PULL_LOCN_SEQ_NBR, ALLOC_INVN_DTL.DEST_LOCN_SEQ_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR_SEQ, ALLOC_INVN_DTL.SUBSTITUTION_FLAG, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_1, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_2, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_3, ALLOC_INVN_DTL.CD_MASTER_ID FROM ALLOC_INVN_DTL WHERE ( ( ( ( ( ( ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1 ) AND ( ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2 ) ) AND ( ALLOC_INVN_DTL.SKU_ID = :3 ) ) AND ( ALLOC_INVN_DTL.CNTR_NBR = :4 ) ) AND ( ALLOC_INVN_DTL.STAT_CODE < 1 ) ) AND ( ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL ) )
    input variables
    1: Address(0xabe74300) Length(0) Type(8) "2" - No Indicator
    2: Address(0x8995474) Length(0) Type(8) "PERP014119" - No Indicator
    3: Address(0xab331f1c) Length(0) Type(8) "MB57545217" - No Indicator
    4: Address(0xab31e32c) Length(0) Type(8) "T0000000000000078257" - No Indicator

    784786 wrote:
    I just stumbled on this slow query. Can anyone please guess why this query is so slow? Do I need to change anything in it?
    Pid=32521 Tid=2884070320 03/26/2011 07:54:19.176 - Cursor wm09_2_49107 took 27996 ms elapsed time and 27995 ms db time for 1 fetches. sql string:
    SELECT ALLOC_INVN_DTL.ALLOC_INVN_DTL_ID, ALLOC_INVN_DTL.WHSE, ALLOC_INVN_DTL.SKU_ID, ALLOC_INVN_DTL.INVN_TYPE, ALLOC_INVN_DTL.PROD_STAT, ALLOC_INVN_DTL.BATCH_NBR, ALLOC_INVN_DTL.SKU_ATTR_1, ALLOC_INVN_DTL.SKU_ATTR_2, ALLOC_INVN_DTL.SKU_ATTR_3, ALLOC_INVN_DTL.SKU_ATTR_4, ALLOC_INVN_DTL.SKU_ATTR_5, ALLOC_INVN_DTL.CNTRY_OF_ORGN, ALLOC_INVN_DTL.ALLOC_INVN_CODE, ALLOC_INVN_DTL.CNTR_NBR, ALLOC_INVN_DTL.TRANS_INVN_TYPE, ALLOC_INVN_DTL.PULL_LOCN_ID, ALLOC_INVN_DTL.INVN_NEED_TYPE, ALLOC_INVN_DTL.TASK_TYPE, ALLOC_INVN_DTL.TASK_PRTY, ALLOC_INVN_DTL.TASK_BATCH, ALLOC_INVN_DTL.ALLOC_UOM, ALLOC_INVN_DTL.ALLOC_UOM_QTY, ALLOC_INVN_DTL.QTY_PULLD, ALLOC_INVN_DTL.FULL_CNTR_ALLOCD, ALLOC_INVN_DTL.ORIG_REQMT, ALLOC_INVN_DTL.QTY_ALLOC, ALLOC_INVN_DTL.DEST_LOCN_ID, ALLOC_INVN_DTL.TASK_GENRTN_REF_CODE, ALLOC_INVN_DTL.TASK_GENRTN_REF_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_CODE, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR, ALLOC_INVN_DTL.ERLST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_CMPL_DATE_TIME, ALLOC_INVN_DTL.NEED_ID, ALLOC_INVN_DTL.STAT_CODE, ALLOC_INVN_DTL.CREATE_DATE_TIME, ALLOC_INVN_DTL.MOD_DATE_TIME, ALLOC_INVN_DTL.USER_ID, ALLOC_INVN_DTL.PKT_CTRL_NBR, ALLOC_INVN_DTL.REQD_INVN_TYPE, ALLOC_INVN_DTL.REQD_PROD_STAT, ALLOC_INVN_DTL.REQD_BATCH_NBR, ALLOC_INVN_DTL.REQD_SKU_ATTR_1, ALLOC_INVN_DTL.REQD_SKU_ATTR_2, ALLOC_INVN_DTL.REQD_SKU_ATTR_3, ALLOC_INVN_DTL.REQD_SKU_ATTR_4, ALLOC_INVN_DTL.REQD_SKU_ATTR_5, ALLOC_INVN_DTL.REQD_CNTRY_OF_ORGN, ALLOC_INVN_DTL.PKT_SEQ_NBR, ALLOC_INVN_DTL.CARTON_NBR, ALLOC_INVN_DTL.CARTON_SEQ_NBR, ALLOC_INVN_DTL.PIKR_NBR, ALLOC_INVN_DTL.PULL_LOCN_SEQ_NBR, ALLOC_INVN_DTL.DEST_LOCN_SEQ_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR_SEQ, ALLOC_INVN_DTL.SUBSTITUTION_FLAG, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_1, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_2, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_3, ALLOC_INVN_DTL.CD_MASTER_ID FROM ALLOC_INVN_DTL WHERE ( ( ( ( ( ( ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1 ) AND ( ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2 ) ) AND ( ALLOC_INVN_DTL.SKU_ID = :3 ) ) AND ( ALLOC_INVN_DTL.CNTR_NBR = :4 ) ) AND ( ALLOC_INVN_DTL.STAT_CODE < 1 ) ) AND ( ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL ) )
    input variables
    1: Address(0xabe74300) Length(0) Type(8) "2" - No Indicator
    2: Address(0x8995474) Length(0) Type(8) "PERP014119" - No Indicator
    3: Address(0xab331f1c) Length(0) Type(8) "MB57545217" - No Indicator
    4: Address(0xab31e32c) Length(0) Type(8) "T0000000000000078257" - No Indicator
    Without more information I cannot tell you why it is slow, but I can certainly tell you why it is impossible to read.
    Just because SQL allows unformatted query text does not mean it is a good idea. Why not bring some sanity to this for your own sake, not to mention that of the people you are expecting to actually read and analyze this mess? I wish someone could explain to me why people write these long stream-of-consciousness queries.
    When posting to this forum you should use the code tags to bracket your code and preserve the formatting. Then the code should actually be formatted:
    SELECT
         ALLOC_INVN_DTL.ALLOC_INVN_DTL_ID,
         ALLOC_INVN_DTL.WHSE,
         ALLOC_INVN_DTL.SKU_ID,
         ALLOC_INVN_DTL.INVN_TYPE,
         ALLOC_INVN_DTL.PROD_STAT,
         ALLOC_INVN_DTL.BATCH_NBR,
         ALLOC_INVN_DTL.SKU_ATTR_1,
         ALLOC_INVN_DTL.SKU_ATTR_2,
         ALLOC_INVN_DTL.SKU_ATTR_3,
         ALLOC_INVN_DTL.SKU_ATTR_4,
         ALLOC_INVN_DTL.SKU_ATTR_5,
         ALLOC_INVN_DTL.CNTRY_OF_ORGN,
         ALLOC_INVN_DTL.ALLOC_INVN_CODE,
         ALLOC_INVN_DTL.CNTR_NBR,
         ALLOC_INVN_DTL.TRANS_INVN_TYPE,
         ALLOC_INVN_DTL.PULL_LOCN_ID,
         ALLOC_INVN_DTL.INVN_NEED_TYPE,
         ALLOC_INVN_DTL.TASK_TYPE,
         ALLOC_INVN_DTL.TASK_PRTY,
         ALLOC_INVN_DTL.TASK_BATCH,
         ALLOC_INVN_DTL.ALLOC_UOM,
         ALLOC_INVN_DTL.ALLOC_UOM_QTY,
         ALLOC_INVN_DTL.QTY_PULLD,
         ALLOC_INVN_DTL.FULL_CNTR_ALLOCD,
         ALLOC_INVN_DTL.ORIG_REQMT,
         ALLOC_INVN_DTL.QTY_ALLOC,
         ALLOC_INVN_DTL.DEST_LOCN_ID,
         ALLOC_INVN_DTL.TASK_GENRTN_REF_CODE,
         ALLOC_INVN_DTL.TASK_GENRTN_REF_NBR,
         ALLOC_INVN_DTL.TASK_CMPL_REF_CODE,
         ALLOC_INVN_DTL.TASK_CMPL_REF_NBR,
         ALLOC_INVN_DTL.ERLST_START_DATE_TIME,
         ALLOC_INVN_DTL.LTST_START_DATE_TIME,
         ALLOC_INVN_DTL.LTST_CMPL_DATE_TIME,
         ALLOC_INVN_DTL.NEED_ID,
         ALLOC_INVN_DTL.STAT_CODE,
         ALLOC_INVN_DTL.CREATE_DATE_TIME,
         ALLOC_INVN_DTL.MOD_DATE_TIME,
         ALLOC_INVN_DTL.USER_ID,
         ALLOC_INVN_DTL.PKT_CTRL_NBR,      
         ALLOC_INVN_DTL.REQD_INVN_TYPE,      
         ALLOC_INVN_DTL.REQD_PROD_STAT,
         ALLOC_INVN_DTL.REQD_BATCH_NBR,
         ALLOC_INVN_DTL.REQD_SKU_ATTR_1,
         ALLOC_INVN_DTL.REQD_SKU_ATTR_2,
         ALLOC_INVN_DTL.REQD_SKU_ATTR_3,
         ALLOC_INVN_DTL.REQD_SKU_ATTR_4,
         ALLOC_INVN_DTL.REQD_SKU_ATTR_5,
         ALLOC_INVN_DTL.REQD_CNTRY_OF_ORGN,
         ALLOC_INVN_DTL.PKT_SEQ_NBR,
         ALLOC_INVN_DTL.CARTON_NBR,
         ALLOC_INVN_DTL.CARTON_SEQ_NBR,
         ALLOC_INVN_DTL.PIKR_NBR,
         ALLOC_INVN_DTL.PULL_LOCN_SEQ_NBR,
         ALLOC_INVN_DTL.DEST_LOCN_SEQ_NBR,
         ALLOC_INVN_DTL.TASK_CMPL_REF_NBR_SEQ,
         ALLOC_INVN_DTL.SUBSTITUTION_FLAG,
         ALLOC_INVN_DTL.MISC_ALPHA_FIELD_1,
         ALLOC_INVN_DTL.MISC_ALPHA_FIELD_2,
         ALLOC_INVN_DTL.MISC_ALPHA_FIELD_3,
         ALLOC_INVN_DTL.CD_MASTER_ID
    FROM ALLOC_INVN_DTL
    WHERE (
            ( ( ( ( ( ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1 )
                    AND ( ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2 ) )
                  AND ( ALLOC_INVN_DTL.SKU_ID = :3 ) )
                AND ( ALLOC_INVN_DTL.CNTR_NBR = :4 ) )
              AND ( ALLOC_INVN_DTL.STAT_CODE < 1 ) )
            AND ( ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL )
          )
    Now, since you are only selecting from one table, there is no need to clutter up the query by qualifying every column name with the table name. Let's simplify with this:
    SELECT
         ALLOC_INVN_DTL_ID,
         WHSE,
         SKU_ID,
         INVN_TYPE,
         PROD_STAT,
         BATCH_NBR,
         SKU_ATTR_1,
         SKU_ATTR_2,
         SKU_ATTR_3,
         SKU_ATTR_4,
         SKU_ATTR_5,
         CNTRY_OF_ORGN,
         ALLOC_INVN_CODE,
         CNTR_NBR,
         TRANS_INVN_TYPE,
         PULL_LOCN_ID,
         INVN_NEED_TYPE,
         TASK_TYPE,
         TASK_PRTY,
         TASK_BATCH,
         ALLOC_UOM,
         ALLOC_UOM_QTY,
         QTY_PULLD,
         FULL_CNTR_ALLOCD,
         ORIG_REQMT,
         QTY_ALLOC,
         DEST_LOCN_ID,
         TASK_GENRTN_REF_CODE,
         TASK_GENRTN_REF_NBR,
         TASK_CMPL_REF_CODE,
         TASK_CMPL_REF_NBR,
         ERLST_START_DATE_TIME,
         LTST_START_DATE_TIME,
         LTST_CMPL_DATE_TIME,
         NEED_ID,
         STAT_CODE,
         CREATE_DATE_TIME,
         MOD_DATE_TIME,
         USER_ID,
         PKT_CTRL_NBR,      
         REQD_INVN_TYPE,      
         REQD_PROD_STAT,
         REQD_BATCH_NBR,
         REQD_SKU_ATTR_1,
         REQD_SKU_ATTR_2,
         REQD_SKU_ATTR_3,
         REQD_SKU_ATTR_4,
         REQD_SKU_ATTR_5,
         REQD_CNTRY_OF_ORGN,
         PKT_SEQ_NBR,
         CARTON_NBR,
         CARTON_SEQ_NBR,
         PIKR_NBR,
         PULL_LOCN_SEQ_NBR,
         DEST_LOCN_SEQ_NBR,
         TASK_CMPL_REF_NBR_SEQ,
         SUBSTITUTION_FLAG,
         MISC_ALPHA_FIELD_1,
         MISC_ALPHA_FIELD_2,
         MISC_ALPHA_FIELD_3,
         CD_MASTER_ID
    FROM ALLOC_INVN_DTL
    WHERE (
            ( ( ( ( ( TASK_CMPL_REF_CODE = :1 )
                    AND ( TASK_CMPL_REF_NBR = :2 ) )
                  AND ( SKU_ID = :3 ) )
                AND ( CNTR_NBR = :4 ) )
              AND ( STAT_CODE < 1 ) )
            AND ( PULL_LOCN_ID IS NULL )
          )
    And finally, your WHERE clause is a simple string of AND conditions, so there was no need to complicate it with all of the nested parentheses. Much simpler:
    WHERE TASK_CMPL_REF_CODE = :1
       AND  TASK_CMPL_REF_NBR = :2
       AND  SKU_ID = :3
       AND  CNTR_NBR = :4
       AND  STAT_CODE < 1
       AND  PULL_LOCN_ID IS NULL
    None of the above makes a whit of difference in your query performance, but if you worked in my office, I would make you clean it up before I even attempted to do a performance analysis.

  • Slow query against seg$ - Oracle 10g

    Hi,
    Our AWR report shows the following slow query, 3 minutes per execution,
    select file#, block# from seg$ where type# = 3 and ts# = :1
    This query isn't from our application for sure. Does anyone know what background jobs or processes may execute this query?
    Thanks.

    user632535 wrote:
    Hi,
    Our AWR report shows the following slow query, 3 minutes per execution,
    select file#, block# from seg$ where type# = 3 and ts# = :1
    This query isn't from our application for sure. Does anyone know what background jobs or processes may execute this query?
    It looks like the type of thing the SMON would run to clear up temporary segments after a process has done a rebuild, move, drop or similar. One reason why it might be slow is if you have a very large number of objects in a given tablespace that is subject to a lot of drops, creates etc. (E.g. a tablespace holding a complicated composite partitioned object with lots of indexes that goes through a frequent cycle of add/drop partition).
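    One quick way to check whether that is plausible (a sketch, assuming you can query the SYS-owned dictionary tables) is to count the segments per tablespace and see whether the tablespace passed in as :1 is unusually crowded:
    select ts.name, count(*) segment_count
    from   sys.seg$ s, sys.ts$ ts
    where  s.ts# = ts.ts#
    group  by ts.name
    order  by count(*) desc;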
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The temptation to form premature theories upon insufficient data is the bane of our profession."
    Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear".

  • Very Slow Query due to Bitmap Conversion

    I have a strange problem with the performance of a spatial query. If I perform a 'SELECT non_geom_column FROM my_table WHERE complicated_join_query' the result comes back sub-second. However, when I replace the column selected with geometry and perform 'SELECT geom_column FROM my_table WHERE same_complicated_join_query' the response takes over a minute.
    The issue is that in the second case, despite the identical where clause, the explain plan is significantly different. In the 'select geom_column' query there is a BITMAP CONVERSION (TO ROWIDS) which accounts for all of the extra time, whereas in the 'select other_column' query that conversion is replaced with TABLE ACCESS (BY INDEX ROWID), which is near instant.
    I have tried putting in some hints, although I do not have much experience with hints, and have also tried nesting the query in various sub-selects. Whatever I try, I cannot persuade the explain plan to drop the bitmap conversion when I select the geometry column. The full query and an explanation of that query are below. I have run out of things to try, so any help or suggestions at all would be much appreciated.
    Regards,
    Chris
    Explanation and query
    My application allows users to select geometries from a map image through clicking, dragging a box and various other means. The image is then refreshed - highlighting geometries based on the query with which I am having trouble. The user is then able to deselect any of those highlighted geometries, or append others with additional clicks or dragged selections.
    If there are 2 (or any even number of) clicks within the same geometry then that geometry is deselected. Alternatively the geometry could have been selected through an intersection with a dragged box, and then clicked in to deselect - again an even number of selections. Any odd number of selections (i.e. selecting, deselecting, then selecting again) would result in the geometry being selected.
    The application cannot know if the multiple user clicks are in the same geometry, as it simply has an image to work with, so all it does is pass all the clicks so far to the database to deal with.
    My query therefore does each spatial point or rectangle query in turn and then appends the unique keys of the rows each one returned to a list. After performing all of the queries it groups the list by the key, and the groups with an odd total are 'selected'. To do this logic in a single where clause I have ended up with nested select statements that are joined with union all commands.
    The query is therefore:
    SELECT
    --the below column (geometry) makes it very slow...replacing it with any non-spatial column takes less than 1/100 of the time - that is my problem!
    geometry
    FROM
    my_table
    WHERE
    primary_key IN
    (
      SELECT primary_key FROM
      (
        SELECT primary_key FROM my_table WHERE
        sdo_relate(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( rectangle co-ords )), 'mask=anyinteract') = 'TRUE'
        UNION ALL SELECT primary_key FROM my_table WHERE
        sdo_relate(geometry, mdsys.sdo_geometry(2001, 81989, sdo_point_type( point co-ords , NULL), NULL, NULL), 'mask=anyinteract') = 'TRUE'
        --potentially more 'union all select...' here
      )
      GROUP BY primary_key HAVING mod(count(*),2) = 1
    )
    AND
    --the below is the bounding rectangle of the whole image to be returned
    sdo_filter(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( outer rectangle co-ords )), 'mask=anyinteract') = 'TRUE'

    Hi
    Thanks for the reply. After a lot more googling, it turns out this is a general Oracle problem and is not solely related to use of the GEOMETRY column. It seems that sometimes the Oracle optimiser makes an arbitrary decision to do bitmap conversion. No amount of hints will get it to change its mind!
    One person reported a similarly negative change after table statistic collection had run.
    Why changing the columns being retrieved should change the execution path, I do not know.
    We have a numeric primary key which is always set to a positive value. When I added "AND primary_key_column > 0" (a pretty pointless clause) the optimiser changed the way it works and we got it working fast again.
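    For reference, the shape of the workaround (a sketch reusing the placeholder co-ordinates from the first post; the only new part is the logically redundant predicate on the primary key):
    SELECT geometry
    FROM my_table
    WHERE primary_key > 0   --always true, but it changes the optimiser's choice of plan
    AND primary_key IN
    (
      SELECT primary_key FROM
      (
        SELECT primary_key FROM my_table WHERE
        sdo_relate(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( rectangle co-ords )), 'mask=anyinteract') = 'TRUE'
      )
      GROUP BY primary_key HAVING mod(count(*),2) = 1
    )
    AND sdo_filter(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( outer rectangle co-ords )), 'mask=anyinteract') = 'TRUE'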
    Chris

  • Why Isn't xmlindex being used in slow query on binary xml table eval?

    I am running a slow simple query on Oracle database server 11.2.0.1 that is not using an xmlindex. Instead, a full table scan against the eval binary xml table occurs. Here is the query:
    select -- /*+ NO_XMLINDEX_REWRITE no_parallel(eval)*/
          defid from eval,
          XMLTable(XMLNAMESPACES(DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
          'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7"),
          '$doc/eval/derivedFacts/ns7:derivedFact' passing eval.object_value as "doc" columns defid varchar2(100) path 'ns7:defId'
           ) eval_xml
    where eval_xml.defid in ('59543','55208');
    The predicate is not selective at all - the returned row count is the same as the table row count (325,550 xml documents in the eval table). When different values are used, bringing the row count down to ~ 33%, the xmlindex still isn't used - as would be expected in a purely relational non-XML environment.
    My question is why wouldn't the xmlindex be used in a fast full scan manner versus a full table scan traversing the xml in each eval table document record?
    Would a FFS hint be applicable to an xmlindex domain-type index?
    Here is the xmlindex definition:
    CREATE INDEX "EVAL_XMLINDEX_IX" ON "EVAL" (OBJECT_VALUE)
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
      ('XMLTable eval_idx_tab XMLNamespaces(DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03'',
      ''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7"),''/eval''
         COLUMNS defId VARCHAR2(100) path ''/derivedFacts/ns7:derivedFact/ns7:defId''');
    Here is the eval table definition:
    CREATE
      TABLE "N98991"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ; Sample cleansed xml snippet:
    <?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?><eval createdById="xxxx" hhhhMemberId="37e6f05a-88dc-41e9-a8df-2a2ac6d822c9" category="eeeeeeee" eval_dt="2012-02-11T23:47:02.645Z" evalId="12e007f5-b7c3-4da2-b8b8-4bf066675d1a" xmlns="http://www.xxxxx.com/vvvv/domains/eval/2010/03" xmlns:ns2="http://www.cigna.com/nnnn/domains/derived/fact/2010/03" xmlns:ns3="http://www.xxxxx.com/vvvv/domains/common/2010/03">
       <derivedFacts>
          <ns2:derivedFact>
             <ns2:defId>12345</ns2:defId>
             <ns2:defUrn>urn:mmmmrunner:Medical:Definition:DerivedFact:52657:1</ns2:defUrn>
             <ns2:factSource>tttt Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>boolean</ns2:type>
                <ns2:value>true</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>52600</ns2:defId>
             <ns2:defUrn>urn:ddddrunner:Medical:Definition:DerivedFact:52600:2</ns2:defUrn>
             <ns2:factSource>cccc Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>null</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>59543</ns2:defId>
             <ns2:defUrn>urn:ddddunner:Medical:Definition:DerivedFact:52599:1</ns2:defUrn>
             <ns2:factSource>dddd Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>INT</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
              With the repeating <ns2:derivedFact> element continuing under the <derivedFacts> element.
    The Oracle XML DB Developer's Guide 11g Release 2 isn't helping much...
    Any assistance much appreciated.
    Regards,
    Rick Blanchard

    odie 63, et al.:
    Attached is the reworked select query, xmlindex, and secondary indexes. Note: though namespaces are used, we're not registering any schema definitions.
    SELECT /*+ NO_USE_HASH(eval) */ --/*+ NO_QUERY_REWRITE no_parallel(eval)*/
    eval_xml.eval_catt, df.defid FROM eval,
    --df.defid FROM eval,
    XMLTable(XMLNamespaces( DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
                            'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7" ),
            '/eval' passing eval.object_value
             COLUMNS
               eval_catt VARCHAR2(50) path '@category',
               derivedFact XMLTYPE path '/derivedFacts/ns7:derivedFact')eval_xml,
    XMLTable(XMLNamespaces('http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7",
                              DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03'),
            '/ns7:derivedFact' passing eval_xml.derivedFact
             COLUMNS
               defid VARCHAR2(100) path 'ns7:defId') df
    WHERE df.defid IN ('52657','52599') AND eval_xml.eval_catt LIKE 'external';
    --where df.defid = '52657';
    create index defid_2ndary_ix on eval_idx_tab_II (defID);
         eval_catt VARCHAR2(50) path ''@CATEGORY''');
    create index eval_catt_2ndary_ix on eval_idx_tab_I (eval_catt);
    The xmlindex is getting picked up, but there are a couple of problems:
    1. In the development environment, no xml source records for defid '52657' or '52599' are being displayed - just an empty output set occurs, in spite of these values being present and stored in the source xml.
    This really has me stumped, as I can query the eval table and see that the actual xml defid values '52657' and '52599' exist. Something appears off with the query, which didn't return records even before the corresponding xml index was created.
    2. The query still performs slowly, in spite of using the xmlindex. The execution plan shows a full table scan of eval occurring whether a HASH JOIN or a MERGE JOIN is used (the MERGE JOIN gets used in place of the HASH JOIN when the NO_USE_HASH(eval) hint is present).
    3. Single-column secondary indexes created respectively for eval_catt and defid are not used in the execution plan - which may be expected upon further consideration.
    In the process of running stats at this moment, to see if performance improves....
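    For reference, a minimal sketch of that statistics run (the owner and the path-table names are assumptions taken from the definitions quoted earlier in the thread):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'N98991', tabname => 'EVAL',            cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'N98991', tabname => 'EVAL_IDX_TAB_I',  cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'N98991', tabname => 'EVAL_IDX_TAB_II', cascade => TRUE);
    END;
    /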
    At this point, I am really after why item 1 is occurring.

  • Is there any better option than this slow query?

    Hi all,
    I want to find out, say for a ticket number range from Variable1 to Variable2, which numbers already exist and which are missing from my ticket master table, which has a million records and keeps growing.
    For example, I want to check the range 30000 - 50000:
    if any are missing it should give the missing numbers, e.g.
    34567
    45678 etc.
    I wrote a for.. loop and I am checking one by one from the ticket master table using select count(*) from ticket_master where ticket_no = var, which is time consuming, and the server becomes slow when I issue this query. My ticket master ticket_no is indexed.
    Any better ideas? Please advise.

    I am not sure I understand your problem correctly.
    Here are some test datas:
    create table ticket_masters ( ticket_no number) ;
    exec for i in 1..1000 loop insert into ticket_masters values (round(i/0.97)); end loop
    select 200-1+rownum missing from ticket_masters where rownum<=300-200
      minus
    select ticket_no from ticket_masters where ticket_no between 200 and 300
       MISSING
           217
           250
           283
    I am selecting rownum from ticket_masters, but I could select from anything actually: a pl/sql table, dual group by cube(1,1,1,1,1,1,1,1,1,1), all_objects, ...
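    The same idea with the range supplied as bind variables (a sketch; it assumes ticket_no is numeric and that ticket_masters has at least :high - :low + 1 rows to drive the row generator):
    select :low - 1 + rownum missing
    from   ticket_masters
    where  rownum <= :high - :low + 1
    minus
    select ticket_no
    from   ticket_masters
    where  ticket_no between :low and :high;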
    Could you please do a desc ticket_masters to show me your datatype, and also select a few ticket_no values?
    Regards
    Laurent

  • Slow query for one person

    I have a simple query that runs fast for all users except one.
    He has the same version of software, the same tablespaces, privileges, etc. as other users, but the query takes about 4 minutes longer for him than for others.
    Any idea what is wrong?

    JColl4,
    You said:
    like the rain cloud that follows eyeore?
    I believe it is spelled "Eeyore".
    Good Luck,
    Avi.

  • Slow query on remote server

    Hi
    I am running a query as below;
    INSERT INTO tblEvents( <fields list> )
    SELECT <fields list>
    FROM OPENROWSET('SQLNCLI', 'Server=<ip address>;DATABASE=MyDB;Uid=sa;Pwd=MyPassword;', 'SELECT * FROM tblEvents') AS a
    WHERE (ID = 68596)
    The problem is that the part of the query below runs very slowly (takes around 3 minutes to complete) even though it only needs to bring back one row.
    SELECT <fields list>
    FROM OPENROWSET('SQLNCLI', 'Server=<ip address>;DATABASE=MyDB;Uid=sa;Pwd=MyPassword;', 'SELECT * FROM tblEvents') AS a
    WHERE (ID = 68596)
    What is the problem and how can I speed it up?
    Thanks
    Regards

    Hi,
    as Ronen already mentioned, the syntax you are using will not push any of the conditions down the provider stack to the data source; it will pull all of the data and then filter it on the client side (where you execute the query).
    You can either use the syntax that Ronen already pointed out OR use a linked server along with the existing provider for SQL Server. In most cases the provider is able to push the conditions down to the data source, have them executed there, and then bring back only the relevant rows. This depends on the query you are using and the ability of the driver to translate the conditions.
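    A sketch of the first option, with the filter moved into the remote pass-through query so that only the matching row crosses the link (same connection string and field list as above):
    INSERT INTO tblEvents( <fields list> )
    SELECT <fields list>
    FROM OPENROWSET('SQLNCLI', 'Server=<ip address>;DATABASE=MyDB;Uid=sa;Pwd=MyPassword;', 'SELECT * FROM tblEvents WHERE ID = 68596') AS a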
    See more on creating linked servers here:
    http://msdn.microsoft.com/en-us/library/aa560998.aspx
    -Jens
    Jens K. Suessmeyer
    http://blogs.msdn.com/Jenss

  • Very slow query when specifying where clause

    Hi folks:
    I am using SAS with SQL Passthrough for Oracle, however that may be irrelevant to the problem.
    I have a query that I am subsetting based on a series of where clauses, and when I put the clauses into the query, it slows it down tremendously.
    I can pull the entire table and subset it later on in SAS and it is much faster.
    Can someone explain why the SQL subsetting logic runs slower than pulling 18M rows and subsetting later?
    Here is my query - hope this is enough information
    select
    a.loan_number
    , a.CFLRC_CD
    , a.cservno
    , b.cprj_typ
    , a.occ_stat
    , c.state2
    , c.city2
    , e.hstatus
    , e.hcattype
    , e.dqind
    , e.mthsdel
    from
    VEW_LL_ln_1 a
    , vew_ll_ln_2 b
    , VEW_LL_PROP_GEO_CD c
    , vew_ll_ln_actvy_1 e
    where
    a.loan_number = b.loan_number
    and a.loan_number = c.loan_number
    and a.loan_number = e.loan_number
    and e.act_dte = '01-feb-2011'
    and not (occ_stat in ('2','3'))
    and hcattype <> '800'
    and dqind not in ('E','F','G','H')
    and cprj_typ not in ('1','2','3','4','5','16','17','18','19','20','21','22')
    and cflrc_cd <> '2'
    order by
    a.loan_number
    Thanks

    You're in the wrong forum. This is the forum for the SQL Developer tool. You will get better answers in the SQL and PL/SQL forum, which is here: PL/SQL

  • Slow query - db_cache_size ?

    Hi,
    Oracle 9.2.0.5.0 ( solaris )
    I've got a query which, when run on a production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) it takes about a tenth of the time. I have confirmed that on both machines we are getting the same plan.
    The only thing I can nail it down to is that in production I'm seeing lots more "db file sequential read" wait events. Can I assume this is due to the blocks not being in/staying in the cache?
    When running on preprod, the hit ratio for the query is 0.90+; on production it drops down to 0.70 - 0.80 (as per the query below).
    I have plenty of memory available on the machine, would it be wise to size up the caches? db_cache_size, db_keep_cache_size, db_recycle_cache_size ?
       SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
         FROM   v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
                v$sesstat P3, v$statname N3
         WHERE  N1.name = 'db block gets'
         AND    P1.statistic# = N1.statistic#
         AND    P1.sid = &sid
         AND    N2.name = 'consistent gets'
         AND    P2.statistic# = N2.statistic#
         AND    P2.sid = P1.sid
         AND    N3.name = 'physical reads'
         AND    P3.statistic# = N3.statistic#
         AND    P3.sid = P1.sid
    PRE-PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows   
      Parse        1      0.64       0.64          0          0          0           0      
      Execute      1      0.00       0.00          0          0          0           0      
      Fetch        2    186.92     329.88     162174    5144281          5           1      
      total        4    187.56     330.53     162174    5144281          5           1      
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                    160098        1.44        162.52
        db file scattered read                          1        0.00          0.00
        direct path write                              27        0.66          3.36
        direct path read                               97        0.00          0.02
        SQL*Net message from client                     2      985.79        985.79
    PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows
      Parse        1      2.41       2.34         79         16          0           0  
      Execute      1      0.00       0.00          0          0          0           0  
      Fetch        2    844.76   12305.06    1507519    5226663          0           1  
      total        4    847.17   12307.41    1507598    5226679          0           1  
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                   1502104        4.40      11849.13
        direct path write                             361        0.57          3.06
        direct path read                              361        0.05          0.88
        buffer busy waits                              36        0.02          0.17
        latch free                                      5        0.01          0.01
        log buffer space                                2        1.00          1.37
        SQL*Net message from client                     2      687.95        687.95
      Suggestions for further investigation more than welcome.

    user12044475 wrote:
    Hi,
    Oracle 9.2.0.5.0 ( solaris )
    I've got a query which, when run on a production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) it takes about a tenth of the time. I have confirmed that on both machines we are getting the same plan.
    The only thing I can nail it down to is that in production I'm seeing lots more "db file sequential read" wait events. Can I assume this is due to the blocks not being in/staying in the cache?
    There are more physical reads, and the average read time is longer. This may simply be a reflection of the fact that other people are working on the production database at the same time and (a) kicking your data out of the cache and (b) causing you to queue at the disc as they do their I/O. A larger cache MIGHT protect your data a little longer, and MAY reduce their I/O at the same time so that the I/Os are faster - but we have no idea what side effects might then appear.
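    One way to get some evidence either way before resizing (a sketch; it assumes db_cache_advice has been left enabled) is to look at the buffer cache advisory for the DEFAULT pool:
    SELECT size_for_estimate, size_factor, estd_physical_read_factor
    FROM   v$db_cache_advice
    WHERE  name = 'DEFAULT'
    AND    block_size = (SELECT value FROM v$parameter WHERE name = 'db_block_size')
    ORDER  BY size_for_estimate;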
    It's also worth considering whether you did something as you transferred the data from production to pre-production that helped to improve query performance. (As a simple example, an export/import could have eliminated a lot of row migration - and the nature of your plan means you MIGHT be suffering a lot of excess I/O from "table fetch continued row"). So, how did you get the data from production to test, how long ago, what's happened to it since, and do you have any session statistics taken as you ran the two queries?
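    A quick way to check for that row-migration symptom (a sketch; run it from the session immediately after executing the query, or compare before/after values):
    SELECT sn.name, st.value
    FROM   v$statname sn, v$mystat st
    WHERE  st.statistic# = sn.statistic#
    AND    sn.name = 'table fetch continued row';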
    Since your execution plan (prediction) is a long way off the actual run time, though (even on the pre-production system), it's probably more important to work out whether you can make your query much more efficient before you make any dramatic changes to the system. I notice that you have three existence subqueries that appear at the end of the plan - such subqueries wreck the optimizer's arithmetic in your version of Oracle and can make it do very silly things. (See for example this blog note: http://jonathanlewis.wordpress.com/2006/11/08/subquery-selectivity )
    The effect of the subqueries may (for example) be why you have a full tablescan on the second table in a nested loop join at one point in your query. The expectation of a vastly reduced number of rows may be why you are seeing nested loops all over the place when (possibly) a couple of hash joins would be more appropriate.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    +"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
    Isaac Asimov
