Query timeout for large table

Dear friends,
My view always times out because my table now has 1,800,000 rows.
What should I do with this table? Can anyone help me?
Another question: for my work I need to create 5-7 reports every day, so every time I need to create a view for those reports. I cannot always write a stored procedure, because creating a view is easier for me. But the views become slower day by day. My
server is, I think, quite good: dual quad-core Xeon processors and 32 GB of RAM.
Any advice will be appreciated.
Thanks in advance

Yes, thanks for your time. I appreciate it.
Actually, I attached those to present an idea of my database. Most of the time I need to work with just 3 or 4 tables, such as LC_Profile and student_profile, or the ROSC database.
I am adding the query, but you do not need to go through all of it. Just see how complicated my queries tend to be. My question is: is there a good way to get the result faster than with the view? I need to make several reports every day, so I use views, join many tables,
and need many WHERE clauses, CASE expressions, time conversions, etc. That is why I am asking for suggestions.
SELECT TOP (100) PERCENT dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr, dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.Division,
dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.District, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID, dbo.vw_Geocode.Upazila, dbo.vw_Geocode.UpazilaB,
dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNm, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachYr, dbo.Vw_Teacher_Active.TeachEdu,
CASE WHEN TeachEdu = 1 THEN 3000 ELSE 3000 END AS TeacherSalaryOld, dbo.LCProfile.LCAccountNo, dbo.Vw_Teacher_Active.TeachNm,
dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.Vw_Teacher_Active.TeachMob, dbo.LCProfile.UnionID, dbo.UnionCode.UnionB,
dbo.LCProfile.LCVill, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AS MDistrictID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AS MUpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AS MLCID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Venu_Info.NoWindow,
MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater, MonitoringROSCII.dbo.Venu_Info.HigLatrin,
MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard, MonitoringROSCII.dbo.Venu_Info.DistrictID AS VDistrictID,
MonitoringROSCII.dbo.Venu_Info.UpazilaID AS VUpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID AS VLCID,
MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AS UDistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AS UUpazilaID,
MonitoringROSCII.dbo.Vw_UniformYes.LCID AS ULCID, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
MonitoringROSCII.dbo.Teacher_Training.DistrictID AS TDistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID AS TUpazilaID,
MonitoringROSCII.dbo.Teacher_Training.LCID AS TLCID, MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, MonitoringROSCII.dbo.LC_Info.PrsnMale,
MonitoringROSCII.dbo.LC_Info.PrsnFemale, MonitoringROSCII.dbo.LC_Info.PrsnStdTot, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
+ RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
dbo.Vw_Teacher_Active.LCID), 3) AS InstituteID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent AS NoQStudent,
dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
dbo.vw_Geocode.Status AS UpStatus, dbo.vw_Geocode.Phase, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus,
dbo.ACF_LCs.SpecialStatus AS SpecialStatusACF, MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
TcrPres = 2) THEN 0 ELSE 3000 END AS TeacherSalary
FROM dbo.Vw_Teacher_Active RIGHT OUTER JOIN
dbo.PO RIGHT OUTER JOIN
dbo.vw_LC_Functioning RIGHT OUTER JOIN
dbo.Vw_LCProfile_QStudent_LCwise2013_3 INNER JOIN
dbo.ACF_LCs INNER JOIN
dbo.vw_Geocode INNER JOIN
dbo.LCProfile ON dbo.vw_Geocode.DistrictID = dbo.LCProfile.DistrictID AND dbo.vw_Geocode.UpazilaID = dbo.LCProfile.UpazilaID ON
dbo.ACF_LCs.DistrictID = dbo.LCProfile.DistrictID AND dbo.ACF_LCs.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.ACF_LCs.LcID = dbo.LCProfile.LCID ON
dbo.Vw_LCProfile_QStudent_LCwise2013_3.DistrictID = dbo.ACF_LCs.DistrictID AND
dbo.Vw_LCProfile_QStudent_LCwise2013_3.UpazilaID = dbo.ACF_LCs.UpazilaID AND dbo.Vw_LCProfile_QStudent_LCwise2013_3.LCID = dbo.ACF_LCs.LcID ON
dbo.vw_LC_Functioning.DistrictID = dbo.ACF_LCs.DistrictID AND dbo.vw_LC_Functioning.UpazilaID = dbo.ACF_LCs.UpazilaID AND
dbo.vw_LC_Functioning.LCID = dbo.ACF_LCs.LcID LEFT OUTER JOIN
MonitoringROSCII.dbo.Teacher_Training RIGHT OUTER JOIN
MonitoringROSCII.dbo.Venu_Info RIGHT OUTER JOIN
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo LEFT OUTER JOIN
MonitoringROSCII.dbo.Teacher_Profile ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Teacher_Profile.DistrictID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Teacher_Profile.UpazilaID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Teacher_Profile.LCID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Teacher_Profile.VisitType AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Teacher_Profile.LCVisitYr AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.Teacher_Profile.Trimister ON
MonitoringROSCII.dbo.Venu_Info.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
MonitoringROSCII.dbo.Venu_Info.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
MonitoringROSCII.dbo.Venu_Info.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
MonitoringROSCII.dbo.Venu_Info.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
MonitoringROSCII.dbo.Venu_Info.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
MonitoringROSCII.dbo.Venu_Info.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister LEFT OUTER JOIN
MonitoringROSCII.dbo.Vw_UniformYes ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Vw_UniformYes.LCID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Vw_UniformYes.VisitType AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Vw_UniformYes.LCVisitYr LEFT OUTER JOIN
MonitoringROSCII.dbo.LC_Info ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.LC_Info.DistrictID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.LC_Info.UpazilaID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.LC_Info.LCID AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.LC_Info.VisitType AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.LC_Info.LCVisitYr AND
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.LC_Info.Trimister ON
MonitoringROSCII.dbo.Teacher_Training.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
MonitoringROSCII.dbo.Teacher_Training.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
MonitoringROSCII.dbo.Teacher_Training.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
MonitoringROSCII.dbo.Teacher_Training.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
MonitoringROSCII.dbo.Teacher_Training.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
MonitoringROSCII.dbo.Teacher_Training.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister ON
dbo.ACF_LCs.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
dbo.ACF_LCs.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
dbo.ACF_LCs.LcID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID ON dbo.PO.DistrictID = dbo.LCProfile.DistrictID AND
dbo.PO.UpazilaID = dbo.LCProfile.UpazilaID ON dbo.Vw_Teacher_Active.DistrictID = dbo.LCProfile.DistrictID AND
dbo.Vw_Teacher_Active.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.Vw_Teacher_Active.LCID = dbo.LCProfile.LCID LEFT OUTER JOIN
dbo.UnionCode ON dbo.LCProfile.UnionID = dbo.UnionCode.UnionID AND dbo.LCProfile.UpazilaID = dbo.UnionCode.UpazilaID AND
dbo.LCProfile.DistrictID = dbo.UnionCode.DistrictID LEFT OUTER JOIN
dbo.vw_Bank_Branch ON dbo.LCProfile.LCBankBr = dbo.vw_Bank_Branch.BranchID
GROUP BY dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID,
dbo.vw_Geocode.UpazilaB, dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachEdu, dbo.LCProfile.LCAccountNo,
dbo.Vw_Teacher_Active.TeachNm, dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.UnionCode.UnionB, dbo.vw_Geocode.Division,
dbo.vw_Geocode.District, dbo.vw_Geocode.Upazila, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
MonitoringROSCII.dbo.Venu_Info.NoWindow, MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater,
MonitoringROSCII.dbo.Venu_Info.HigLatrin, MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard,
MonitoringROSCII.dbo.Venu_Info.DistrictID, MonitoringROSCII.dbo.Venu_Info.UpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID,
MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
MonitoringROSCII.dbo.Vw_UniformYes.DistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID, MonitoringROSCII.dbo.Vw_UniformYes.LCID,
MonitoringROSCII.dbo.Teacher_Training.DistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID, MonitoringROSCII.dbo.Teacher_Training.LCID,
MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, dbo.LCProfile.UnionID, MonitoringROSCII.dbo.LC_Info.PrsnMale, MonitoringROSCII.dbo.LC_Info.PrsnFemale,
MonitoringROSCII.dbo.LC_Info.PrsnStdTot, dbo.LCProfile.LCVill, dbo.Vw_Teacher_Active.TeachMob, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
+ RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
dbo.Vw_Teacher_Active.LCID), 3), MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
dbo.vw_Geocode.Status, dbo.vw_Geocode.Phase, dbo.Vw_Teacher_Active.TeachYr, dbo.LCProfile.LCNm,
MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus, dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr,
dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.ACF_LCs.SpecialStatus,
MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
TcrPres = 2) THEN 0 ELSE 3000 END
HAVING (dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
(MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = 2014) AND (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = 1) AND
(MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = 3) AND (NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)) OR
(dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
(MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)
ORDER BY dbo.vw_Geocode.DivisionID
Another problem is this:
Let's say I have a table with id, name, place, address, timeof_attendance, where id is of type bigint. This table has 1,800,000 records and grows by about 5,000 records each day. From it I work out the attendance for every day, so I have to create
4 nested views to arrive at a result. Now my query times out. If I delete old data, it works. This is the kind of problem I am facing.
Please advise me.
Thanks

Similar Messages

  • HS connection to MySQL fails for large table

    Hello,
    I have set up an HS connection to a MySQL database using an ODBC 3.51 DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
    I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it doesn't give any error. This is the error thrown by SQL*Plus:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-00942: table or view does not exist
    [Generic Connectivity Using ODBC][MySQL][ODBC 3.51
        Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
    (SQL State: S1T00; SQL Code: 2013)
    ORA-02063: preceding 2 lines from MYSQL_RMG
    I traced the HS connection and here is the result from the .trc file:
    Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
    Heterogeneous Agent Release
    10.2.0.1.0
    (0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
    (0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
    (0) IDENTIFIER_QUOTE_CHAR="",
    (0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
    (0) tasource name='MYSQL_RMG' type='ODBC'
    (0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
    (0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
    (0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
    (0) tokenSize='1000' noInsertParameterization='true'
    noThreadedReadAhead='true'
    (0) noCommandReuse='true'/></environment></binding></navobj>
    (0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
    (0) hoadtab(26); Entered.
    (0) Table 1 - PROGRESSNOTES
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
    (0) memory (SQL State: S1T00; SQL Code: 2008)
    (0) (Last message occurred 2 times)
    (0)
    (0) hoapars(15); Entered.
    (0) Sql Text is:
    (0) SELECT * FROM "PROGRESSNOTES"
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
    (0) server during query (SQL State: S1T00; SQL Code: 2013)
    (0) (Last message occurred 2 times)
    (0)
    (0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
    (0) log file for details.
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
    (0) the log file for details.
    (0) Closing log file at THU JUN 12 11:20:38 2008.
    I have read the MySQL documentation, and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to keep the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-02068: following severe error from MYSQL_RMG
    ORA-28511: lost RPC connection to heterogeneous remote agent using
    SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
    Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
    Is there an additional parameter that needs to be set up in the inithsodbc.ora, perhaps? These are the current HS parameters:
    # HS init parameters
    HS_FDS_CONNECT_INFO = MYSQL_RMG
    HS_FDS_TRACE_LEVEL = ON
    My SID_LIST_LISTENER entry is:
    (SID_DESC =
    (PROGRAM = HSODBC)
    (SID_NAME = MYSQL_RMG)
    (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    Finally, here is my TNSNAMES.ORA entry for the HS connection:
    MYSQL_RMG =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
    (CONNECT_DATA =
    (SID = MYSQL_RMG)
    (HS = OK)
    Your advice will be greatly appreciated,
    Thanks,
    Luis
    Message was edited by:
    lmconsite

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially considering your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    indicates that the driver or the database aborts the connection due to a timeout.
    Check the wait_timeout MySQL variable on the server and increase it.
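    As a rough illustration of that suggestion (wait_timeout and net_write_timeout are standard MySQL server variables; the values shown are only examples):
    -- Check the current timeout settings on the MySQL server
    SHOW VARIABLES LIKE 'wait_timeout';
    SHOW VARIABLES LIKE 'net_write_timeout';
    -- Raise them for the running server (values are in seconds; example values only)
    SET GLOBAL wait_timeout = 28800;
    SET GLOBAL net_write_timeout = 600;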

  • Gather table stats taking longer for Large tables

    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats. Apart from the row count and index info, what other information does gather table stats collect?
    Does table size actually matter for stats collection?

    Max wrote:
    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats. Apart from the row count and index info, what other information does gather table stats collect?
    09:40:05 SQL> desc user_tables
    Name                            Null?    Type
    TABLE_NAME                       NOT NULL VARCHAR2(30)
    TABLESPACE_NAME                        VARCHAR2(30)
    CLUSTER_NAME                             VARCHAR2(30)
    IOT_NAME                             VARCHAR2(30)
    STATUS                              VARCHAR2(8)
    PCT_FREE                             NUMBER
    PCT_USED                             NUMBER
    INI_TRANS                             NUMBER
    MAX_TRANS                             NUMBER
    INITIAL_EXTENT                         NUMBER
    NEXT_EXTENT                             NUMBER
    MIN_EXTENTS                             NUMBER
    MAX_EXTENTS                             NUMBER
    PCT_INCREASE                             NUMBER
    FREELISTS                             NUMBER
    FREELIST_GROUPS                        NUMBER
    LOGGING                             VARCHAR2(3)
    BACKED_UP                             VARCHAR2(1)
    NUM_ROWS                             NUMBER
    BLOCKS                              NUMBER
    EMPTY_BLOCKS                             NUMBER
    AVG_SPACE                             NUMBER
    CHAIN_CNT                             NUMBER
    AVG_ROW_LEN                             NUMBER
    AVG_SPACE_FREELIST_BLOCKS                   NUMBER
    NUM_FREELIST_BLOCKS                        NUMBER
    DEGREE                              VARCHAR2(10)
    INSTANCES                             VARCHAR2(10)
    CACHE                                  VARCHAR2(5)
    TABLE_LOCK                             VARCHAR2(8)
    SAMPLE_SIZE                             NUMBER
    LAST_ANALYZED                             DATE
    PARTITIONED                             VARCHAR2(3)
    IOT_TYPE                             VARCHAR2(12)
    TEMPORARY                             VARCHAR2(1)
    SECONDARY                             VARCHAR2(1)
    NESTED                              VARCHAR2(3)
    BUFFER_POOL                             VARCHAR2(7)
    FLASH_CACHE                             VARCHAR2(7)
    CELL_FLASH_CACHE                        VARCHAR2(7)
    ROW_MOVEMENT                             VARCHAR2(8)
    GLOBAL_STATS                             VARCHAR2(3)
    USER_STATS                             VARCHAR2(3)
    DURATION                             VARCHAR2(15)
    SKIP_CORRUPT                             VARCHAR2(8)
    MONITORING                             VARCHAR2(3)
    CLUSTER_OWNER                             VARCHAR2(30)
    DEPENDENCIES                             VARCHAR2(8)
    COMPRESSION                             VARCHAR2(8)
    COMPRESS_FOR                             VARCHAR2(12)
    DROPPED                             VARCHAR2(3)
    READ_ONLY                             VARCHAR2(3)
    SEGMENT_CREATED                        VARCHAR2(3)
    RESULT_CACHE                             VARCHAR2(7)
    09:40:10 SQL> >
    Does table size actually matter for stats collection?
    Yes.
    Handle:     Max
    Status Level:     Newbie
    Registered:     Nov 10, 2008
    Total Posts:     155
    Total Questions:     80 (49 unresolved)
    why so many unanswered questions?
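    For reference, a minimal sketch of a gather call whose options control how much work is done (the parameters are standard DBMS_STATS options; the table name and parallel degree are only examples):
    BEGIN
      -- Let Oracle choose the sample size and gather in parallel; on large tables
      -- this is usually much faster than a full compute.
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'BIG_TABLE',                       -- example table name
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',       -- column stats / histograms
        degree           => 4,                                 -- example parallel degree
        cascade          => TRUE);                             -- also gather index stats
    END;
    /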

  • How can I setup the query timeout for SQVI in basis?

    Hi,
    I want to set up a query timeout for a particular user so that if his query (created in SQVI) takes more than, say, 10 minutes, it automatically times out and the system resources are freed up.
    How can I do this for a specific user? Also, in case I can't do it for a specific user, how can I do it for all users?
    Thanks for reading

    The memory is limited by the parameters
    abap/heap_area_dia
    abap/heap_area_nondia
    abap/heap_area_total
    which are system wide.
    If a user requests more memory than those, the program will dump.
    Markus

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        (SELECT RID, rownum rnum
         FROM
            (SELECT rowid as RID
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
      and RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*      -- Select all data from members table
    FROM members,         -- members table added to FROM clause
        (SELECT RID, rownum rnum
         FROM
            (SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
      and RID = members.rowid         -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query, where the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
     | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
     |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
     |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
     |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
     |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help in optimising the code. It takes 1 hour for 3,000-4,000 records. It's very slow.
    My SELECT is reading from a table which contains 10 million records.
    I am writing the SELECT on the large table and retrieving values from it by comparing against my table, which has 3-4 k records.
    I am pasting the code. Please help.
    Data: wa_i_tab1 type tys_tg_1 .
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    Data : wa_result_pkg type tys_tg_1,
    wa_result_pkg1 type tys_tg_1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1 from
    /BIC/PZREB_SDAT                    "this table contains 10 million records
    into CORRESPONDING FIELDS OF table i_tab
    FOR ALL ENTRIES IN RESULT_PACKAGE  "contains 3000-4000 records
    where
    /bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
    AND
    AGREEMENT = RESULT_PACKAGE-AGREEMENT
    AND /BIC/ZLITEM1 = RESULT_PACKAGE-/BIC/ZLITEM1.
    sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    loop at RESULT_PACKAGE into wa_result_pkg.
    read TABLE i_tab INTO wa_i_tab1 with key
    /BIC/ZREB_SDAT =
    wa_result_pkg-/BIC/ZREB_SDAT
    AGREEMENT = wa_result_pkg-AGREEMENT
    /BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
    IF SY-SUBRC = 0.
    move wa_i_tab1-/BIC/ZSETLRUN to
    wa_result_pkg-/BIC/ZSETLRUN.
    wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
    modify RESULT_PACKAGE from wa_result_pkg1
    TRANSPORTING /BIC/ZSETLRUN.
    ENDIF.
    CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
    endloop.

    Hi,
    1) Check whether the RESULT_PACKAGE internal table contains duplicate records with respect to the WHERE condition fields, as below.
    2) Remove INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    sort RESULT_PACKAGE1 by /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    delete adjacent duplicates from RESULT_PACKAGE1 comparing /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
    from /BIC/PZREB_SDAT
    into table i_tab
    FOR ALL ENTRIES IN RESULT_PACKAGE1
    where
    /bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
    AND
    AGREEMENT = RESULT_PACKAGE1-AGREEMENT
    AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
    One more thing: you are reading from a 10-million-record table, so use PACKAGE SIZE in your SELECT query.
    Also refer to the following link: For All Entries for 1 Million Records.
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM
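    A rough ABAP sketch of the PACKAGE SIZE idea mentioned above (untested; the table, fields, and type come from the thread, the chunk size is only an example, and the FOR ALL ENTRIES restriction from the reply would still have to be combined with it):
    DATA lt_chunk TYPE STANDARD TABLE OF tys_tg_1.
    "Read the large table in chunks instead of one big fetch; each pass fills
    "lt_chunk with at most 50,000 rows, which keeps memory use bounded.
    SELECT /bic/zsetlrun agreement /bic/zreb_sdat /bic/zlitem1
           FROM /bic/pzreb_sdat
           INTO CORRESPONDING FIELDS OF TABLE lt_chunk
           PACKAGE SIZE 50000.
      "Process the current chunk here (e.g. match it against RESULT_PACKAGE).
    ENDSELECT.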

  • Postgres' LIMIT .. OFFSET for large table

    Hi!
    I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
    Some years ago, I have done this using PostgreSQL. There's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
    Currently, my 'workaround' looks like this (a bit more complex in reality):
    SELECT * FROM (
        SELECT
            ROW_NUMBER() OVER (ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
            TO_CHAR(MSG_RCV_TIME) MSG_RCV
        FROM MSG_TABLE
        ORDER BY MSG_RCV_TIME DESC) WHERE ROWNO BETWEEN 1 AND 10
    This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server falls into a timeout before even printing one line. First, Oracle has to suck in all x*1'000'000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
    In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
    Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
    Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
    Thanks a lot in advance
    André

    See Tom Kyte's site for this
    Cool. Didn't know this one. How is he checking the performance of the queries?
    The one comment in there that I entirely agree with
    is that such large result sets are meaningless to the
    human eye so I would question exactly what you are
    trying to achieve. As Tom rightly says, nobody is
    ever going to scroll down to rows 999001 - 999010,
    even if they could.
    Of course not. But you see, as an example, that if you type just one word into Google's search box, it returns loads of pages. As soon as you see that your query was not really a good one, you try again with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First it gives you an overview, then it lets you refine the search.
    Anyway: As soon as I limit the output in the innermost query, I doubt it's useful: Say, I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'll miss the rows you were maybe looking for.
    It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
    Thanks again for the great link.
    André (who really starts to like Oracle and its community)
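    For reference, a rough sketch of the pagination template referenced above (the bind variable names are illustrative; MSG_TABLE and MSG_RCV_TIME come from the question). With a FIRST_ROWS hint and an index on the ORDER BY column, Oracle can often stop after :max_row rows via COUNT STOPKEY instead of sorting the whole table:
    SELECT *
      FROM (SELECT /*+ FIRST_ROWS(25) */ a.*, ROWNUM rnum
              FROM (SELECT TO_CHAR(MSG_RCV_TIME) AS MSG_RCV
                      FROM MSG_TABLE
                     ORDER BY MSG_RCV_TIME DESC) a
             WHERE ROWNUM <= :max_row)    -- e.g. 25 for the first page
     WHERE rnum >= :min_row;              -- e.g. 1 for the first page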

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they suit different situations. For example, suppose our table has one hundred columns and
    some columns are not directly related to the table's subject (say, a table named userinfo stores user information and has address_street, address_zip, and address_province columns; in that case we can create a new table named Address
    and add a foreign key in the userinfo table referencing the Address table). In this situation, by splitting the large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Another
    situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; in that case we can partition the table into filegroups to improve query performance. Usually we apply
    both methods together. Additionally, we can add indexes to the table to improve query performance. For more detailed information, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
    Allen Li
    TechNet Community Support
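    A minimal sketch of the filegroup-partitioning approach described above (the partition function, scheme, filegroup, table, and column names are all hypothetical examples; the filegroups must already exist in the database):
    -- Partition rows by year; each boundary range maps to its own filegroup.
    CREATE PARTITION FUNCTION pf_ReleaseYear (int)
        AS RANGE LEFT FOR VALUES (2010, 2011, 2012);
    CREATE PARTITION SCHEME ps_ReleaseYear
        AS PARTITION pf_ReleaseYear TO (fg2010, fg2011, fg2012, fgCurrent);
    CREATE TABLE dbo.Product
    (
        ProductId   int           NOT NULL,
        ProductName nvarchar(100) NOT NULL,
        ReleaseYear int           NOT NULL
    ) ON ps_ReleaseYear (ReleaseYear);   -- rows are placed into filegroups by ReleaseYear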

  • Query Needed for Partitioning table

    Hi,
    I have created a table called Test. There is a column named business_name.
    There are several businesses like ABC,BCD,ADE....
    There will be lakhs (hundreds of thousands) of rows corresponding to each business; I mean there will be lakhs of entries corresponding to ABC, BCD, ...
    So I would like to partition the table by business_name so that searches will be faster. Once it is partitioned by business_name, I hope we need to search only the partition corresponding to the particular business.
    Can anyone provide the query to partition the table 'TEST' according to the column 'business_name'?
    Also, can anyone provide the query to modify the already existing table 'TEST' to incorporate partitioning on the column 'business_name'?

    We can partition a table as follows:
    create table Generalledger (
         record_id        number,
         business_name    varchar2(3),
         sales_dt         date,
         amount           number(10)
    )
    partition by list (business_name)
    (
         partition ct  values ('ABC'),
         partition ca  values ('BCD'),
         partition def values (default)
    );
    But if we don't know the values like 'ABC', 'BCD',
    ... how can we do the partitioning?
    Use SQL to generate part (or all) of your DDL statement. The following will output one partition clause for each business_name:
    SELECT DISTINCT 'partition p_' || BUSINESS_NAME || ' values (''' ||
                     BUSINESS_NAME || '''),'
    FROM GENERALLEDGER;

  • Slow query due to large table and full table scan

    Hi,
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but maxtime can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    SELECT table1.column, table2.column, table3.column
    FROM table1
    JOIN table2 on table1.table2Id = table2.id
    LEFT JOIN table3 on table2.table3id = table3.id
    WHERE table1.id IN(
    SELECT id
    FROM (
    (SELECT a.*, rownum rnum FROM(
    SELECT table1.id
    FROM table1,
    table2,
    table3
    WHERE
    table1.table2id = table2.id
    AND
    table2.table3id IS NULL OR table2.table3id = :table3IdParameter
    ) a
    WHERE rownum <= :end))
    WHERE rnum >= :start
    Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
    Can we avoid this? We have, what we think are, the correct indexes.
    /best regards, Håkan

    >
    Hi Håkan - welcome to the forum.
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but maxtime can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    Firstly, please read the forum FAQ - top right of page.
    Please format your SQL using tags [code /code].
    In order to help us to help you.
    Please post table structures - relevant (i.e. joined, FK, PK fields only) in the form - note use of code tags - we can just run table create script.
    CREATE TABLE table1
    (
      Field1  Type1,
      Field2  Type2,
      FieldN  TypeN
    );
    Then give us some table data - not 100's of records - just enough in the form
    INSERT INTO Table1 VALUES(Field1, Field2.... FieldN);
    Please post the EXPLAIN PLAN - again with code tags.
    HTH,
    Paul...
    /best regards, Håkan

  • "db file scattered read" too high and Query going for full table scan-Why ?

    Hi,
    I have a big table of around 200 MB with an index on it.
    In my query I am using a WHERE clause which should use the
    index. I am neither using any NOT NULL condition
    nor using any function on the index fields.
    Still my query is not using the index;
    it is going for a full table scan.
    Also, the statspack report is showing the
    "db file scattered read" wait as too high.
    Can anybody help and suggest why this is happening?
    Also tell me the possible solution for it.
    Thanks
    Arun Tayal

    "db file scattered read" are physical reads/multi block reads. This wait occurs when the session reading data blocks from disk and writing into the memory.
    Take the execution plan of the query and see what is wrong and why the index is not being used.
    However, FTS are not always bad. By the way, what is your db_block_size and db_file_multiblock_read_count values?
    If those values are set to high, Optimizer always favour FTS thinking that reading multiblock is always faster than single reads (index scans).
    Dont see oracle not using index, just find out why oracle is not using index. Use the INDEX hint to force optimizer to use index. Take the execution with/witout index and compare the cardinality,cost and of course, logical reads.
    Jaffar
    Message was edited by:
    The Human Fly
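    A minimal sketch of that plan comparison (the table, column, and index names are hypothetical):
    -- Plan without any hint
    EXPLAIN PLAN FOR
      SELECT * FROM big_table WHERE status_col = 'OPEN';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- Plan with the INDEX hint forcing the candidate index
    EXPLAIN PLAN FOR
      SELECT /*+ INDEX(t big_table_status_idx) */ *
        FROM big_table t
       WHERE status_col = 'OPEN';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);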

  • ALV: how to save context space for large tables ?

    Dear collegues,
    We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explanatory tooltip that are displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon (they need a lot of characters for each tooltip and icon).
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
    Do you mean quite large as in a large number of columns or as in a large number of rows (or both)?  I assume that the problem is probably more related to a large number of rows.  For very large tables, you should consider using the table instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time.  Here is a recent blog that I created on the topic with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • Query statement for internal table

    Is it possible to use a SELECT statement to select data from an internal table? If yes, can anyone show me the code for it? Thanks.

    Hi Daphne,
    You use the SELECT statement to read data from a database table, but not from an internal table.
    For reading data from an internal table, you have to use the READ TABLE statement.
    Syntax:
    READ TABLE itab { table_key
                    | free_key
                    | index } result.
    Effect of using read statement:
    This statement reads a row from internal table itab. You have to specify the row by either naming values table_key for the table key, a free condition free_key or an index index. The latter choice is possible only for index tables. The output result result determines when and where the row contents are read.
    If the row to be read is not uniquely specified, the first suitable row is read. In the case of index tables, this row has the lowest table index of all matching rows.
    Reward if useful.
    thanks
    swaroop
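    A small illustrative example of the READ TABLE statement described above (the structure and key values are made up for the example):
    TYPES: BEGIN OF ty_flight,
             carrid TYPE c LENGTH 3,
             connid TYPE n LENGTH 4,
             price  TYPE p LENGTH 8 DECIMALS 2,
           END OF ty_flight.
    DATA: lt_flights TYPE STANDARD TABLE OF ty_flight,
          ls_flight  TYPE ty_flight.
    "Read a single row by key; sy-subrc = 0 when a matching row was found.
    READ TABLE lt_flights INTO ls_flight
         WITH KEY carrid = 'LH' connid = '0400'.
    IF sy-subrc = 0.
      WRITE: / ls_flight-price.
    ENDIF.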

  • Web show document timeout for large data in file

    Hi,
    I'm using Oracle Application Server 10.1.2.0.2 as a middleware and we launch an excel file using web.show_document. The problem that i'm facing is that if the data in the excel file is too large (I'm unable to quantify large!) the browser is unable to show the document. I have tried to increase the timeout in the Apache settings, but still the problem persists. Has anybody faced such a problem with web.show_document?

    OK, then you should get a thread dump of WLS just after your DBA confirms that all the heavy lifting for a given transaction is done, to see what WLS thinks it needs to be doing. Actually, I would open a support case and get instructions how to turn on JTA logging, so we'll see step-by-step, timestamp-by-timestamp the progress of the transaction.

  • CAML query performance for large lists

    I have a list with more than 10,000 items. I am retrieving the items and displaying them in a RAD Grid on my page using a CAML query. Around 1,000 records are retrieved due to the filter. I have enabled paging in my grid and PageSize is
    set to 25. I have noticed that the load time of my page is very slow, as it retrieves all 1,000 records at once.
    Is it possible to retrieve just 25 records for the first page on load? On clicking the Next button or a page number, it should retrieve the next set of 25 records for that particular page.
    I want to know if there is any way to link CAML query paging with RAD Grid paging.
    Any code example would be greatly helpful.

    Hi,
    For paginating SPListItem collections, use the SPQuery.ListItemCollectionPosition property.
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.listitemcollectionposition(v=office.15).aspx
    Check these useful URLs:
    http://omourad.blogspot.in/2009/07/paging-with-listitemcollectionposition.html
    http://www.anmolrehan-sharepointconsultant.com/2011/10/client-object-model-access-large-lists.html
    Anil
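    A rough C# sketch of that paging approach using the server object model (the site URL, list name, CAML filter, and page size are example values only):
    using Microsoft.SharePoint;
    static void LoadAllPages()
    {
        // Retrieve one 25-row page at a time instead of all filtered items at once.
        using (SPSite site = new SPSite("http://server/sites/demo"))         // example URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["LargeList"];                            // example list name
            SPQuery query = new SPQuery();
            query.RowLimit = 25;                                             // page size
            query.Query = "<Where><Eq><FieldRef Name='Status'/>" +
                          "<Value Type='Text'>Active</Value></Eq></Where>";  // example filter
            SPListItemCollectionPosition position = null;
            do
            {
                query.ListItemCollectionPosition = position;                 // null = first page
                SPListItemCollection items = list.GetItems(query);
                // Bind 'items' to the current grid page here.
                position = items.ListItemCollectionPosition;                 // null when no more pages
            } while (position != null);
        }
    }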
