View vs temp table

Can anyone please explain the difference between using a view and a temp table?
The reason I am asking: one of our jobs takes 4 hours to complete using a view; if the same view is replaced by a table, it takes just 20 minutes. If the base tables are being modified, will the results be the same for both, i.e. using a view vs. a temp table in place of the view? Please help. Thanks in advance.
Niki

P.S.  If the view is indexed (materialized) all bets are off!
This is not true; SQL Server does not work the way other products like Oracle do in this respect. Indexed views are updated as the data changes in the base tables, as part of the very transaction that changes the data. This is why there are limitations and criteria governing which kinds of views can be indexed (e.g., no self joins).
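As a hedged illustration (the table and view names here are invented, not from the thread), materializing a view in SQL Server means giving it a unique clustered index:

```sql
-- Invented names for illustration only.
CREATE VIEW dbo.v_OrderTotals
WITH SCHEMABINDING  -- required before a view can be indexed
AS
SELECT CustomerId,
       COUNT_BIG(*)     AS OrderCount   -- COUNT_BIG(*) is mandatory with GROUP BY
FROM dbo.Orders
GROUP BY CustomerId;
GO
-- This index is what materializes the view; from here on SQL Server
-- maintains the stored result inside each transaction that touches dbo.Orders.
CREATE UNIQUE CLUSTERED INDEX IX_v_OrderTotals
    ON dbo.v_OrderTotals (CustomerId);
```

There are many more restrictions of the kind mentioned above (no outer joins, no SUM over nullable columns, and so on), which is the price of having the view maintained transactionally.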
Jonathan Kehayias
http://sqlblog.com/blogs/jonathan_kehayias/
http://www.sqlclr.net/
Please click the Mark as Answer button if a post solves your problem!

Similar Messages

  • Temp Table in view

    I know we cannot use a temp table or a table variable in a view, but I want to know the reason behind it.

    Assuming the second batch is executed from a 2nd connection, there is no doubt that the query should return 46 on that 2nd connection. If the 1st connection would run the same SELECT statement, it should return 33.
    And what if the definitions of the temp tables are:
    CREATE TABLE #temp(a int NOT NULL)
    CREATE TABLE #temp(b float NOT NULL)
    Keep in mind how view definitions are stored in SQL Server.
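A sketch of the ambiguity this creates (my own illustration, using the definitions and values from the thread): two connections can each have a #temp with the same name but a different definition.

```sql
-- Connection 1
CREATE TABLE #temp (a int NOT NULL);
INSERT INTO #temp VALUES (33);

-- Connection 2, at the same time
CREATE TABLE #temp (b float NOT NULL);
INSERT INTO #temp VALUES (46);

-- A view is a single stored definition, e.g.
--   CREATE VIEW v AS SELECT * FROM #temp;
-- would have to bind to a different table, with a different schema,
-- depending on which connection queries it -- which is why it is disallowed.
```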
    I don't know exactly why the original poster wanted this, but what does make sense is of course a temporary view, preferably one that does not require a batch of its own. A couple of years ago, I wrote a stored procedure where I found that I repeated the same conditions over and over. This made me yearn for something like a module-level CTE, and I submitted this Connect item:
    http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=343067
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Multiple users accessing the same data in a global temp table

    I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that everyone who accesses the web page sees only their own data in the GTT.
    We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
    I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
    The GTT isn't going to work unless we get new sessions. How do we do this?
    Thanks!

    DGS wrote:
    > The GTT isn't going to work unless we get new sessions. How do we do this?
    You may want to try changing your GTT to 'ON COMMIT DELETE ROWS' and have the .NET app use a transaction object.
    We had a similar problem and I found help in the following thread:
    Re: Global temp table problem w/ODP?
    All the best.
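For reference, the suggested change is a one-line difference in the GTT's definition (the column names below are invented; only the ON COMMIT clause matters):

```sql
-- Invented columns; the key part is the ON COMMIT clause.
CREATE GLOBAL TEMPORARY TABLE gtt_page_data (
    user_name  VARCHAR2(30),
    result_val NUMBER
) ON COMMIT DELETE ROWS;   -- rows vanish at COMMIT, so a pooled session
                           -- handed to the next web user starts empty
```

With ON COMMIT PRESERVE ROWS the rows survive until the session ends, which is exactly what leaks data between users when the application server reuses pooled sessions.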

  • Benefit and limitation of temp table in sql server 2008

    I have a DataGrid in an ASP.NET front end. When the user selects a row, it shows another GridView (multiple rows) where the user has to enter some data. When the user clicks a CLOSE button, I have to keep all data of the 2nd GridView keyed to the 1st GridView's row id.
    The same process repeats for each row of the 1st GridView. If I keep all the records in a DataTable in view state and only save to the database table when the final save is clicked, the process may be very slow due to the large amount of data in view state. So I am thinking of storing the data in a temp table in the database on each CLOSE click. For this purpose, which kind of table should I use?
    1. Local temporary tables
    2. Global temporary tables
    3. Normal tables
    Multiple users may do the same thing at the same time. Please help me.
    Thanks

    >1. Local temporary tables
    >2. Global temporary tables
    >3. Normal tables.
    When used in stored procedures, local temporary tables (#table) are automatically multi-user: each session gets its own instance.
    For global temp (##table) and normal tables, you need to develop your own multi-user logic.
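A minimal sketch of option 1 (procedure and column names are invented): a #table created inside a stored procedure is private to the calling session and is dropped automatically when the procedure returns.

```sql
-- Invented names; sketch only.
CREATE PROCEDURE dbo.StageGridRows
    @master_row_id int
AS
BEGIN
    CREATE TABLE #pending (
        master_row_id int           NOT NULL,
        detail_value  nvarchar(100) NOT NULL
    );

    -- Each concurrent user populates a private copy of #pending here,
    -- then copies the validated rows into the permanent table.

    -- #pending is dropped automatically when the procedure ends.
END
```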
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Global Temp Table or Permanent Temp Tables

    I have been doing research for a few weeks, trying to confirm theories with bench tests about which is more performant: GTTs or permanent "temp" tables. I was curious what others felt on this topic.
    I used FOR loops to test insert performance, and at high row counts the permanent table often seemed much faster than the GTT, contrary to many white papers and case studies I have read claiming that GTTs are much faster.
    All I did was FOR loops iterating INSERT/VALUES up to 10 million records. For 10 million records, the permanent table was over 500,000 milliseconds faster.
    Does anyone have useful tips or info that can help me determine which will be best in certain cases? The tables will be used for staging in ETL batch processing into a data warehouse. Rows in my fact and detail tables can reach the millions before being moved to archives. Thanks so much in advance.
    -Tim

    > Do you have any specific experiences you would like to share?
    I use both - GTTs and plain normal tables. The problem dictates the tools. :-)
    I do have an exception though that does not use GTTs and still supports "restartability".
    I need to continuously roll up (aggregate) data. Raw data collected for an hour gets aggregated into an hourly partition. Hourly partitions get rolled up into a daily partition. Several billion rows are processed like this monthly.
    The eventual method I've implemented is a cross between materialised views and GTTs. Instead of dropping or truncating the source partition and running an insert to repopulate it with the latest aggregated data, I wrote an API that takes the name of the destination table, the name of the partition to "refresh", and a SQL statement (which does the aggregation, much like the SELECT part of an MV).
    It creates a brand-new staging table using a CTAS, inspects the partitioned table, slaps the same indexes on the staging table, and then performs a partition exchange to replace the stale contents of the partition with that of the freshly built staging table.
    No expensive delete. No truncate that leaves an empty, query-useless partition for several minutes while the data is refreshed.
    And any number of these partition refreshes can run in parallel.
    Why not use a GTT? Because they cannot be used in a partition exchange. And the cost of writing data into a GTT has to be weighed against the cost of using that data by writing it (or some of it) into permanent tables. Ideally one wants to plough through a data set once.
    Oracle has a fairly rich feature set - and these can be employed in all kinds of ways to get the job done.
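A hedged sketch of that CTAS-plus-exchange pattern (all names are invented; the real API described above automates these steps):

```sql
-- 1. Build a fresh staging table with the latest aggregation.
CREATE TABLE stage_hour AS
SELECT period_key, metric_id, SUM(raw_value) AS agg_value
FROM   raw_data
WHERE  period_key = 2024010112
GROUP  BY period_key, metric_id;

-- 2. (Re-create the partitioned table's local indexes on stage_hour here.)

-- 3. Swap the staging table in for the stale partition: a data dictionary
--    operation, with no expensive delete or truncate of the old rows.
ALTER TABLE hourly_agg
    EXCHANGE PARTITION p_2024010112 WITH TABLE stage_hour
    INCLUDING INDEXES WITHOUT VALIDATION;
```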

  • Global Temp Table or PL/SQL Table

    I am trying to determine if this can be done using only a PL/SQL table. If not, will the usage of a global temp table affect performance?
    Here is the situation,
    I have a data block that is based on a stored procedure. This stored procedure returns a table of records from different database tables with join conditions. Some of the fields within the table of records will not have data returned from the database tables; they will be displayed as fields on the form, and their data will be entered by the user.
    For example:
    Records will look like:
    Id       (will be populated by procedure)
    Hist_avg (will be populated by procedure)
    My_avg   (will be used as a field on the form so that the user can enter their own avg)
    Checked  (will be populated by procedure)
    My questions are:
    1.     Is this doable in Forms using a data block based on a PL/SQL table?
    2.     Will users be able to manipulate (update) the data based on the PL/SQL table in memory as they wish, and invoke the procedure to update the underlying table when clicking a button (Update Avg)?
    3.     What are the advantages of a PL/SQL table vs. a global temp table, from the database and Forms points of view?
    Any info is appreciated.

    Hi there...
    Here is the Reference...
    http://asktom.oracle.com/pls/ask/f?p=4950:8:2939484874961025998::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:604830985638
    Best Regards...
    Muhammad Waseem Haroon

  • Global Temp Table Not found - SSIS

    I am facing the below error while using a global temp table in SSIS.
    [OLE DB Destination [78]] Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E37.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E37  Description: "Table/view either does not exist or contains errors.".
    [OLE DB Destination [78]] Error: Failed to open a fastload rowset for " ##AGENTDTLS". Check that the object exists in the database.
    [SSIS.Pipeline] Error: component "OLE DB Destination" (78) failed the pre-execute phase and returned error code 0xC0202040.
    1) For the connection manager, RetainSameConnection is set to True.
    2) Data Flow task: DelayValidation is set to True.
    3) Destination task (using the temp table): ValidateExternalMetadata is set to False.
    4) I am using just one data connection.
    5) Before using the temp table I check whether it exists; if yes, I drop it first and re-create it.
    Not able to understand the reason for the failure.
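For what it's worth, the existence check in step 5 is usually written like this (a sketch: the column list is invented, only ##AGENTDTLS comes from the error message):

```sql
-- Drop-if-exists guard for the global temp table named in the error.
IF OBJECT_ID('tempdb..##AGENTDTLS') IS NOT NULL
    DROP TABLE ##AGENTDTLS;

CREATE TABLE ##AGENTDTLS (
    agent_id   int          NOT NULL,   -- invented columns
    agent_name nvarchar(50) NOT NULL
);
```

Also worth checking: the quoted error shows the destination as " ##AGENTDTLS" with a leading space, which by itself would make the fastload rowset lookup fail.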

    Why don't you use a permanent table in tempdb?
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Can a tabular form be created/used against a GLOBAL TEMP TABLE?

    We are trying to simplify our APEX applications. In doing so, we are examining the many collections we use to create tabular forms. These collections are currently tricky to manage, and we are considering moving them to either VIEWS or GLOBAL TEMPORARY TABLES (GTT).
    I have created a test app against a GLOBAL TEMP TABLE. It looks great, but when I add a row and SUBMIT, I receive a message indicating the record is inserted... but where did it go? I am unable to retrieve it; I cannot see it in the underlying GLOBAL TEMP TABLE (as expected)...
    Should I be creating an ON INSERT trigger on the GTT to automatically insert the data into a regular table?

    Now you know why you have to use collections :) Answered here:
    working with global temp table and apex
    Has to do with the session management and how APEX handles it.
    Create a view on your collection using the column names from the source table. Use packages to write an update, delete and insert process. This can be written automatically if there are multiple collections to handle.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.apress.com/9781430235125
    http://apex.oracle.com/pls/apex/f?p=31517:1
    http://www.amazon.de/Oracle-APEX-XE-Praxis/dp/3826655494
    -------------------------------------------------------------------

  • Urgent!  Slow Result Set -- temp table slowing me??

    -- Running BC4J/JHeadstart/UIX
    Description:
    I have a UIX page that calls a servlet and passes a TABLE_NAME. The servlet gets the TABLE_NAME and calls a class that extends oracle.jheadstart.persistence.bc4j.handler.DataSourceHandlerImpl to create a ViewObject and get the data we need. Once the ViewObject is passed back to the servlet, the servlet loops through the ViewObject and builds a report. See the problem below and the code at the bottom.
    Problem:
    I am running a query that returns approx 5000 records to my ViewObject. I then loop through the rows and construct a report. The view object returns the first 1085 records quickly; however, the following 4000 come back very slowly. I read online that BC4J creates temp tables to store large result sets and then streams the data to the user as needed. Is this our potential bottleneck?
    Questions:
    Is there a way to have it return all the rows at once? What can I do to speed this up?
    Code:
    --- Begin Servlet Snippet ---
    private ByteArrayOutputStream createReport(HttpServletRequest request) {
        try {
            // PARM_REPORT = table name
            String reportName = request.getParameter(PARM_REPORT);
            System.out.println(">> calling getReport for " + reportName);
            RdmUserHandlerImpl handler = new RdmUserHandlerImpl();
            ViewObject vo = handler.getReportView2(reportName, request.getSession().getId());
            System.out.println(">> back from get report");
            // loop through the report and print a marker every 100 rows
            int curRow = 0;
            while (vo.hasNext()) {
                vo.next();
                curRow++;
                if (curRow % 100 == 0) {
                    System.out.println(curRow);
                }
            }
    --- End Servlet Snippet ---
    --- Begin RdmUserHandlerImpl Snippet ---
    public ViewObject getReportView2(String tableName, Object sessionId) throws Exception {
        System.out.println("IN GET REPORT VIEW");
        ApplicationModule appMod = (ApplicationModule) getConnection("classpath...resource.MyUser", sessionId);
        // First see if we already created the view definition
        ViewObject vo = appMod.findViewObject(tableName);
        // If it was already created then refresh it, else try to create it
        if (vo != null) {
            System.out.println("found existing view");
            vo.reset();
        } else {
            System.out.println("view not found, making new view");
            String query = "SELECT * FROM " + tableName;
            System.out.println("QUERY = " + query);
            vo = appMod.createViewObjectFromQueryStmt(tableName, query);
            // max fetch returns -1
            System.out.println("MAX Fetch Size = " + vo.getMaxFetchSize());
        }
        return vo;
    }
    --- End RdmUserHandlerImpl Snippet ---
    Please reply asap! Deadline is coming fast!
    -Matt

    Matt,
    I think that you are right, the temporary tables created by BC4J are the reason for making it slow after a certain number of records. One of Steve Muench's articles includes the text:
    One of the most frequent performance-related questions we get on the Oracle Technet discussion forum is a question like, "After I query about a 1000 rows in a view object, my application gets very, very slow. What's happening?"
    It explains how you can turn off this feature, see the full article at http://www.oracle.com/technology/products/jdev/tips/muench/voperftips/index.html.
    The following article gives a lot of helpful information about the temporary tables:
    http://www.oracle.com/technology/products/jdev/htdocs/bc4j/bc4j_temp_tables.html
    The next article gives general tips for performance tuning of BC4J:
    http://www.oracle.com/technology/products/jdev/howtos/10g/adfbc_perf_and_tuning.html
    Hope this helps,
    Sandra Muller
    JHeadstart Team

  • MySQL temp tables or Calling Stored procedures in CS4

    I need to filter a result set by username before performing a LEFT JOIN that includes OR IS NULL rows. The SQL works from the MySQL client, either by creating a temp table using "create temporary table temp_appts select * from..." or by creating a stored procedure containing the same "create temporary table" statement, then running my LEFT JOIN statement against the temp table.
    I have tried both in CS4. It accepts the code in code view without errors, but the page loads in a browser as a blank, or as a 500 error in IE. In my experience this is caused by the script exiting before reaching the HTML header.
    Is it possible to do either in CS4? Below is the code I am using.
    $varU_temp_appts = "-1";
    if (isset($_SESSION['MM_Username'])) {
      $varU_temp_appts = $_SESSION['MM_Username'];
    }
    mysql_select_db($database_ess, $ess);
    $query_temp_appts = sprintf("CREATE TEMPORARY TABLE temp_appts SELECT * FROM appts WHERE username=%s", GetSQLValueString($varU_temp_appts, "text"));
    $temp_appts = mysql_query($query_temp_appts, $ess) or die(mysql_error());
    $row_temp_appts = mysql_fetch_assoc($temp_appts);
    $totalRows_temp_appts = mysql_num_rows($temp_appts);
    mysql_select_db($database_ess, $ess);
    $query_todays_appts = "SELECT * FROM appt_tm LEFT JOIN (temp_appts) ON appt_tm.appt_time=temp_appts.appt_hr WHERE (appt_date=CURDATE() OR appt_date IS NULL)";
    $todays_appts = mysql_query($query_todays_appts, $ess) or die(mysql_error());
    $row_todays_appts = mysql_fetch_assoc($todays_appts);
    $totalRows_todays_appts = mysql_num_rows($todays_appts);
    Any help is appreciated!

    chuck8 wrote:
    ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'appt_tm LEFT JOIN (temp_appts) ON appt_tm.appt_time=temp_appts.appt_hr where tem' at line 1
    I'm no expert on joins, but I know there were changes in MySQL 5.0.12 that resulted in queries that had previously worked failing. Go to http://dev.mysql.com/doc/refman/5.0/en/join.html, and scroll down to the section labelled Join Processing Changes in MySQL 5.0.12.
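Following that 5.0.12 note, one hedged guess at a rewrite (keeping the thread's own table and column names, and assuming appt_date lives in temp_appts) is to drop the parentheses around the single joined table and qualify the filter columns:

```sql
SELECT *
FROM   appt_tm
       LEFT JOIN temp_appts
              ON appt_tm.appt_time = temp_appts.appt_hr
WHERE  temp_appts.appt_date = CURDATE()
    OR temp_appts.appt_date IS NULL;
```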

  • What are these DR$TEMP % tables and can they be deleted?

    We are generating PL/SQL using ODMr and then periodically running the model in batch mode. Out of 26,542 objects in the DMUSER1 schema, 25,574 are DR$TEMP% tables. Should the process be cleaning itself up or is this supposed to be a manual process? Is the cleanup documented somewhere?
    Thanks

    Hi Doug,
    The only DR$ tables/indexes built are the ones generated by the Build, Apply and Test Activities. I confirmed that they are deleted in ODMr 10.2.0.3. As I noted earlier, there was a bug in ODMr 10.2.0.2 which could lead to leakage when deleting Activities. You will have DR$ tables around for existing Activities, so do not delete these without validating that they are no longer part of an existing Activity.
    You can track down the DR$ objects associated to an Activity by viewing the text step in the activity and finding the table generated for the text data. This table will have a text index created on it. The name of that text index is used as a base name for several tables which Oracle text utilizes.
    Again, all of these are deleted when you delete an Activity with ODMr 10.2.0.3.
    Thanks, Mark

  • User temp tables not getting cleaned out from tempdb?

    I'm seeing what look like temp tables hanging around in tempdb and not getting cleaned out.
    This is SQL 2008.
    SELECT * FROM tempdb.sys.objects so WHERE name LIKE '#%' AND DATEDIFF(hour, so.create_date, GETDATE()) > 12
    I'm seeing about 50 or so objects returned; they are user_table, and some of them have create dates several days in the past. Now, I know users do not run processes that take that long, so why are these objects still hanging around?
    tempdb gets reset after a server restart, but other than that, how long are these things going to hang around?

    A local temporary table's scope is only the duration of the user session or the procedure that created it. When the user logs off, or the procedure that created the table completes, the local temporary table is dropped.
    You can check the below blog for more information
    http://sqlblog.com/blogs/paul_white/archive/2010/08/14/viewing-another-session-s-temporary-table.aspx
    --Prashanth

  • SQL tune for temp table

    I am executing a query whose plan uses a system temp table (TEMP TABLE TRANSFORMATION). How could I tune it for better execution?
    Please see the execution plan below.
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 2650 | 83 (2)| 00:00:01 |
    |* 1 | COUNT STOPKEY | | | | | |
    | 2 | NESTED LOOPS | | | | | |
    | 3 | NESTED LOOPS | | 25 | 2650 | 83 (2)| 00:00:01 |
    | 4 | NESTED LOOPS | | 25 | 1900 | 81 (2)| 00:00:01 |
    | 5 | VIEW | | 603 | 28341 | 78 (2)| 00:00:01 |
    | 6 | TEMP TABLE TRANSFORMATION | | | | | |
    | 7 | LOAD AS SELECT | PRODUCTS | | | | |
    | 8 | HASH UNIQUE | | 114 | 2736 | 1435 (2)| 00:00:18 |
    |* 9 | HASH JOIN | | 114 | 2736 | 1434 (2)| 00:00:18 |
    |* 10 | TABLE ACCESS FULL | CUSTOMERS | 65 | 780 | 865 (2)| 00:00:11 |
    | 11 | TABLE ACCESS BY INDEX ROWID | ORDER_LINES | 252K| 2956K| 567 (1)| 00:00:07 |
    |* 12 | INDEX RANGE SCAN | OL_O1 | 252K| | 85 (0)| 00:00:02 |
    | 13 | SORT ORDER BY | | 603 | 81405 | 439 (3)| 00:00:06 |
    | 14 | HASH GROUP BY | | 603 | 81405 | 439 (3)| 00:00:06 |
    |* 15 | HASH JOIN RIGHT ANTI | | 603 | 81405 | 437 (2)| 00:00:06 |
    | 16 | VIEW | | 114 | 912 | 2 (0)| 00:00:01 |
    | 17 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6B20_523F10 | 114 | 912 | 2 (0)| 00:00:01 |
    |* 18 | HASH JOIN | | 604 | 76708 | 435 (2)| 00:00:06 |
    | 19 | TABLE ACCESS BY INDEX ROWID | PRODUCTS | 411 | 8220 | 6 (0)| 00:00:01 |
    |* 20 | INDEX RANGE SCAN | PR_U2 | 411 | | 1 (0)| 00:00:01 |
    |* 21 | HASH JOIN | | 2177 | 227K| 428 (2)| 00:00:06 |
    | 22 | NESTED LOOPS | | | | | |
    | 23 | NESTED LOOPS | | 7348 | 423K| 325 (2)| 00:00:04 |
    |* 24 | HASH JOIN | | 114 | 3078 | 176 (3)| 00:00:03 |
    | 25 | VIEW | | 114 | 912 | 2 (0)| 00:00:01 |
    | 26 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6B20_523F10 | 114 | 912 | 2 (0)| 00:00:01 |
    |* 27 | TABLE ACCESS FULL | CUSTOMER_CLUSTERS | 86315 | 1601K| 173 (2)| 00:00:03 |
    |* 28 | INDEX RANGE SCAN | IDX_RULES_COMM_ANTE_PROB | 64 | | 1 (0)| 00:00:01 |
    | 29 | TABLE ACCESS BY INDEX ROWID| RULES | 64 | 2048 | 1 (0)| 00:00:01 |
    |* 30 | TABLE ACCESS FULL | CLUSTER_PRODUCTS | 18418 | 863K| 103 (1)| 00:00:02 |
    | 31 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1 | 29 | 1 (0)| 00:00:01 |
    |* 32 | INDEX UNIQUE SCAN | CU_PK | 1 | | 1 (0)| 00:00:01 |
    |* 33 | INDEX UNIQUE SCAN | PR_PK | 1 | | 1 (0)| 00:00:01 |
    | 34 | TABLE ACCESS BY INDEX ROWID | PRODUCTS | 1 | 30 | 1 (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------------------------------

    Ids 17 and 26 are the two scans of the materialized temp table:
    17 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6B20_523F10 | 114 | 912 | 2 (0)| 00:00:01 |
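The TEMP TABLE TRANSFORMATION (and the SYS_TEMP_% scans highlighted above) come from a factored WITH subquery the optimizer chose to materialize. One hedged thing to test (names invented; the INLINE hint is undocumented but widely used) is forcing the subquery inline so it can be merged into the main query:

```sql
WITH cust_prod AS (
    SELECT /*+ INLINE */ DISTINCT c.cust_id, ol.product_id
    FROM   customers c
    JOIN   order_lines ol ON ol.cust_id = c.cust_id
)
SELECT cp.product_id, COUNT(*)
FROM   cust_prod cp
GROUP  BY cp.product_id;
```

Whether inlining helps depends on how many times the subquery is referenced; here it is scanned twice (Ids 17 and 26), which is exactly the situation where materializing can also be the right call, so compare both plans.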

  • Crystal Reports Temp tables - Driving Me crazy

    Our ongoing saga with temp tables and Crystal Reports continues. I am ready to go crazy, so any help would be appreciated.
    Notes:
    I create the ##tempsmall table in Query Analyzer, so it is present when the code executes.
    When I use the sa/Corinne credentials, it works.
    When I use a non-sa set of credentials, I get an error saying the temp table is not found.
    When I add the non-sa user to tempdb as a user, with DBO permissions, it works.
    Code is below
    Dim Report As New ReportDocument()
    Report.Load("C:\KI\Customers\Live Customers\Brookledge\ReportRD\CrystalReportsApplication1\Projects.rpt")
    Dim LogInfo As New TableLogOnInfo
    Dim Tbl As Table
    For Each Tbl In Report.Database.Tables
      LogInfo = Tbl.LogOnInfo
      LogInfo.ConnectionInfo.ServerName = "ocamb-vista"
      LogInfo.ConnectionInfo.DatabaseName = "mapps20"
      ' We need to figure out how to make this work with a non sa user?
      LogInfo.ConnectionInfo.UserID = "medequipuser"
      LogInfo.ConnectionInfo.Password = "mapps"
      'LogInfo.ConnectionInfo.UserID = "sa"
      'LogInfo.ConnectionInfo.Password = "corinne"
      Tbl.ApplyLogOnInfo(LogInfo)
    Next Tbl
    Report.Database.Tables.Item(0).Location = "tempdb.dbo.##tempsmall"
    Dim v As New Viewer
    v.CrystalReportViewer1.ReportSource = Report
    v.Show()

    Hi Scott,
    The problem is that tempdb is normally accessible only to administrative accounts; it is meant to be used by the system, not by end users. This is how SQL Server works: queries create work tables in tempdb for the server's own use, and those are deleted once the work is done.
    As you have seen, unless you set permissions each time for a specific user, you don't have access to it. One option is to use real tables that you populate and then delete the rows from each time. Not very efficient, though.
    Crystal respects all DB permissions, so it's not a CR issue.
    One possibility: in your front end you could use MS SQL Server APIs, if there are any, or use a stored procedure to set permissions on the temp table before calling CR to set the location. Sorry, no samples for this; it's a DB server question.
    Another option is to write the data into a DataSet and then point CR at the DataSet as its location. If you have more than 5000 rows, though, don't do this, or your reports will have performance problems.
    CR does not have any APIs that will allow it to access a table it does not have permission to reach. Basic DB security: you'll get the same logon error trying to connect to any table the user doesn't have access to.
    Thank you
    Don
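Based on Scott's note that adding the user to tempdb with DBO permissions made it work, the manual fix presumably looked something like this (login name taken from the post; note that tempdb is rebuilt at every SQL Server restart, so this would have to be re-applied then):

```sql
USE tempdb;
CREATE USER medequipuser FOR LOGIN medequipuser;
EXEC sp_addrolemember 'db_owner', 'medequipuser';  -- the "DBO permissions" step
```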

  • Partitioned view read other tables when it shouldn't? SQL Server 2008 bug?

    The query still scans all the member tables when it is supposed to scan only one, once the tables have more than, say, 10000 rows.
    The details can be found here.
    http://stackoverflow.com/questions/25691738/unreliable-partitioned-view-execution-table-scan-sql-server-bug?noredirect=1#comment40155612_25691738

    I don't see why this would be a bug.
    The optimizer has no information about what is in the table variable, since table variables do not have statistics. Therefore it comes up with a plan which makes sense in the general case, that is, when the table variable has data from both partitions. If that plan is such that a startup expression can be applied, a startup filter is added. But in the case where both tables are scanned, the optimizer has decided that the best option is to run a MERGE JOIN on the two tables.
    The situation may brighten up a bit if you instead use a temp table, since a temp table has distribution statistics.
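A minimal sketch of that swap (view and column names are invented; only the temp-table-vs-table-variable point comes from the answer):

```sql
-- Table variable: no statistics, so the optimizer plans for the general case.
DECLARE @keys TABLE (part_key int NOT NULL PRIMARY KEY);

-- Temp table: gets distribution statistics, so after the insert the
-- optimizer can see that only one partition range is actually referenced.
CREATE TABLE #keys (part_key int NOT NULL PRIMARY KEY);
INSERT INTO #keys (part_key) VALUES (1);

SELECT pv.*
FROM   dbo.PartitionedView AS pv
JOIN   #keys AS k ON k.part_key = pv.part_key;
```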
    Erland Sommarskog, SQL Server MVP, [email protected]
