Temp Tables - Best Practice

Hello,
I have a customer who uses temp tables all over their application.
The customer is a novice and the app has its roots in VB6; we are converting it to .NET.
I would really like to know the best practice for using temp tables.
I have seen code like this in the app:
CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
That seems to work, though I do not know why the full tempdb.dbo.[##...] form is required.
However, when I use this in the new report I am working on, I get runtime errors.
I also tried this:
CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
I did not get errors, but the data returned was not what I expected.
Before I delve into different ways to do this, I could use some help with a good pattern to use.
thanks

Hi Scott,
Are you still using the RDC? It's not clear, but it looks like it.
We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
The workaround is to copy the temp data into a DataSet and then set the table location to the DataSet. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the report's user, and MS SQL Server locks a temp table to the session that created it, so that user has exclusive rights on it.
Thank you
Don
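(As background to the two location strings above: in SQL Server, a single "#" prefix makes a temp table local to the session that created it, while "##" makes it global and visible to every connection, which is why a report connection can reach it at all. A minimal T-SQL sketch; the table and column names below are made up for illustration and are not from Scott's report.)

-- Local temp table: only the creating connection can see it.
CREATE TABLE #PaySheetDetail (EmployeeId INT, PayAmount DECIMAL(10, 2));

-- Global temp table: any connection can see it until the creating session
-- ends and all other sessions stop referencing it. Qualifying it as
-- tempdb.dbo.[##PaySheetDetail] is just the explicit three-part name for
-- the same object.
CREATE TABLE ##PaySheetDetail (EmployeeId INT, PayAmount DECIMAL(10, 2));
INSERT INTO ##PaySheetDetail (EmployeeId, PayAmount) VALUES (1, 100.00);

-- A separate connection (for example, the one used to fill a DataSet for
-- the report, per the workaround above) can read it directly.
SELECT EmployeeId, PayAmount FROM ##PaySheetDetail;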

Similar Messages

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates and getting them to work/appear with different dimensions. (Using the menu "More" > "Get Levels" does not always give the best solution, far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance, I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at the Detail or the Total level? I can see the use of logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes that seems like the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at the Detail or the Total level?
    It is not necessary to connect every fact table to all dimensions; it depends on the report you are creating. As a best practice, though, we should maintain them all at the Detail level when you define the join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level; otherwise use the Total level (at the Product/Employee level).
    Get Levels. (Available only for fact tables.) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Building tables - Best practices?

    Hi all,
    I hope this is a good place to ask a general question like this. I'm trying to improve myself as a DB designer/programmer and am wondering what the current practices are for deploying a database that must be kept running at the highest possible performance (as far as selecting data and keeping the database clean goes).
    Basically, here are the specific topics of concern for me:
    - table sizing
    - index sizing
    - oracle parameter tuning
    - maintenance work required to be done on tables/indexes
    The things I've studied on were all based on Oracle 8i, and I'm wondering if much has changed for 9i and/or 10g.
    Thanks.
    Peter

    Actually I'm not very new to database work, but I still do not consider myself quite proficient in certain aspects of a typical DBA's job. For that reason, I'm trying to keep my questions very general, as though I'm learning it all afresh.
    It does seem that I'm trying to ask something too broad for a forum discussion... I'll go back, do some independent study, and then come back to the forum with better questions. :)
    When looking through the 10g bug reports in Metalink, some of the issues people have been running into made me uncomfortable (it has been a while since I did the initial evaluation, and I forget which specific issues I looked at). I realize that Oracle 10g provides a lot of conveniences with the new web-based EM and EM Server (I am especially interested in the new reports and built-in automation Oracle provides), and also with grid deployments for high-availability systems, but we have been held back by many reasons from going forward with 10g at this time. Having said that, moving to 10g is still planned for the future, so I am continuing the evaluation in several aspects specific to our design to determine what we can use and/or abandon in our existing deployment processes.
    Thanks for everyone's time, best wishes.
    Peter

  • Join large external tables (best practice?)

    Hi there,
    I have three external tables in a master-detail relation: table A (10,000,000 rows) is the master of table B (20,000,000 rows), and table B is the master of table C (100,000,000 rows). Can you tell me the better way:
    - directly join the external tables, or
    - copy the external tables into regular tables, create indexes, and join those?
    Which is more efficient, and why?
    Thanks for your help and ideas.
    Gerhard.

    In general, if the joins you are doing can benefit from indexes, you will want to copy the data to database tables. If the joins will end up doing full table scans anyway, it will matter far less.
    For data integrity, you will likely also want to be able to enforce foreign key constraints.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
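    (As a rough sketch of the second option; ext_orders, ext_order_lines and the join column are placeholders, since the actual table definitions were not posted.)
    -- Copy the external tables into regular database tables.
    CREATE TABLE orders_copy AS SELECT * FROM ext_orders;
    CREATE TABLE order_lines_copy AS SELECT * FROM ext_order_lines;
    -- Index the join column on the detail side so the optimizer can use
    -- index-driven joins instead of repeated full scans of the flat files.
    CREATE INDEX order_lines_copy_ord_idx ON order_lines_copy (order_id);
    -- Optionally enforce the master-detail relationship for data integrity.
    ALTER TABLE orders_copy ADD CONSTRAINT orders_copy_pk PRIMARY KEY (order_id);
    ALTER TABLE order_lines_copy ADD CONSTRAINT order_lines_copy_fk
      FOREIGN KEY (order_id) REFERENCES orders_copy (order_id);
    -- Then join the copies instead of the external tables.
    SELECT o.order_id, l.*
      FROM orders_copy o
      JOIN order_lines_copy l ON l.order_id = o.order_id;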

  • Best practice for the same query against 2 different tables

    Hello all,
    I want to extract info about tablespaces storage, both permanent and temporary. For that I use 2 different cursors that do exactly the same query but against a different table (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, it bothers me that I have to use 2 cursors to execute the same query against 2 different tables. Is there no other way around this?
    Then I fetch the results of these cursors in 2 different loops, because I didn't find a way to call the cursors dynamically. I am looking for a best practice here, knowing that I will do the same parsing against the results of both cursors.
    Thank you,

    Hi
    Check whether the below query is helpful or not
    select fs.tablespace_name "Tablespace",
           fs.tempspace       "Temp MB",
           df.totalspace      "Total MB"
      from (select tablespace_name,
                   round(sum(bytes) / 1048576) totalspace
              from dba_data_files
             group by tablespace_name) df,
           (select tablespace_name,
                   round(sum(bytes) / 1048576) tempspace
              from dba_temp_files
             group by tablespace_name) fs
     where df.tablespace_name = fs.tablespace_name;
    Thanks
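    (On the original question of avoiding two cursors: a sketch of a single cursor that unions both dictionary views and tags each row with its source, so one fetch loop can handle both. Untested here, but it uses only the columns already selected above.)
    CURSOR tbsStorageInfo (tablespaceName VARCHAR2) IS
      SELECT 'PERMANENT' AS file_kind,
             file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_data_files
       WHERE tablespace_name = tablespaceName
      UNION ALL
      SELECT 'TEMPORARY' AS file_kind,
             file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_temp_files
       WHERE tablespace_name = tablespaceName;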

  • Best options to use in Temp Table

    Hello,
    I was just trying to figure out the best option to choose when we come across a scenario where we need to use a temp table, a table variable, or a temp table created on the fly (SELECT ... INTO).
    However, I could not see any big difference between those options. As per my understanding, a table variable is more convenient if the query logic is small and the result set is comparatively small, while creating a temp table is also an easy option but takes more time and we cannot create any indexes on it. I am working on a query optimization task in which plenty of temp tables are used and the query takes more than five minutes to execute. We have created a few indexes on a few tables and reduced the execution time to about 2 minutes. Can anyone give me more suggestions? I have gone through various articles and know that there is no single solution for this, and I am aware of the basic criteria: use SET NOCOUNT ON, order the table in which the indexes are created, do not use SELECT * but only the columns which are really required, create indexes, and so on. Other than these I am stuck with the usage of temp tables. There are some limitations on converting all the temp table logic to CTEs (I am not saying it's not possible; I really don't have time to spend on the conversion). Any suggestions are welcome.
    Actual Query
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    -- 00.00.02, 5225 rows
    With Table Variable
    DECLARE @General TABLE (Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO @General (Code, Name)
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM @General
    -- 00.00.03, 5225 rows
    With an Identity Column
    DECLARE @General TABLE (Id INT IDENTITY(1,1), Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO @General (Code, Name)
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM @General
    -- 00.00.04, 5225 rows
    With Temp Table:
    CREATE TABLE #General (Id INT IDENTITY(1,1) PRIMARY KEY, Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO #General (Code, Name)
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM #General
    DROP TABLE #General
    -- 00.00.04, 5225 rows
    With Temp Table on the Fly
    SELECT G.Code, G.Name
    INTO #General
    FROM (SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
          FROM ProductionResponse.ProductionResponse) G
    SELECT * FROM #General
    -- 00.00.04, 5225 rows

    >> I was just trying to figure out the best options we can choose with when we come across a scenario where we need to use a Temp Table/Table Variable/Create a Temp Table on the fly. <<
    Actually, we want to avoid all of those things in a declarative/functional language. The goal is to write the solution in a single statement. What you are doing is mimicking a scratch tape in a 1950's tape file system. 
    Another non-declarative technique is to use UDFs, to mimic 1950's procedural code or OO style methods. Your sample code is full of COBOL-isms! In RDBMS we follow ISO-11179 rules, so we have “<something in particular>_code” rather than just “code” like
    a field within a COBOL record. The hierarchical record structure provides context, but in RDBMS, data elements are global.  Or better, they are universal names. 
    >> I am aware of the basic criteria like use SET NO COUNT ON, Order the table in which the indexes are created, Do not use SELECT * instead use only columns which are really required, CREATE INDEXes and all.<<
    All good, but you missed others. Never use the same name for a data element (scalars) and a table (sets). Think about what things like “ProductionResponse.production_response” means. A set with one element is a bit weird, but that is what you said. Also, what
    is this response? A code? A count? It lacks what we call an attribute property. 
    This was one of the flaws we inherited when ANSI standardized SQL and we should have fixed it. Oh well, too late now. 
    Never use NVARCHAR(MAX). Why do you need to put all of the Soto Zen sutras in Chinese Unicode? When you use over-sized data elements, you eventually get garbage data. 
    >> Other than these I am stuck with the usage of temp tables. There are some limitations where I can convert all the Temp table logic to CTE (I am not saying its not possible, I really do not have time to spend for the conversion). Any suggestions are
    welcome.<<
    Yes! This is how we do declarative/functional programming! Make the effort, so the optimizer can work, so you can use parallelism and so you can port your code out of T-SQL dialect. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
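    (For illustration of that last suggestion: the table-variable and temp-table versions in the question collapse into a single statement, or a CTE when an intermediate name helps readability. A sketch reusing the objects from the question:)
    ;WITH TranslatedNames AS (
        SELECT Code,
               dbo.GetTranslatedText(Name, 'en-US') AS Name
        FROM ProductionResponse.ProductionResponse
    )
    SELECT Code, Name
    FROM TranslatedNames;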

  • What is the best practice for inserting (unique) rows into a table with a two-column unique key constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint over two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is SELECT * INTO a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
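    (A minimal sketch of the MERGE approach; FinalData, DailyCapture, Key1, Key2 and SomeValue are placeholders, since no DDL was posted. The source is de-duplicated first because the capture table has no key of its own.)
    MERGE INTO FinalData AS tgt
    USING (
        -- Collapse duplicates inside the capture table before matching.
        SELECT Key1, Key2, MAX(SomeValue) AS SomeValue
        FROM DailyCapture
        GROUP BY Key1, Key2
    ) AS src
       ON tgt.Key1 = src.Key1
      AND tgt.Key2 = src.Key2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Key1, Key2, SomeValue)
        VALUES (src.Key1, src.Key2, src.SomeValue);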

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a newer format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions
    will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database links from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed with a table for each device we sell; there are currently 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use, direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (I work as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issue starts: they are pulling their data from our server over the internet while 2,000 units are sending data to our server, and they keep getting read timeouts, since SQL Server gives priority to the inserts and holds all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stops reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance.
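    (For illustration of the single-table alternative that usually replaces a table-per-device design: the device becomes a key column and reports filter on it. The column names follow the list above; the data types, index choice, database name and row-versioning setting are assumptions, not something from this thread.)
    -- One table for all devices; the unit id is a column instead of a table name.
    CREATE TABLE dbo.DeviceMessage (
        MessageID           BIGINT IDENTITY(1,1) PRIMARY KEY,
        MessageUnit         INT           NOT NULL,   -- which device sent the row
        MessageLong         DECIMAL(9,6)  NULL,
        MessageLat          DECIMAL(9,6)  NULL,
        MessageSpeed        INT           NULL,
        MessageTime         DATETIME      NULL,
        MessageDate         DATETIME      NULL,
        MessageHeading      INT           NULL,
        MessageSatNumber    INT           NULL,
        MessageInput        INT           NULL,
        MessageCreationDate DATETIME      NOT NULL DEFAULT GETDATE(),
        MessageInput2       INT           NULL,
        MessageInput3       INT           NULL,
        MessageIO           INT           NULL
    );
    -- Monthly reports filter by unit and date range, so index on those columns.
    CREATE INDEX IX_DeviceMessage_Unit_Date
        ON dbo.DeviceMessage (MessageUnit, MessageDate, MessageTime);
    -- Row versioning lets report SELECTs read the last committed version instead
    -- of blocking behind the constant inserts; it is enabled per database, and
    -- after that the default READ COMMITTED level uses it automatically.
    ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;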

  • Best Practice: how to fetch table, view, ... names and schema

    hi,
    I am looking for the best practice for getting the catalog of a database.
    I have seen that I can run selects against system tables (or views) such as DBA_TABLES and DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask because it seems strange to me to get the table names using a simple select, but to get column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of doing an appropriate select against DBA_TAB_COLUMNS?
    cd
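    (A sketch of the pure-dictionary approach for both needs; the schema name is a placeholder, and the ALL_* variants work the same way when DBA_* privileges are not available.)
    -- Tables owned by a given schema (DBA_VIEWS gives the views).
    SELECT owner, table_name
      FROM dba_tables
     WHERE owner = 'SOME_SCHEMA';
    -- Column metadata for the same schema, without calling OCIDescribeAny().
    SELECT table_name, column_name, data_type, data_length, nullable
      FROM dba_tab_columns
     WHERE owner = 'SOME_SCHEMA'
     ORDER BY table_name, column_id;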

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when I have more fact tables and, for example, a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to create an alias of the fact/transaction table so there are 2 "copies" of the transaction table in the physical layer (one for the fact and one for the dimension table). The only bad thing is that there will then always be 2 lookups against the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. I hope this was understandable (trying to make a long story short).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure that the MVs are updated when the source is updated.
    -Domnic
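    (If the source is Oracle, the suggestion amounts to something along these lines; the table, column and MV names are placeholders standing in for the transaction table discussed above.)
    -- A materialized view exposing the transaction table's dimension-style columns,
    -- so the fact and the "dimension" no longer hit the same physical table twice.
    CREATE MATERIALIZED VIEW transaction_dim_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT transaction_id, period_key, transaction_type
      FROM transaction_fact;
    -- Refresh it whenever the source table is reloaded, as the reply notes:
    --   EXEC DBMS_MVIEW.REFRESH('TRANSACTION_DIM_MV');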

  • Best practice to define length for a varchar field of a table in SQL Server

    What is the best practice for defining the length of a varchar field in a table,
    where the field is something like RemarksByPerson: varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply...
    Dilip Patil..

    Hi Dilip,
    Varchar(n|max) is variable-length, non-Unicode character data. N defines the string length and can be a value from 1 through 8,000. Max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered plus 2 bytes. We use varchar when the sizes of the column data entries vary considerably, and if the field's data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion is, just as Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support
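    (As a concrete example of sizing to the data rather than defaulting to MAX; the table name and the 2,000-character cap are assumptions for illustration.)
    CREATE TABLE dbo.PersonRemarks (
        PersonID        INT           NOT NULL,
        -- Sized to the longest remark actually expected; only values that can
        -- exceed 8,000 bytes genuinely need VARCHAR(MAX).
        RemarksByPerson VARCHAR(2000) NOT NULL
    );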

  • Best practice to create a multi-tier (at least 3-level) table

    What is the best practice for creating a multi-tier (minimum 3 levels) table? Could anyone provide a sample structure?
    Thanks.

    Can you be more specific about what you are trying to do? What do you mean by a 3-level table?
