SQL Server partitioned tables

I'm trying to work out the most efficient way of sharing a table between different stored procedures in a thread in SQL 2005. I realise this will be addressed directly by table-valued parameters in 2008.
The constraints are that this is a concurrent thread application, i.e. at any one time there may be multiple threads using their own data. So my thoughts are:
1. A single simple permanent table with some sort of thread key field. That's fine, but at the end of the thread I want to remove this data. The number of rows to be deleted at that time might conceivably be 500,000+. So I'm a bit worried about DELETE FROM performance.
2. I could create a thread-specific table. I've done that with dynamic SQL (exec (@sql_string)). Removing the data then just involves dropping the table. But my dynamic SQL is starting to get cumbersome. I've considered accessing the data on each thread by taking a copy of the thread-specific table into a temporary table within each relevant SP. This is actually a lot more efficient than recalculating the table completely, but still not very clever.
3. Partitioned tables. If I create a partition function on the thread key, then my delete step should be pretty efficient. But then I'm not sure whether the overhead of partition function management is an issue.
My questions are:
1. Any thoughts on other approaches? I suspect my description above doesn't really explain everything - sorry!
2. What's the efficient partition management approach for this? The thread key is a single int, so I just want a simple partition function that creates a new partition for every int. And when I drop the partition I want to remove its definition from the function (I think, to avoid hitting any boundary value limits). Can this function be created as a one-off, or does it have to be altered each time I enter a new thread? It seems like this is the simplest partition scheme in the world: "Please create a new partition for every new integer value." Is this a simple approach or am I making complications?
Any thoughts appreciated.
Thanks,
- Rob.

Dynamic partitioning doesn't make much sense if the physical files aren't spread across multi-disc hardware.
There is little point in separating your data into, say, 10 data files if you only have ONE physical disc. That separation does make sense for LOG files, but if you really want a performance gain you need to have the files on different physical discs.
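
For what it's worth, here is a minimal sketch of what the per-int-key approach (option 3) would involve; all object names are hypothetical. Note that SQL Server cannot create partitions automatically per value: the function has to be ALTERed with SPLIT each time a new thread key appears, and MERGEd when the key goes away.

-- One partition per thread key (int); RANGE RIGHT so each live key lands in its own partition
CREATE PARTITION FUNCTION pf_ThreadKey (int)
    AS RANGE RIGHT FOR VALUES (1, 2, 3);

CREATE PARTITION SCHEME ps_ThreadKey
    AS PARTITION pf_ThreadKey ALL TO ([PRIMARY]);

-- On entering a new thread (key = 4): add a boundary
ALTER PARTITION SCHEME ps_ThreadKey NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_ThreadKey() SPLIT RANGE (4);

-- On finishing thread 3: switch its partition out to an empty staging
-- table (identical structure, same filegroup), drop it, remove the boundary
ALTER TABLE dbo.SharedWork
    SWITCH PARTITION $PARTITION.pf_ThreadKey(3) TO dbo.SharedWork_Stage;
DROP TABLE dbo.SharedWork_Stage;
ALTER PARTITION FUNCTION pf_ThreadKey() MERGE RANGE (3);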

Similar Messages

  • How to delete a row from a SQL Server CE Table with multiple JOINs?

    I want to delete a record from a SQL Server CE table.
    There are 3 tables: scripts, options and results. I would like to remove a record from the results table. The WHERE clause contains dynamic information which is retrieved via other queries to different tables in the same database. These queries work fine and deliver the desired data.
    The Compact server is a clone of a remote table created using the sync framework. The same query to the remote table works fine.
    The error I get is:
    There was an error parsing the query. [ Token line number = 1,Token line offset = 10,Token in error = from ]
    The code that throws the exception is as follows:
    Dim connLoc As SqlCeConnection = New SqlCeConnection(My.Settings.ConnectionString)
    connLoc.Open()
    Dim strDel As String = "Delete r from ResultsTable r inner join OptionsTable o ON o.TestName=r.TestName inner join ScriptTable c ON r.TestName=c.TestName WHERE r.TestName = '" & ds1Loc.Tables(0).Rows(0)(1) & "' AND [Index] = '" & lstIndex & "'"
    Dim cmdDel As SqlCeCommand = New SqlCeCommand
    cmdDel.CommandText = strDel
    cmdDel.Connection = connLoc
    cmdDel.ExecuteNonQuery()
    The values held in ds1Loc.Tables(0).Rows(0)(1) and lstIndex are correct, so they should not be the problem.
    I also tried using parameterised queries
    Dim strDel As String = "Delete r from [ResultsTable] r inner join [OptionsTable] o ON o.TestName=r.TestName inner join [ScriptTable] c ON r.TestName=c.TestName WHERE r.TestName = @TestName AND [Index] = @lstIndex"
    Dim cmdDel As SqlCeCommand = New SqlCeCommand
    cmdDel.CommandText = strDel
    With cmdDel.Parameters
        .Add(New SqlCeParameter("@TestName", ds1Loc.Tables(0).Rows(0)(1)))
        .Add(New SqlCeParameter("@lstIndex", lstIndex))
    End With
    cmdDel.Connection = connLoc
    cmdDel.ExecuteNonQuery()
    I have tried replacing the "=" with "IN" in the WHERE clause but this has not worked.
    Is it the join that is causing the problem? I can do a select with the same search criteria and joins from the same database.
    Also, this query works with SQL Server. Is it perhaps that SQL CE does not support DELETE the same way SQL Server 2008 does? I have been looking at this for a while now and cannot find the source of the error. Any help would be greatly appreciated.

    Hello,
    In SQL Server Compact, we can use a join in the FROM clause. The DELETE statement failure may be caused by a FOREIGN KEY constraint.
    Please refer to:
    DELETE (SQL Server Compact)
    FROM Clause (SQL Server Compact)
    Regards,
    Fanny Liu
    TechNet Community Support
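
    If the parser really is rejecting the join syntax (note the token in error is "from"), a hedged workaround is to rewrite the statement as a single-table DELETE with the joins expressed as subqueries. This is only a sketch, and it assumes the joins act purely as existence checks, as they appear to here:
    -- Same parameters as the parameterised attempt above
    DELETE FROM ResultsTable
    WHERE TestName = @TestName
      AND [Index] = @lstIndex
      AND TestName IN (SELECT TestName FROM OptionsTable)
      AND TestName IN (SELECT TestName FROM ScriptTable)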

  • INSERTING DATA INTO A SQL SERVER 2005 TABLE, WHICH HAS AN IDENTITY COLUMN

    Hi All,
    I have to insert the data into a SQL SERVER 2005 Database table.
    I am able to insert the data into a normal SQL Server table.
    When I am trying to insert the data into a SQL Server table which has an identity column (i.e. an auto-increment column in Oracle), I am getting an error saying that a value can't be inserted explicitly when IDENTITY_INSERT is set to OFF.
    Had anybody tried this??
    There are some SRs on this issue, and Oracle agreed that it is a bug. I am wondering if there is any workaround from any one of you (refer to Insert in MS-SQL database table with IDENTITY COLUMN).
    Thanks
    V Kumar

    Even I had raised an SR on this in October 2008. But I didn't get any solution for a long time, so finally I removed the identity column from the table. I can't do that now :).
    I am using 10.1.3.3.0 and MS SQL SERVER 2005. They said it works for an MS SQL SERVER table if the identity column is not a primary key, and asked me to refer to note 744735.1.
    I had followed that note, but still it is not working for me.
    But my requirement is it should work for a MS SQL SERVER 2005 table, which has identity column as primary key.
    Thanks
    V Kumar
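
    For reference, a minimal sketch of the usual SQL Server-side workaround (table and column names hypothetical); whether the Oracle gateway can wrap its insert in these statements is a separate question:
    -- Explicit values can go into an identity column only while
    -- IDENTITY_INSERT is ON for that table; an explicit column list is required
    SET IDENTITY_INSERT dbo.Target ON;
    INSERT INTO dbo.Target (Id, Name) VALUES (42, 'explicit key');
    SET IDENTITY_INSERT dbo.Target OFF;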

  • Migration from sql server 2005 tables to oracle tables.

    Hi,
    Kindly give the steps to migrate from sql server 2005 tables to oracle tables.
    Kindly advise
    Oracle database version:
    Oracle Database 10g Release 10.2.0.1.0 - Production
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE 10.2.0.1.0 Production"
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    Edited by: 873127 on Jul 18, 2011 9:46 PM

    Are you migrating or taking continual updates?
    If migrating it might be worth considering The SQLDeveloper Migration Workbench (which moves more than just data)..
    http://www.oracle.com/technetwork/database/migration/sqldevmigrationworkbench-132899.pdf
    Cheers
    David

  • Interfacing to a third-party system through a shared SQL Server DB Table

    Guys, I have been given a task of interfacing to a third-party system through a shared SQL Server database table. I had never come across such an implementation. Can anyone please let me know the methodologies involved in it?
    Thanks,
    Jack.

    This line:
    stmt.executeQuery(query);
    should be:
    stmt.executeUpdate(query);
    (executeQuery is only for statements that return a result set; INSERT, UPDATE and DELETE statements go through executeUpdate.)

  • Sql server partition parent table and reference not partition child table

     
    Hi,
    I have two tables in SQL Server 2008 R2, Parent and Child Table.  
    The Parent table has a datetime column and is partitioned monthly. There is a Child table which just refers to the Parent table through a foreign key relation.
    Is there any problem with a non-partitioned child table referring to a partitioned parent table?
    Thanks,
    Areef

    The tables will need to be offline for the operation. "Offline" here, means that you wrap the entire operation in a transaction. Ideally, this transaction would:
    1) Drop the foreign key.
    2) Use ALTER TABLE SWITCH to drop the old data.
    3) Use ALTER PARTITION FUNCTION to drop the old empty partition.
    4) Use ALTER PARTITION FUNCTION to add a new empty partition.
    5) Reapply the foreign keys WITH CHECK.
    All but the last operation are metadata-only operations (provided that you do them right). To perform the last operation, SQL Server must scan the child table and verify that all keys are present in the parent table. This can take some time for larger tables.
    During the transaction, SQL Server holds Sch-M locks on the tables, which means they are entirely inaccessible, even for queries running with NOLOCK.
    You can avoid the scan by applying the foreign key constraint WITH NOCHECK, but this can have an impact on query plans, as SQL Server will not consider the constraint trusted.
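    For concreteness, a rough T-SQL sketch of steps 1-5 (all names and boundary values are hypothetical, and the staging table must match the partition's structure and filegroup):
    BEGIN TRANSACTION;
    -- 1) Drop the foreign key
    ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;
    -- 2) Switch the oldest partition out, then discard it
    ALTER TABLE dbo.Parent SWITCH PARTITION 1 TO dbo.Parent_Stage;
    DROP TABLE dbo.Parent_Stage;
    -- 3) Remove the now-empty boundary
    ALTER PARTITION FUNCTION pf_ParentDate() MERGE RANGE ('20140101');
    -- 4) Add a new empty partition at the head
    ALTER PARTITION SCHEME ps_ParentDate NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pf_ParentDate() SPLIT RANGE ('20150201');
    -- 5) Reapply the foreign key; WITH CHECK is what forces the child-table scan
    ALTER TABLE dbo.Child WITH CHECK
        ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId)
        REFERENCES dbo.Parent (ParentId);
    COMMIT TRANSACTION;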
    An alternative which should not be entirely dismissed is to use partitioned views instead. With partitioned views, the foreign keys are not an issue, because each partition is a pair of tables with its own local foreign key.
    As for the second question: it appears to be completely pointless to partition the parent, but not the child table. Or does the child table only have rows for a smaller set of the rows in the parent?
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Sql Server Partitioning for Update

    hi ... 
    I have created a table on my SQL Server database that holds transactions, and some of the update processes against it take a long time.
    My question is: will partitioning this table be useful and decrease the update time, or will it stay the same?
    Thanks for attention

    IMO, I would never partition a table which had 10,000 rows.  You might want to if there is a good partitioning key and you expect this table to get much larger in the future. 
    In any case, on a 10,000 row table, I can't see any scenario where partitioning will significantly improve performance.
    You would be much better off either improving the indexing and/or rewriting the queries to be more efficient. 

  • Create a table in SQL Server, Export tables from Microsoft Excel to Microsoft SQL Server, Populate the created table

    Hello team,
    I have a project that I need to do, what is the best approach for each step?
    1- I have to create a table in Microsoft SQL Server.
    2- I have to import data/tables from Microsoft Excel or Access to Microsoft SQL Server. Should I use Microsoft Visual Studio to move the data from Excel or Access?
    3- I should populate the created table with the data from the imported tables.
    4- How should I add the second and third imported tables to the first table? Should I use a union query?
    After I learn these, I will bring up the code to make sure what I do is right.
    Thanks for all,
    Guity
    GGGGGNNNNN

    Hello Naomi,
    I have imported all the tables into SQL Server,
    I created a table:
    CREATE TABLE dbo.Orders
    Now I want to populate this table with the values from the imported tables; will this code take care of this task?
    INSERT INTO dbo.Orders(OrderId, OrderDate)
    SELECT OrderId, OrderDate
    FROM Sales.Orders
    UNION
    SELECT OrderId, OrderDate
    FROM Sales.Orders1
    UNION
    SELECT OrderId, OrderDate
    FROM Sales.Orders2
    If not, what is the code?
    Please advise me.
    GGGGGNNNNN
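
    One detail worth noting about the INSERT above: UNION removes duplicate rows across the three sources, which costs a sort. If every imported row should be kept as-is, UNION ALL is the likelier intent and is cheaper:
    INSERT INTO dbo.Orders (OrderId, OrderDate)
    SELECT OrderId, OrderDate FROM Sales.Orders
    UNION ALL
    SELECT OrderId, OrderDate FROM Sales.Orders1
    UNION ALL
    SELECT OrderId, OrderDate FROM Sales.Orders2;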

  • Using Oracle Heterogenous services to access sql server database table

    I have created a dblink 'POC_HS' from oracle to sql (implemented heterogeneous services) and I am able to successfully pull out data from the default database that the DSN(for sql server) is connected to.
    So this 'select * from Test@POC_HS' is working perfectly fine on the Oracle database as 'Test' table resides in the default database (which the System DSN is connected to).
    But when I do 'select * from Abc.Test@POC_HS', where the Test table resides in the 'Abc' database (which is not the default database), it throws an error as follows:
    ORA-00942: table or view does not exist [Generic Connectivity Using ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'Abc.Test'.[Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (SQL State: S0002; SQL Code: 208)
    I have also tried this 'select * from Abc.dbo.Test@POC_HS' but oracle throws this exception "ORA-00933: SQL command not properly ended".
    The dblink user and System DSN account has access to the 'Abc' database.
    Thoughts?

    Thanks for the info.
    But suppose if we have DB link 'POC_HS' where POC_HS is a DBlink between oracle servers, I can do the following -
    1. select * from Abc.Test@POC_HS
    2. select * from Def.Test@POC_HS
    where Abc and Def are the schemas which the dblink user has access to. I can execute the above perfectly fine.
    I wanted to achieve the same functionality from Oracle to SQL Server, where the database keeps changing dynamically. So according to you that's not possible, right?
    We will have to keep changing the ODBC connection to a different database, or create a new odbc/listener/tnsentry each time a query uses a different database, right?
    Edited by: 878753 on Aug 11, 2011 1:29 AM
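
    One workaround sketch, assuming you control the SQL Server side: the generic connectivity agent only sees the DSN's default database, so you can create views there that point at the other databases (view names hypothetical) and query those over the dblink:
    -- Run in the default database the DSN connects to
    CREATE VIEW dbo.Abc_Test AS SELECT * FROM Abc.dbo.Test;
    CREATE VIEW dbo.Def_Test AS SELECT * FROM Def.dbo.Test;
    -- Then, from Oracle: select * from Abc_Test@POC_HS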

  • SQL Server log table sizes

    Our SQL Server 2005 (Idm 7.1.1 (with patch 13 recently applied), running on Win2003 & Appserver 8.2) database has grown to 100GB. The repository was created with the provided create_waveset_tables.sqlserver script.
    In looking at the table sizes, the space hogs are:
    Data space:
        log       7.6G
        logattr   1.8G
        slogattr 10.3G
        syslog   38.3G
    Index space:
        log       4.3G
        logattr   4.3G
        slogattr 26.9G
        syslog    4.2G
    As far as usage goes, we have around 20K users, we do a nightly recon against AD, and have 3 daily ActiveSync processes for 3 other attribute sources. So there is a lot of potential for heavy-duty logging to occur.
    We need to do something before we run out of disk space.
    Is the level of logging tunable somehow?
    If we lh export "default" and "users", then wipe out the repo, reload the init, default and users what will we have lost besides a history of attribute updates?

    Hi,
    I just fired up my old 7.1 environment to have a look at the syslog and slogattr tables. They looked safe to delete as I could not find any "magic" rows in there. So I did a shutdown of my appserver and issued
    TRUNCATE TABLE syslog
    TRUNCATE TABLE slogattr
    from my sql tool. After restarting the appserver everything is still working nicely.
    The syslog and slogattr tables store technical information about errors, such as being unable to connect to resource A, or Active Sync against C not being properly configured. They do not store provisioning errors; those go straight to the log/logattr tables. So from my point of view it is OK to clean out the syslog and slogattr once in a while.
    But there is one thing which I think is not ok - having so many errors in the first place. Before you truncate your syslog you should run a syslog report to identify some of the problems in the environment.
    Once identified and fixed, you shouldn't have many new entries in your syslog per day. There will always be a few, network hiccups and the like, but not as many as you seem to have today.
    Regards,
    Patrick

  • How to map SharePoint list columns to SQL Server data table columns programmatically

    Hi ,
    I have one Verification List in SharePoint; that list has 10 columns. And we have SQL Server, where we have one data table, Verification_Table, with 25 columns. My requirement is to move all the list data to the SQL data table [whatever columns map from the list to the data table get stored there; the remaining columns stay NULL], programmatically, not with BCS.

    Hello,
    You can create a SQL connection and use a DataReader to read from SQL. First create a connection string and put it in the web.config file of your SharePoint web application.
    Now use the code below to read your connection string in your web part.
    SqlConnection con = new SqlConnection(System.Configuration.ConfigurationManager.AppSettings["ConnectionString"]);
    Here is link to read the data from SQL:
    http://www.akadia.com/services/dotnet_data_reader.html
    Here is one MSDN link to read SP list data:
    http://msdn.microsoft.com/en-us/library/dd490727%28v=office.12%29.aspx
    Let me know if you have any doubt
    Hemendra:Yesterday is just a memory,Tomorrow we may never see
    Please remember to mark the replies as answers if they help and unmark them if they provide no help

  • Storing the file in to the sql server 2005 table

    hi, everybody...
    In my application I need to store files.
    I have searched the net, but the only solution I found was a BLOB (Binary Large OBject).
    How do I use this BLOB technique?
    Can anyone send some sample code to explain how to insert a file and how to retrieve the inserted file from SQL SERVER 2005?
    Thanks in advance.

    Yes, you're right, those two methods are available, but I didn't understand them.
    The setBinaryStream() method accepts 3 parameters; the last parameter, as I understand it, is the size of the file as an int, right?
    Can you give some more information regarding this?
    Thanks.
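
    On the SQL Server 2005 side, a minimal sketch (table name and file path hypothetical): store the file in a varbinary(max) column, then load it either server-side or from the client.
    CREATE TABLE dbo.StoredFiles (
        FileId   int IDENTITY(1,1) PRIMARY KEY,
        FileName nvarchar(260) NOT NULL,
        FileData varbinary(max) NOT NULL
    );
    -- Server-side load straight from a file on the server's disk
    INSERT INTO dbo.StoredFiles (FileName, FileData)
    SELECT 'report.pdf', BulkColumn
    FROM OPENROWSET(BULK N'C:\temp\report.pdf', SINGLE_BLOB) AS f;
    From JDBC, PreparedStatement.setBinaryStream(parameterIndex, inputStream, length) streams a client-side file into the same varbinary(max) column.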

  • SQL Server Partition with Persisted

    Hi Guru's
    I am about to create partitions for my table and would like to know the difference between partitioning on a datetime column directly and on a persisted int column. I have put the code segments here. If you could let me know the pros and cons of both approaches w.r.t. space, performance and maintenance, that would be really great.
    Partitioning straight Datetime column
    CREATE TABLE [dbo].[Table_with_partitions](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [Username] [varchar](255) NULL,
    [CreatedDate] [datetime] NOT NULL,
     CONSTRAINT [PK_Table_with_partitions] PRIMARY KEY CLUSTERED  ([Id] ASC, [CreatedDate] ASC)
     ) ON Table_with_partitions_PartScheme ([CreatedDate])
    Partitioning with Persisted Column
    CREATE TABLE [dbo].[Table_with_partitions](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [Username] [varchar](255) NULL,
    [CreatedDate] [datetime] NOT NULL,
    [Partition_column]  AS (CONVERT([bigint],CONVERT([varchar],[CreatedDate],(112)),0)) PERSISTED,
     CONSTRAINT [PK_Table_with_partitions] PRIMARY KEY CLUSTERED  ([Id] ASC, [Partition_column] ASC)
     ) ON Table_with_partitions_PartScheme ([Partition_column])
    Thanks,
    Ganesh

    Sorry, I intended int instead of datetime; also, it is SQL 2005, so I can't go with datetime2.
    Next the reason for partition is to reduce the blocking while purging data. We plan to keep only 60 days worth of data.
    The other thing there is an index on the CreatedDate column based on which there will be queries for date ranges.
    For the above 2 reasons, I am wondering whether having a datetime column as the partition column versus a persisted int column will have any impact on the size of the table, because either the datetime or the int has to be part of the primary key.
    The persisted int column will hold 20150429, 20150430 and so on, while the application/user queries pass a datetime.
    Please share your ideas
    Ganesh
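
    For reference, a hypothetical partition function and scheme matching the persisted-int variant above (boundary values assumed; with 60-day retention, the purge would MERGE old boundaries away and SPLIT in new ones):
    CREATE PARTITION FUNCTION Table_with_partitions_PartFunc (bigint)
        AS RANGE RIGHT FOR VALUES (20150301, 20150401, 20150501);
    CREATE PARTITION SCHEME Table_with_partitions_PartScheme
        AS PARTITION Table_with_partitions_PartFunc ALL TO ([PRIMARY]);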

  • Efficient way to do the Purge / Delete activity on a SQL Server Heap table

    Hi,
    I have a huge heap table (sql 2008) on a staging database which is used to store log history for an application.
    The application is not directly using this heap table.
    The table has a Date column, and we have a purge plan to remove all records older than 1 year.
    In this scenario, which one will help us expedite the purge process: creating a clustered index or a non-clustered index?
    Of course, I am planning to use the following script in order to avoid log file bloat and get rid of the blocking.
    Can someone help in this regard by providing suggestions?

    I personally wouldn't create a clustered index on the table.  Adding a clustered index has two problems in your scenario.
    Adding a clustered index will be time consuming and resource intensive.  Talk about log file bloat...
    A clustered index will result in poorer insert performance when compared to leaving the table as a heap and adding a non-clustered index.
    I would add the non-clustered index to the table on the date column you refer to, then purge data in small batches. Although purging data in small batches might not be quite as fast as purging the data in a single batch, it won't be much slower and will allow you to have total control over your log file.
    The non-clustered index on the date column will be small since even the largest date datatypes only consume 10 bytes of space.  So for a table containing 5 billion records the non-clustered index would be only about 90 GB in size.
    As stated above you could then purge data in small batches and perform log backups between batches to control log file bloat or simply switch the database to simple recovery model.
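
    A minimal batched-purge sketch along those lines (table and column names hypothetical):
    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        -- DELETE TOP keeps each transaction, and therefore the log, small
        DELETE TOP (5000) FROM dbo.LogHistory
        WHERE LogDate < DATEADD(YEAR, -1, GETDATE());
        SET @rows = @@ROWCOUNT;
        -- take a log backup here, or run in SIMPLE recovery
    END;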

  • SQL Server system tables

    I am trying to convert a SQLServer 2000 database to Oracle 10g. One of the issues I am running into is the reference in a number of the stored procedures to system tables or procedures in the master database that SQLServer maintains:
    1. sysprocesses table: the sp accesses this table to find out session id and login time for the specific session. I am aware that session id can be mimicked by select sys_context('USERENV','SESSIONID'), but how about login time?
    2. Other procedures have references to sp_OACreate and sp_OADestroy etc. that are extended procedures in the master db. The migration workbench either does not detect them, or does not select them for conversion.
    If anybody has input on how these items can be resolved, I would really appreciate it. Thanks.

