Is the filestream filegroup size considered part of the database size?
When calculating database size, is it necessary to include the size of the filestream filegroup?
The sp_spaceused stored procedure returns the size of the database, but it considers only the data and log files.
My question is: should we add the filestream filegroup size when calculating database size, or is there another method that already includes it?
Hi Priyanka,
Since you can get the size of the filestream files via T-SQL and the sys.database_files catalog view, I will mark that post as the answer to your question.
That way, other community members can benefit from this sharing. Thanks for your understanding.
Regards,
Sofiya Li
TechNet Community Support
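To make the answer concrete, here is a minimal sketch of the kind of T-SQL meant above (run in the context of your database; sys.database_files reports size in 8 KB units, including for FILESTREAM containers):

```sql
-- Per-file sizes from the current database, including FILESTREAM
-- containers (type_desc = 'FILESTREAM').
SELECT name,
       type_desc,                    -- ROWS, LOG, or FILESTREAM
       size * 8.0 / 1024 AS size_mb  -- size is reported in 8 KB units
FROM sys.database_files;

-- Total size including the filestream filegroup:
SELECT SUM(size * 8.0 / 1024) AS total_size_mb
FROM sys.database_files;
```

Comparing the total against sp_spaceused shows how much the filestream container contributes.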
Similar Messages
-
SQL Server 2012 Database with FileStream enabled tables
Hi,
I have some questions concerning the SQL Server 2012 FileStream feature.
In a database combining both filestream-enabled tables and non-filestream tables, it is obviously possible to specify the root path for a filestream FILEGROUP. It is also possible to create the primary data file (.mdf), several optional secondary data files (.ndf), and multiple log files.
Suppose I have two filestream-enabled tables, each of which references a couple of other (non-filestream) tables. Is it possible to put filestream filegroup1 (e.g. filestream-enabled table1) and its referenced non-filestream tables, with their data, indexes, etc., on one physical data file, and the other filestream-enabled table and its referenced tables on another physical file (.ndf)? If this is possible and recommended, how do I declare such a CREATE DATABASE statement?
For example, when keeping tables for both the non-archived and the archived state in the same database. Or is the best solution to split the two (and their referenced tables) into separate databases?
SWEDEV
Hello,
Filegroups are just containers for objects; you can, for example, split one table over several filegroups/secondary files using partitioning. A filestream table is still just a table, and you can reference any table independently of which filegroup or filestream container it lives on.
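As a hedged sketch of such a CREATE DATABASE statement (all names and paths are made up for illustration): each filestream-enabled table chooses its rowstore filegroup with ON and its filestream container with FILESTREAM_ON independently.

```sql
CREATE DATABASE ArchiveDemo
ON PRIMARY
    (NAME = N'ArchiveDemo',     FILENAME = N'D:\Data\ArchiveDemo.mdf'),
FILEGROUP CurrentData
    (NAME = N'CurrentData1',    FILENAME = N'D:\Data\CurrentData1.ndf'),
FILEGROUP ArchiveData
    (NAME = N'ArchiveData1',    FILENAME = N'D:\Data\ArchiveData1.ndf'),
FILEGROUP CurrentFS CONTAINS FILESTREAM
    (NAME = N'CurrentFS',       FILENAME = N'D:\FS\CurrentFS'),
FILEGROUP ArchiveFS CONTAINS FILESTREAM
    (NAME = N'ArchiveFS',       FILENAME = N'D:\FS\ArchiveFS')
LOG ON
    (NAME = N'ArchiveDemo_log', FILENAME = N'D:\Log\ArchiveDemo_log.ldf');
GO
USE ArchiveDemo;
GO
-- Row data goes to CurrentData, blobs to the CurrentFS container;
-- the referenced non-filestream tables can use ON CurrentData too.
CREATE TABLE dbo.CurrentDocs (
    DocId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE,
    Doc   VARBINARY(MAX) FILESTREAM
) ON CurrentData FILESTREAM_ON CurrentFS;
```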
Olaf Helper
[ Blog] [ Xing] [ MVP] -
Hello all,
I am wondering why is that when a filegroup/file is removed from the DB, it still shows up in sys.master_files and msdb.dbo.backupfile.
To reproduce, just create a database, add a filegroup or a file to an existing filegroup, and then delete that filegroup/file.
Now query sys.master_files and you can still see the deleted filegroup/file for the database. Of course the state_desc shows as 'OFFLINE', but I am wondering why it is not removed completely.
Also, if you take a backup and query the msdb.dbo.backupfile, it will show up there as well. Again, State_desc = 'Dropped' but why is it not removed completely.
Thank you and have a wonderful day.
Hope it Helps!!
Hi Andrew -- msdb.dbo.backupfile showing the file as online is correct, because the backup was taken when the filegroup was active. And I see the same thing you are seeing: when I drop the filegroup/file, I still see it in sys.master_files and in msdb.dbo.backupfile.
I tested this is on both sql2008R2 build 10.50.4000 and sql 2012 sp1.
below is the script..
CREATE DATABASE [TestFileGroup] ON PRIMARY
( NAME = N'TestFileGroup', FILENAME = N'F:\Backup\TestFileGroup.mdf' , SIZE = 4072KB , FILEGROWTH = 1024KB ),
FILEGROUP [TestFG]
( NAME = N'TestFile', FILENAME = N'F:\Backup\TestFile.ndf' , SIZE = 4072KB , FILEGROWTH= 1024KB )
LOG ON
( NAME = N'TestFileGroup_log', FILENAME = N'F:\Backup\TestFileGroup_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO
USE [TestFileGroup]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY')
ALTER DATABASE [TestFileGroup] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO
select state_desc,Name,getdate() from sys.master_files where database_id= db_ID('TestFileGroup')
BACKUP DATABASE testFileGroup TO DISK = N'F:\Backup\TestFG.bak' with init,Format
USE [TestFileGroup]
GO
ALTER DATABASE [TestFileGroup] REMOVE FILE [TestFile]
GO
ALTER DATABASE [TestFileGroup] REMOVE FILEGROUP [TestFG]
GO
select State_desc,Name,getdate() from sys.master_files where database_id= db_ID('TestFileGroup')
BACKUP DATABASE testFileGroup TO DISK = N'F:\Backup\TestFG.bak' with init,Format
select A.Backup_set_id,A.State_desc,logical_name,Database_Name,getdate() from msdb.dbo.backupfile A Inner join msdb.dbo.backupset B on B.backup_set_id=A.backup_set_id
where database_Name = 'TestFileGroup' and A.backup_set_id = (Select max(backup_set_id) from msdb.dbo.backupset where database_Name = 'TestFileGroup' )
The above query also returned the 'DROPPED' filegroup. (I would upload the screenshot, but it errors saying I can upload only 2 images.)
I tried querying sys.master_files, and took a backup and queried msdb.dbo.backupfile after restarting the instance, and I still see the same thing...
Hope it Helps!! -
Delete an empty table partition filegroup.
create partition function pfOrders(int)
as range left
for values(34);
create partition scheme psOrders
as partition pfOrders
to (FG_34,FG_65)
go
I used the above code to create the partition function and scheme. I switched the FG_65 filegroup data out to another table. Now I have to delete the filegroup FG_65 completely. I can delete the file successfully, as it does not have any data, but because the partition scheme is still using the FG_65 filegroup, I was unable to drop it.
I am not quite sure how I can use SPLIT or MERGE here. If I merge on 34, then the first partition can be deleted, but I want to delete the last partition.
USE [master]
GO
DROP DATABASE PartitionMaintTest
GO
CREATE DATABASE PartitionMaintTest
GO
ALTER DATABASE PartitionMaintTest SET RECOVERY SIMPLE
GO
ALTER DATABASE [PartitionMaintTest] ADD FILEGROUP [FACT_EMPTY]
GO
ALTER DATABASE [PartitionMaintTest] ADD FILEGROUP [FACT_2008_M10]
ALTER DATABASE [PartitionMaintTest] ADD FILE (
NAME = N'FACT_EMPTY',
FILENAME = N'e:\PartitionMaintTest_FACT_EMPTY.ndf' ,
SIZE = 512KB , FILEGROWTH = 512KB ) TO FILEGROUP [FACT_EMPTY]
GO
ALTER DATABASE [PartitionMaintTest] ADD FILE (
NAME = N'FACT_2008_M10_01',
FILENAME = N'e:\PartitionMaintTest_FACT_2008_M10_01.ndf' ,
SIZE = 512KB , FILEGROWTH = 512KB ) TO FILEGROUP [FACT_2008_M10]
USE [PartitionMaintTest]
GO
/****** Object: PartitionFunction [pf_FACT_DATA_DATE] Script Date: 11/10/2010 20:45:07 ******/
CREATE PARTITION FUNCTION [pf_FACT_DATA_DATE](datetime) AS RANGE RIGHT FOR
VALUES ( N'2008-10-01')
GO
/****** Object: PartitionScheme [ps_FACT_DATA_DATE] Script Date: 11/10/2010 20:45:29 ******/
CREATE PARTITION SCHEME [ps_FACT_DATA_DATE] AS PARTITION [pf_FACT_DATA_DATE] TO (
[FACT_EMPTY], [FACT_2008_M10])
CREATE TABLE Orders (
OrderCloseDate datetime not null,
OrderNum int not null,
[Status] char(2) null,
CustomerID int not null)
ON ps_FACT_DATA_DATE (OrderCloseDate)
GO
CREATE INDEX IX_Orders_CustomerID
ON Orders (OrderCloseDate, CustomerID)
ON ps_FACT_DATA_DATE (OrderCloseDate)
GO
-- Insert Sample Data
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('09/10/2008', 1, 'AE', 12288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('10/10/2008', 2, 'AE', 12288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('11/10/2008', 3, 'AE', 12388)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('12/10/2008', 4, 'AE', 12488)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('01/10/2009', 5, 'AE', 12588)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('02/10/2009', 6, 'AE', 12688)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('03/10/2009', 7, 'AE', 12788)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('04/10/2009', 8, 'AE', 12888)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('05/10/2009', 9, 'AE', 12988)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('06/10/2009', 10, 'AE', 12088)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('07/10/2009', 11, 'AE', 11288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('08/10/2009', 12, 'AE', 12288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('09/10/2009', 13, 'AE', 13288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('10/10/2009', 14, 'AE', 14288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('11/10/2009', 15, 'AE', 15288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('12/10/2009', 16, 'AE', 16288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('01/10/2010', 17, 'AE', 17288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('02/10/2010', 18, 'AE', 18288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('03/10/2010', 19, 'AE', 19288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('04/10/2010', 20, 'AE', 12288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('05/10/2010', 21, 'AE', 32288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('06/10/2010', 22, 'AE', 52288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('07/10/2010', 23, 'AE', 62288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('08/10/2010', 24, 'AE', 92288)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('09/10/2010', 25, 'AE', 12283)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('10/10/2010', 26, 'AE', 12284)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('11/10/2010', 27, 'AE', 12285)
GO
INSERT INTO Orders (OrderCloseDate, OrderNum, [Status], CustomerID) VALUES ('12/10/2010', 28, 'AE', 12286)
GO
SELECT
DB_NAME() AS 'DatabaseName'
,OBJECT_NAME(p.OBJECT_ID) AS 'TableName'
,p.index_id AS 'IndexId'
,CASE
WHEN p.index_id = 0 THEN 'HEAP'
ELSE i.name
END AS 'IndexName'
,p.partition_number AS 'PartitionNumber'
,prv_left.value AS 'LowerBoundary'
,prv_right.value AS 'UpperBoundary'
,ps.name as PartitionScheme
,pf.name as PartitionFunction
,CASE
WHEN fg.name IS NULL THEN ds.name
ELSE fg.name
END AS 'FileGroupName'
,CAST(p.used_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'UsedPages_MB'
,CAST(p.in_row_data_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'DataPages_MB'
,CAST(p.reserved_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'ReservedPages_MB'
,CASE
WHEN p.index_id IN (0,1) THEN p.row_count
ELSE 0
END AS 'RowCount'
,CASE
WHEN p.index_id IN (0,1) THEN 'data'
ELSE 'index'
END 'Type'
FROM sys.dm_db_partition_stats p
INNER JOIN sys.indexes i
ON i.OBJECT_ID = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.data_spaces ds
ON ds.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_functions pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
LEFT OUTER JOIN sys.filegroups fg
ON fg.data_space_id = dds.data_space_id
LEFT OUTER JOIN sys.partition_range_values prv_right
ON prv_right.function_id = ps.function_id
AND prv_right.boundary_id = p.partition_number
LEFT OUTER JOIN sys.partition_range_values prv_left
ON prv_left.function_id = ps.function_id
AND prv_left.boundary_id = p.partition_number - 1
WHERE
OBJECTPROPERTY(p.OBJECT_ID, 'IsMSShipped') = 0
AND p.index_id IN (0,1)
ALTER PARTITION FUNCTION [pf_FACT_DATA_DATE]()
MERGE RANGE('2008-10-01 00:00:00.000')
ALTER PARTITION SCHEME [ps_FACT_DATA_DATE]
NEXT USED [FACT_EMPTY]
SELECT
DB_NAME() AS 'DatabaseName'
,OBJECT_NAME(p.OBJECT_ID) AS 'TableName'
,p.index_id AS 'IndexId'
,CASE
WHEN p.index_id = 0 THEN 'HEAP'
ELSE i.name
END AS 'IndexName'
,p.partition_number AS 'PartitionNumber'
,prv_left.value AS 'LowerBoundary'
,prv_right.value AS 'UpperBoundary'
,ps.name as PartitionScheme
,pf.name as PartitionFunction
,CASE
WHEN fg.name IS NULL THEN ds.name
ELSE fg.name
END AS 'FileGroupName'
,CAST(p.used_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'UsedPages_MB'
,CAST(p.in_row_data_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'DataPages_MB'
,CAST(p.reserved_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'ReservedPages_MB'
,CASE
WHEN p.index_id IN (0,1) THEN p.row_count
ELSE 0
END AS 'RowCount'
,CASE
WHEN p.index_id IN (0,1) THEN 'data'
ELSE 'index'
END 'Type'
FROM sys.dm_db_partition_stats p
INNER JOIN sys.indexes i
ON i.OBJECT_ID = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.data_spaces ds
ON ds.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_functions pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
LEFT OUTER JOIN sys.filegroups fg
ON fg.data_space_id = dds.data_space_id
LEFT OUTER JOIN sys.partition_range_values prv_right
ON prv_right.function_id = ps.function_id
AND prv_right.boundary_id = p.partition_number
LEFT OUTER JOIN sys.partition_range_values prv_left
ON prv_left.function_id = ps.function_id
AND prv_left.boundary_id = p.partition_number - 1
WHERE
OBJECTPROPERTY(p.OBJECT_ID, 'IsMSShipped') = 0
AND p.index_id IN (0,1)
GO
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
-
Error creating a file in filestream folder
So, we have a filestream table that we have been using, copying a significant number of image files into it over the past month (about 6 million). So far the copying has been going well, but we have run into a problem for which I cannot find an explanation or cure.
When I try to create a new folder, I am getting the following message:
An unexpected error is keeping you from creating the folder. If you continue to receive this error, you can use the error code to search for help with this problem.
Error 0x8007013D: The system cannot find message text for message number 0x%1 in the message file for %2
Any thoughts?
Hi Mark Anthony Erwin,
Usually, to create a new folder to store FILESTREAM data, you enable the xp_cmdshell feature on SQL Server, create a FILESTREAM-enabled database, and create a table with FILESTREAM columns to store the data. Once the FILESTREAM table is created successfully, you can insert files into it, for example via the OPENROWSET function. For more information, see:
http://www.mssqltips.com/sqlservertip/1850/using-insert-update-and-delete-to-manage-sql-server-filestream-data/.
According to your description, you want to create a new folder in the FILESTREAM folder, so you should check whether xp_cmdshell is configured.
For existing databases, you can use the ALTER DATABASE statement to add a FILESTREAM filegroup and then a container for it:
ALTER DATABASE [FileStreamDataBase]
ADD FILEGROUP FileStreamGroup CONTAINS FILESTREAM
GO
ALTER DATABASE [FileStreamDataBase]
ADD FILE (NAME = N'FileStreamDB_FSData2', FILENAME = N'C:\Filestream\FileStreamData2')
TO FILEGROUP FileStreamGroup
GO
Regards,
Sofiya Li
TechNet Community Support -
We have multiple terabytes of ultrasound images stored on a NAS. One of our clinical users wants to bring in a new COTS application to manage them using SQL Server 2008 R2 Enterprise. I'm seeing out in the blogosphere that: 1) it can't be done, only local storage is allowed; 2) it can be done, but performance stinks; 3) clean up your data and store it locally... I won't respond to 3), other than to say that's not going to happen. I don't believe 1), because someone did 2) and didn't like the results.
I've read the White-paper by Paul Randal, and I've emailed him about this exact issue. He doesn't know the answer either, so I'm looking for new information on how to do this. I would like to know the exact technique to enable SQL Server 2008
R2 Enterprise to utilize the FILESTREAM RBS Provider to access a NAS. The crux of the problem is when we add the file to the FILESTREAM filegroup with the statement:
ALTER DATABASE <database name> ADD FILE (Name = FSGroup1, FILENAME = '<PATH\FSData>') TO FILEGROUP FileStreamGroup1;
For the PATH, do we refer to a UNC, or map a drive? Do we need to enable Kerberos authentication when we open SMB port 445? Can we stick to native Microsoft products for RBS, or is there a 3rd-party product that would be better suited to this purpose? Any advice or recommendations would be appreciated.
Thanks!
Brandon Forest
SQL Server DBA
UC Davis Medical Center - IT
Brandon Forest, SQL Server DBA
Hi Banyardi,
Based on my research, filestream data can't live on a network-attached storage (NAS) device unless the NAS device is presented as a local volume, for example via iSCSI; with iSCSI, it is supported by the Microsoft FILESTREAM provider.
Accessing the filestream uses the server message block (SMB) protocol, so if you're going to allow file-I/O-style access from outside the machine where SQL Server is installed, you must allow access to the SMB port (usually port 445, with port 139 as a fallback) through the firewall. It is better to use Kerberos authentication in this case.
Reference:
Description of support for network database files in SQL Server
Programming with FileStreams in SQL Server 2008
Thanks,
Lydia Zhang
TechNet Community Support -
Hello
My code is
Use DBInMemoryOLTP
Go
ALTER DATABASE DBInMemoryOLTP ADD FILEGROUP [DBInMemoryOLTP_data]
CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE DBInMemoryOLTP ADD FILE (
NAME = [DBInMemoryOLTP_FG1],
FILENAME = 'c:\database\DBInMemoryOLTP_FG1'
) TO FILEGROUP [DBInMemoryOLTP_data]
GO
ALTER DATABASE DBInMemoryOLTP ADD FILE (
NAME = [DBInMemoryOLTP_FG2],
FILENAME = 'c:\database\DBInMemoryOLTP_FG2'
) TO FILEGROUP [DBInMemoryOLTP_data]
GO
Error message
Msg 10797, Level 15, State 2, Line 27
Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per database.
Msg 5170, Level 16, State 2, Line 36
Cannot create file 'c:\database\DBInMemoryOLTP_FG2' because it already exists. Change the file path or the file name, and retry the operation.
Now what to do?
Part 1 of the error:
Msg 10797, Level 15, State 2, Line 27
Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per database.
Ans: You can only create one memory-optimized filegroup per database. You need to explicitly mark the filegroup as containing memory_optimized_data.
You can create the filegroup when you create the database or you can add it later. For more :
http://msdn.microsoft.com/en-us/library/dn639109.aspx
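Both errors suggest leftovers from a previous run of the same script rather than a problem with the script's shape; as a hedged sketch, guarding the statements makes the script re-runnable (names taken from the post above):

```sql
-- Add the single allowed MEMORY_OPTIMIZED_DATA filegroup only if it
-- does not exist yet (error 10797 fires on a second attempt).
IF NOT EXISTS (SELECT 1 FROM sys.filegroups
               WHERE name = 'DBInMemoryOLTP_data')
    ALTER DATABASE DBInMemoryOLTP
    ADD FILEGROUP [DBInMemoryOLTP_data] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
-- Each container needs a path that does not already exist on disk
-- (error 5170 fires if the folder is left over from an earlier run).
IF NOT EXISTS (SELECT 1 FROM sys.database_files
               WHERE name = 'DBInMemoryOLTP_FG2')
    ALTER DATABASE DBInMemoryOLTP ADD FILE (
        NAME = [DBInMemoryOLTP_FG2],
        FILENAME = 'c:\database\DBInMemoryOLTP_FG2'
    ) TO FILEGROUP [DBInMemoryOLTP_data];
GO
```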
Part 2 of the error:
That Initial Catalog part was the problem, and removing it solved the problem.
Ahsan Kabir
Please remember to click Mark as Answer and Vote as Helpful on posts that help you. This can be beneficial to other community members reading the thread. http://www.aktechforum.blogspot.com/ -
Oracle Database Character set and DRM
Hi,
I see the below context in the Hyperion EPM installation document.
We need to install only Hyperion DRM, not the entire Hyperion product suite. Do we really have to create the database in one of the UTF-8 character sets?
Why does it say that we must create the database this way?
Any help is appreciated.
Oracle Database Creation Considerations:
The database must be created using Unicode Transformation Format UTF-8 encoding
(character set). Oracle supports the following character sets with UTF-8 encoding:
- AL32UTF8 (UTF-8 encoding for ASCII platforms)
- UTF8 (backward-compatible encoding for Oracle)
- UTFE (UTF-8 encoding for EBCDIC platforms)
Note: The UTF-8 character set must be applied to the client and to the Oracle database.
Edited by: 851266 on Apr 11, 2011 12:01 AM
Srini,
Thanks for your reply.
I would assume that the ConvertToClob function would understand the byte order mark for UTF-8 in the blob and not include any parts of it in the clob. The byte order mark for UTF-8 consists of the byte sequence EF BB BF. The last byte, BF, corresponds to the upside-down question mark '¿' in ISO-8859-1. To me, it seems as if ConvertToClob is not converting correctly.
Am I missing something?
BTW, the database version is 10.2.0.3 on Solaris 10 x86_64
Kind Regards,
Eyðun
Edited by: Eyðun E. Jacobsen on Apr 24, 2009 8:26 PM -
Database 9. Generate SQL
Dear people,
I have inherited a database which is running version 9.
I would like to generate the SQL file(s) that could be used to build the database (rather than having to use Enterprise Manager).
With SQL Server this is possible, and to my great horror an Oracle-certified DBA told me it's not possible. One apparently has the option when building a database with the "configuration assistant", but I already have the DB.
Would appreciate any replies on this matter.
Kind regards,
Ben Bookey.
Ben:
Your DBA is partly correct. There is no easy way to generate the scripts required to re-build a database, but it is possible. What Oracle considers a database is a little different than what SqlServer considers a database, a SqlServer database is closer to an Oracle tablespace (although not exactly the same).
Manually creating an Oracle database is a five step process.
1. Create a parameter file
2. Create the base database (i.e system tablespace and Oracle's metadata tables)
3. Create rollback segments
4. Create additional tablespaces for user data etc.
5. Run Oracle supplied catalog scripts.
You can get most of the information you need to generate the CREATE DATABASE command by running:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
then editing the resulting file.
The views DBA_TABLESPACES and DBA_DATA_FILES will give you the information you need to re-create the tablespaces.
DBA_ROLLBACK_SEGS will allow you to generate the CREATE ROLLBACK SEGMENT scripts.
If you need to re-create users and roles, then DBA_USERS, DBA_ROLES, DBA_TAB_PRIVS, DBA_ROLE_PRIVS and DBA_SYS_PRIVS can give you the required information.
Tables can be generated from DBA_TABLES and DBA_TAB_COLUMNS, indexes from DBA_INDEXES and DBA_IND_COLUMNS, and constraints from DBA_CONSTRAINTS and DBA_CONS_COLUMNS.
Triggers can be found in DBA_TRIGGERS. Other things like functions, procedures and packages can be generated from DBA_SOURCE.
As I said, not easy, but possible. There are lots of scripts out there that do some or all of this.
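As a hedged illustration of the dictionary-driven approach (column names are from the standard DBA_DATA_FILES view; the generated DDL is only a starting point and omits extent/storage clauses):

```sql
-- Rough out CREATE TABLESPACE statements from the Oracle data dictionary.
-- NOTE: a tablespace with several datafiles emits one statement per file;
-- fold the extra files into ADD DATAFILE clauses by hand.
SELECT 'CREATE TABLESPACE ' || tablespace_name ||
       ' DATAFILE ''' || file_name || '''' ||
       ' SIZE ' || bytes || ';' AS ddl
  FROM dba_data_files
 ORDER BY tablespace_name, file_name;
```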
HTH
John -
Hello,
Here is the scenario,
I have SharePoint 2013 foundation with SQl Server 2008 Standard Edition (64 bit).
I wish to setup RBS (Filestream provider) on this farm for the client.
I have followed following blogs to setup RBS,
http://www.knowledgecue.com/pdf/ConfiguringRemoteBlobStorageforSharePointKnowledgeCue.pdf
http://www.petri.com/install-configure-remote-blob-storage-rbs-sharepoint-farm.htm
I am stuck at the part where we install the RBS.msi file on the web server. This RBS.msi file runs and supposedly creates tables in the content database. I have the default content database WSS_Content and the default SQL Server instance. I have no clue why it is not creating tables in the SharePoint content database. The log file that is created is around 550 MB and shows the message "SQL Server 2008 R2 Remote Blob Store -- Installation completed successfully."
Some blogs suggest that we need to install RBS.msi on the SQL server as well. I have tried both but nothing works.
To summarize following things have been tried,
1. Install RBS_amd64.msi on web front server
2. Install RBS_amd64.msi on sql server
3. Install RBS.msi (x64 bit file) on web front end server
4. Install RBS.msi (x64 bit file) on sql server
5. Tried to use DB Instance Name as sql server name and sql service name
6. Tried creating table manually in the database and it worked (just to check if the user had appropriate rights)
7. Tried changing the path of RBS.msi file and install it
Command used to install this msi is as follows,
msiexec /qn /lvx* rbs_install_log.txt /i RBS_amd64.msi TRUSTSERVERCERTIFICATE=true FILEGROUP=PRIMARY DBNAME="Content Database" DBINSTANCE="DB instance name" FILESTREAMFILEGROUP=RBSFilestreamProvider FILESTREAMSTORENAME=FilestreamProvider_1
I am failing to understand what might be the error behind this.
If someone can help me or give me an insight that would be great. Really appreciate it!
Student For Life
And finally I got it to work; the solution is as unbelievable as it sounds.
I kept trying different things:
1. Opened the command prompt as a different user (I used to open it as administrator) and ran the same RBS_amd64.msi command.
2. It threw an error for the first time and said installation failed. It created a log file 31 KB in size.
3. So I went back, ran the command prompt as admin again, ran the same command, and it worked!
Student For Life -
Error: Partition function can only be created in Enterprise edition of SQL Server
By using the Generate Scripts option in SSMS, I've duplicated this DB seven times so far. I do this due to the 10 GB limit in SQL Express 2012. I was doing this again today: I generated the script, did a search/replace to provide a new DB name for DB number eight in the series, and then ran the script to create the DB, causing the error message. I don't remember seeing this error in the past. It's possible I created the first edition of this DB at home, but back then I only had Express edition as I seem to recall (although I did purchase Developer a few months ago).
I don't even know what the Partition function does. I'll try to look that up tonight.
SSMS did create the DB; I just hope the error message doesn't forebode any problems.
USE [master]
GO
/****** Object: Database [Year2014_Aug_To_Dec] Script Date: 07/29/2014 03:55:19 PM ******/
CREATE DATABASE [Year2014_Aug_To_Dec]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'Year2014_Aug_To_Dec', FILENAME = N'F:\FlatFilesDatabases\Year2014_Aug_To_Dec.mdf' , SIZE = 8832000KB , MAXSIZE = UNLIMITED, FILEGROWTH = 204800KB )
LOG ON
( NAME = N'Year2014_Aug_To_Dec_Log', FILENAME = N'F:\FlatFilesDatabases\Year2014_Aug_To_Dec_Log.ldf' , SIZE = 230400KB , MAXSIZE = 2048GB , FILEGROWTH = 204800KB )
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET COMPATIBILITY_LEVEL = 110
GO
IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [Year2014_Aug_To_Dec].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ANSI_NULLS OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ANSI_PADDING OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ARITHABORT OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET AUTO_CLOSE ON
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET DISABLE_BROKER
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET ALLOW_SNAPSHOT_ISOLATION OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET RECOVERY SIMPLE
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET MULTI_USER
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET DB_CHAINING OFF
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET FILESTREAM( NON_TRANSACTED_ACCESS = OFF )
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET TARGET_RECOVERY_TIME = 0 SECONDS
GO
USE [Year2014_Aug_To_Dec]
GO
/****** Object: User [NT SERVICE\MSSQL$SQLEXPRESS] Script Date: 07/29/2014 03:55:20 PM ******/
CREATE USER [NT SERVICE\MSSQL$SQLEXPRESS] FOR LOGIN [NT Service\MSSQL$SQLEXPRESS] WITH DEFAULT_SCHEMA=[NT SERVICE\MSSQL$SQLEXPRESS]
GO
/****** Object: User [NT Authority\Authenticated Users] Script Date: 07/29/2014 03:55:20 PM ******/
CREATE USER [NT Authority\Authenticated Users] FOR LOGIN [NT AUTHORITY\Authenticated Users] WITH DEFAULT_SCHEMA=[NT Authority\Authenticated Users]
GO
/****** Object: User [BUILTIN\USERS] Script Date: 07/29/2014 03:55:20 PM ******/
CREATE USER [BUILTIN\USERS] FOR LOGIN [BUILTIN\Users]
GO
/****** Object: Schema [NT Authority\Authenticated Users] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE SCHEMA [NT Authority\Authenticated Users]
GO
/****** Object: Schema [NT SERVICE\MSSQL$SQLEXPRESS] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE SCHEMA [NT SERVICE\MSSQL$SQLEXPRESS]
GO
/****** Object: FullTextCatalog [Catalog1] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE FULLTEXT CATALOG [Catalog1]WITH ACCENT_SENSITIVITY = ON
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_06A2E7C5] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_06A2E7C5](varbinary(128)) AS RANGE LEFT FOR VALUES (0x00390039003200380035, 0x006E006E0033003000320034)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_11A1FB2A] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_11A1FB2A](varbinary(128)) AS RANGE LEFT FOR VALUES (0x006100730073006F006300690061007400650073, 0x006E006E003200320032003700350037003300310030003400300035)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_171D3F63] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_171D3F63](varbinary(128)) AS RANGE LEFT FOR VALUES (0x00610072006900650078006900650074, 0x006E006E003200390035003200330033003400310030)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_1FA6CD15] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_1FA6CD15](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0063006F00720070006F0072006100740069006F006E, 0x006E006E0033003500340031003800390031)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_25DC6753] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_25DC6753](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0061007000700072006F007600650064, 0x006E006E00320033003200380035)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_2B429CF3] Script Date: 07/29/2014 03:55:21 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_2B429CF3](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0069006E006500730068006F006D)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_2D3F28A7] Script Date: 07/29/2014 03:55:22 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_2D3F28A7](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0062006F0078, 0x006E006E003200390034003900320033003000350033)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_32ED1505] Script Date: 07/29/2014 03:55:22 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_32ED1505](varbinary(128)) AS RANGE LEFT FOR VALUES (0x006100690064, 0x006E006E00330036)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_3E6129B6] Script Date: 07/29/2014 03:55:22 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_3E6129B6](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0036003600340038, 0x006C00610074006F0074, 0x006E006E00360031003800380038)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_3FC721DF] Script Date: 07/29/2014 03:55:22 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_3FC721DF](varbinary(128)) AS RANGE LEFT FOR VALUES (0x006300680075006E006B, 0x006E006E0034003300330031006400360031)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_4695B1AD] Script Date: 07/29/2014 03:55:22 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_4695B1AD](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0061006D006F0075006E0074, 0x006E006E003200370064003200330032)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_475E2206] Script Date: 07/29/2014 03:55:23 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_475E2206](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0061007200610079006B)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_52082FB0] Script Date: 07/29/2014 03:55:23 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_52082FB0](varbinary(128)) AS RANGE LEFT FOR VALUES (0x00640065007400610069006C, 0x006E006E003300300038003400320032)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_53473803] Script Date: 07/29/2014 03:55:23 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_53473803](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0061006F00730069, 0x006E006E003200350032003900340031)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_6A54BA8D] Script Date: 07/29/2014 03:55:23 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_6A54BA8D](varbinary(128)) AS RANGE LEFT FOR VALUES (0x00620061006E006B, 0x006E006E003300310064003000370032)
GO
/****** Object: PartitionFunction [ifts_comp_fragment_partition_function_7D7C9D9A] Script Date: 07/29/2014 03:55:23 PM ******/
CREATE PARTITION FUNCTION [ifts_comp_fragment_partition_function_7D7C9D9A](varbinary(128)) AS RANGE LEFT FOR VALUES (0x0063006100720072006900650072, 0x006E006E00330032003700330033)
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_06A2E7C5] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_06A2E7C5] AS PARTITION [ifts_comp_fragment_partition_function_06A2E7C5] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_11A1FB2A] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_11A1FB2A] AS PARTITION [ifts_comp_fragment_partition_function_11A1FB2A] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_171D3F63] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_171D3F63] AS PARTITION [ifts_comp_fragment_partition_function_171D3F63] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_1FA6CD15] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_1FA6CD15] AS PARTITION [ifts_comp_fragment_partition_function_1FA6CD15] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_25DC6753] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_25DC6753] AS PARTITION [ifts_comp_fragment_partition_function_25DC6753] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_2B429CF3] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_2B429CF3] AS PARTITION [ifts_comp_fragment_partition_function_2B429CF3] TO ([PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_2D3F28A7] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_2D3F28A7] AS PARTITION [ifts_comp_fragment_partition_function_2D3F28A7] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_32ED1505] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_32ED1505] AS PARTITION [ifts_comp_fragment_partition_function_32ED1505] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_3E6129B6] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_3E6129B6] AS PARTITION [ifts_comp_fragment_partition_function_3E6129B6] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_3FC721DF] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_3FC721DF] AS PARTITION [ifts_comp_fragment_partition_function_3FC721DF] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_4695B1AD] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_4695B1AD] AS PARTITION [ifts_comp_fragment_partition_function_4695B1AD] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_475E2206] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_475E2206] AS PARTITION [ifts_comp_fragment_partition_function_475E2206] TO ([PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_52082FB0] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_52082FB0] AS PARTITION [ifts_comp_fragment_partition_function_52082FB0] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_53473803] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_53473803] AS PARTITION [ifts_comp_fragment_partition_function_53473803] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_6A54BA8D] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_6A54BA8D] AS PARTITION [ifts_comp_fragment_partition_function_6A54BA8D] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: PartitionScheme [ifts_comp_fragment_data_space_7D7C9D9A] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE PARTITION SCHEME [ifts_comp_fragment_data_space_7D7C9D9A] AS PARTITION [ifts_comp_fragment_partition_function_7D7C9D9A] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
/****** Object: StoredProcedure [dbo].[Files_RecordCountLastThreeDays_ByFolder] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[Files_RecordCountLastThreeDays_ByFolder]
@ListOfFolders varchar(max)
AS
-- This query pulls only those folders that DID have at least one success (a new file added)
With FoldersWithHits AS(
SELECT COUNT(*) AS NUMFILESADDED, Value as Folder
FROM funcSplit('|', @ListOfFolders) As Folders
inner join Files on CHARINDEX(Folders.Value, Files.AGGREGATEPATH) = 1
WHERE DateAdded > DATEADD(DD, -4, GETDATE())
Group By Value
)
Select * from FoldersWithHits
Union All
-- To get a list of those folders that did NOT have any new files added,
-- reuse the first query - use the above list of successes to do an exclusion
select 0 as NumFilesAdded, Folders.Value as Folder
From funcSplit('|', @ListOfFolders) As Folders
Left Join FoldersWithHits on FoldersWithHits.Folder = Folders.Value
Where FoldersWithHits.folder is null
GO
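The procedure above can be called with a pipe-delimited folder list; a hedged usage sketch (the folder paths are illustrative, not from the original database):

```sql
-- Pass the folders to check as a single pipe-delimited string.
-- Returns one row per folder with the count of files added in the window,
-- including 0-count rows for folders with no new files.
EXEC dbo.Files_RecordCountLastThreeDays_ByFolder
     @ListOfFolders = 'C:\Scans\Inbox|C:\Scans\Archive';
```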
/****** Object: StoredProcedure [dbo].[FILES_SP_FINDTHISMOVEDFILE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE Proc [dbo].[FILES_SP_FINDTHISMOVEDFILE]
@NameOfFile varchar(2000),
@NameOfZipFile varchar(2000),
@FileSize int
As
-- Here find the zipfile by passing in the name of the zipfile as @NameOfZipFile
Select AggregatePath, 'Found ZipFile By Name' as TypeOfHit From dbo.Files where NameOfFile = @NameOfZipFile
UNION
Select AggregatePath, 'Found ZipFile By Name' as TypeOfHit From dbo.FilesNewLocations where NameOfFile = @NameOfZipFile
UNION
-- Here find the file itself (not just the zipfile) by finding two names: the filename and zipFilename.
Select AggregatePath, 'Found Filename' as TypeOfHit From dbo.FilesNewLocations where Len(NameOfZipFile) > 0 AND NameOfFile = @NameOfFile And NameOfZipFile = @NameOfZipFile
union
-- Here find the file by size
Select AGGREGATEPATH, 'Found By Size' as TypeOfHit From dbo.Files where FileSize = @FileSize ANd NameOfFile = @NameOfFile
UNION
Select AGGREGATEPATH, 'Found By Size' as TypeOfHit From dbo.FilesNewLocations where FileSize = @FileSize ANd NameOfFile = @NameOfFile
GO
Grant Execute ON dbo.Files_SP_FindThisMovedFile To [BuiltIn\Users]
GO
/****** Object: StoredProcedure [dbo].[FILES_SP_GETEPOCALIPSETEXT] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FILES_SP_GETEPOCALIPSETEXT]
@AGGREGATEPATH VARCHAR(700)
AS
SET NOCOUNT ON
SELECT F.EPOCALIPSETEXT FROM FILES AS F
WHERE F.AGGREGATEPATH = @AGGREGATEPATH
GO
/****** Object: StoredProcedure [dbo].[FILES_SP_INSERTFILE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FILES_SP_INSERTFILE]
@AGGREGATEPATH VARCHAR(4000),
@CREATIONDATE DATETIME,
@EPOCALIPSETEXT VARCHAR(MAX),
@FILEID INT OUTPUT,
@PC VARCHAR(2000),
@FILESIZE INT,
@NAMEOFFILE VARCHAR(2000),
@ZIPPED BIT,
@NAMEOFZIPFILE VARCHAR(2000)
AS
SET NOCOUNT ON
DECLARE @DATEADDED SMALLDATETIME
SELECT @DATEADDED = CONVERT(VARCHAR(12), GETDATE(), 101)
INSERT INTO DBO.FILES (DATEADDED, AGGREGATEPATH, CREATIONDATE,EPOCALIPSETEXT, PC, FILESIZE, NAMEOFFILE, ZIPPED, NAMEOFZIPFILE)
VALUES(@DATEADDED, @AGGREGATEPATH,@CREATIONDATE,@EPOCALIPSETEXT, @PC, @FILESIZE, @NAMEOFFILE, @ZIPPED, @NAMEOFZIPFILE)
SELECT @FILEID=SCOPE_IDENTITY()
GO
/****** Object: StoredProcedure [dbo].[FILES_SP_ISDUPFILE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FILES_SP_ISDUPFILE]
@AGGREGATEPATH VARCHAR(2000)
AS
SET NOCOUNT ON
SELECT FILEID FROM DBO.FILES WHERE AGGREGATEPATH= @AGGREGATEPATH
GO
/****** Object: StoredProcedure [dbo].[FILES_SP_RECORDCOUNTLASTSEVENDAYS] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[FILES_SP_RECORDCOUNTLASTSEVENDAYS]
AS
-- NOTE: despite the procedure name, the filter below covers the last 9 days.
SELECT PC, COUNT(*) AS NUMFILESADDED, CONVERT(VARCHAR(12),CONVERT(SMALLDATETIME, DATEADDED, 101), 101) AS DATEADDED FROM FILES
WHERE DATEADDED > DATEADD(DD, -9, GETDATE())
GROUP BY PC, CONVERT(SMALLDATETIME, DATEADDED, 101)
ORDER BY PC, CONVERT(SMALLDATETIME, DATEADDED, 101) DESC
GO
/****** Object: StoredProcedure [dbo].[FILESNEWLOCATIONS_SP_INSERTFILE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FILESNEWLOCATIONS_SP_INSERTFILE]
@AGGREGATEPATH VARCHAR(4000),
@CREATIONDATE DATETIME,
@FILESIZE INT,
@NAMEOFFILE VARCHAR(2000),
@NAMEOFZIPFILE VARCHAR(2000)
AS
SET NOCOUNT ON
INSERT INTO DBO.FILESNEWLOCATIONS (AGGREGATEPATH, CREATIONDATE,FILESIZE, NAMEOFFILE, NAMEOFZIPFILE)
VALUES(@AGGREGATEPATH,@CREATIONDATE,@FILESIZE, @NAMEOFFILE, @NAMEOFZIPFILE)
GRANT EXECUTE ON DBO.FILESNEWLOCATIONS_SP_INSERTFILE TO [BUILTIN\USERS]
GO
/****** Object: StoredProcedure [dbo].[FILESNEWLOCATIONS_SP_ISDUPNEWLOCATION] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FILESNEWLOCATIONS_SP_ISDUPNEWLOCATION]
@AGGREGATEPATH VARCHAR(2000)
AS
SET NOCOUNT ON
SELECT COUNT(*) FROM DBO.FILESNEWLOCATIONS WHERE AGGREGATEPATH= @AGGREGATEPATH
GO
/****** Object: StoredProcedure [dbo].[FOLDERS_SP_DELETEALLFOLDERS] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FOLDERS_SP_DELETEALLFOLDERS]
AS
SET NOCOUNT ON
DELETE FROM FOLDERS
GO
/****** Object: StoredProcedure [dbo].[FOLDERS_SP_INSERTFOLDER] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[FOLDERS_SP_INSERTFOLDER]
@THEPATH VARCHAR(4000),
@FRIENDLYNAME VARCHAR(4000)
AS
INSERT INTO FOLDERS ([PATH], FRIENDLYNAME) VALUES (@THEPATH, @FRIENDLYNAME)
GO
/****** Object: StoredProcedure [dbo].[MISC_SP_SETDBSTARTDATEANDENDDATE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[MISC_SP_SETDBSTARTDATEANDENDDATE]
@STARTDATE DATETIME,
@ENDDATE DATETIME
AS
BEGIN
DECLARE @HASDATE TINYINT
SELECT @HASDATE = COUNT(*) FROM MISC WHERE KIND LIKE 'STARTDATE'
IF @HASDATE > 0
BEGIN
UPDATE DBO.MISC
SET DATECOL =
CASE KIND
WHEN 'STARTDATE' THEN @STARTDATE
WHEN 'ENDDATE' THEN @ENDDATE
END
END
ELSE
BEGIN
INSERT INTO DBO.MISC(KIND, DATECOL) VALUES('STARTDATE', @STARTDATE)
INSERT INTO DBO.MISC(KIND, DATECOL) VALUES('ENDDATE', @ENDDATE)
END
END
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_FINDWORDFORSELECTEDFOLDERS] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[PAGES_SP_FINDWORDFORSELECTEDFOLDERS]
@KEYWORD VARCHAR(500),
@STARTDATE DATETIME,
@ENDDATE DATETIME
AS
SET NOCOUNT ON
SELECT TOP 5000 * FROM
(
SELECT P.PAGENO AS PGNO,FD.FRIENDLYNAME AS FOLDER, F.CREATIONDATE, 'PAGE' AS [TYPE], F.AGGREGATEPATH AS FULLPATH FROM
CONTAINSTABLE(PAGES, OCRTEXT, @KEYWORD) AS FULLTEXTTABLE
INNER JOIN PAGES AS P ON P.PAGEID = FULLTEXTTABLE.[KEY]
INNER JOIN FILES AS F ON F.FILEID = P.FILEID
INNER JOIN FOLDERS AS FD ON CHARINDEX(FD.PATH + '\', F.AGGREGATEPATH) = 1
WHERE F.CREATIONDATE BETWEEN @STARTDATE AND @ENDDATE
UNION ALL
SELECT NULL AS PGNO, FD.FRIENDLYNAME AS FOLDER, F.CREATIONDATE, 'FILE' AS [TYPE], F.AGGREGATEPATH AS FULLPATH FROM
CONTAINSTABLE(FILES, EPOCALIPSETEXT, @KEYWORD) AS FULLTEXTTABLE
INNER JOIN FILES AS F ON F.FILEID = FULLTEXTTABLE.[KEY]
INNER JOIN FOLDERS AS FD ON CHARINDEX(FD.PATH + '\', F.AGGREGATEPATH) = 1
WHERE F.CREATIONDATE BETWEEN @STARTDATE AND @ENDDATE
) THERESULTS
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_FINDWORDFORSELECTEDFOLDERS_V2] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[PAGES_SP_FINDWORDFORSELECTEDFOLDERS_V2]
@KEYWORD VARCHAR(500),
@STARTDATE DATETIME,
@ENDDATE DATETIME
AS
SET NOCOUNT ON
SELECT TOP 5000 * FROM
(
SELECT P.PAGENO AS PGNO,FD.FRIENDLYNAME AS FOLDER, F.CREATIONDATE, 'PAGE' AS [TYPE], F.AGGREGATEPATH AS FULLPATH, F.FILESIZE FROM
CONTAINSTABLE(PAGES, OCRTEXT, @KEYWORD) AS FULLTEXTTABLE
INNER JOIN PAGES AS P ON P.PAGEID = FULLTEXTTABLE.[KEY]
INNER JOIN FILES AS F ON F.FILEID = P.FILEID
INNER JOIN FOLDERS AS FD ON CHARINDEX(FD.PATH + '\', F.AGGREGATEPATH) = 1
WHERE F.CREATIONDATE BETWEEN @STARTDATE AND @ENDDATE
UNION ALL
SELECT NULL AS PGNO, FD.FRIENDLYNAME AS FOLDER, F.CREATIONDATE, 'FILE' AS [TYPE], F.AGGREGATEPATH AS FULLPATH, F.FILESIZE
FROM
CONTAINSTABLE(FILES, EPOCALIPSETEXT, @KEYWORD) AS FULLTEXTTABLE
INNER JOIN FILES AS F ON F.FILEID = FULLTEXTTABLE.[KEY]
INNER JOIN FOLDERS AS FD ON CHARINDEX(FD.PATH + '\', F.AGGREGATEPATH) = 1
WHERE F.CREATIONDATE BETWEEN @STARTDATE AND @ENDDATE
) THERESULTS
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_GETOCRTEXT] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[PAGES_SP_GETOCRTEXT]
@PAGENO INT,
@AGGREGATEPATH VARCHAR(700)
AS
SET NOCOUNT ON
SELECT P.OCRTEXT FROM PAGES AS P
INNER JOIN FILES AS F ON F.FILEID = P.FILEID
WHERE F.AGGREGATEPATH = @AGGREGATEPATH AND P.PAGENO = @PAGENO
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_GETOCRTEXTFORALLPAGESOFTHISFILE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[PAGES_SP_GETOCRTEXTFORALLPAGESOFTHISFILE]
@AGGREGATEPATH VARCHAR(5000)
AS
SELECT PAGES.OCRTEXT FROM PAGES
INNER JOIN FILES ON FILES.FILEID = PAGES.FILEID
WHERE FILES.AGGREGATEPATH = @AGGREGATEPATH
ORDER BY PAGENO
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_INSERTPAGE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[PAGES_SP_INSERTPAGE]
@OCRTEXT VARCHAR(MAX),
@FILEID INT,
@PAGENO INT
AS
SET NOCOUNT ON
INSERT INTO DBO.PAGES (OCRTEXT, FILEID, PAGENO) VALUES (@OCRTEXT, @FILEID, @PAGENO)
GO
/****** Object: StoredProcedure [dbo].[PAGES_SP_ISDUPPAGE] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[PAGES_SP_ISDUPPAGE]
@FILEID INT,
@PAGENO INT
AS
SET NOCOUNT ON
SELECT PAGENO FROM DBO.PAGES WHERE FILEID = @FILEID AND PAGENO = @PAGENO
GO
/****** Object: StoredProcedure [dbo].[usp_RaiseError] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_RaiseError]
@CustomMessage nvarchar(4000) = ' '
AS
-- Exit out if there is no error information to retrieve.
IF ERROR_NUMBER() IS NULL RETURN;
DECLARE
@strErrorMessage NVARCHAR(4000),
@ErrorNumber INT,
@Severity INT,
@ErrorState INT,
@Line INT,
@ProcedureName NVARCHAR(200),
@Msg nvarchar(max);
-- Capture the error details in local variables before any further
-- statement resets the ERROR_* functions.
SELECT -- SELECT assigns all of these variables in a single statement.
@ErrorNumber = ERROR_NUMBER(),
@Severity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE(),
@Line = ERROR_LINE(),
@ProcedureName = ISNULL(ERROR_PROCEDURE(), '-'),
@Msg = ERROR_MESSAGE();
-- Build the message string. The N prefix marks a Unicode string literal; each %d or %s
-- is a placeholder that RAISERROR fills with the values supplied below.
SET @strErrorMessage = @CustomMessage + N'Error %d, Severity %d, State %d, Procedure %s, Line %d, '
+ 'Message: '+ @Msg;
RAISERROR ( -- the built-in RAISERROR command: format string, severity, state, then substitution values
@strErrorMessage, -- message format string containing the %d and %s placeholders
@Severity, -- severity, required
1, -- state, required
@ErrorNumber, -- fills the first %d
@Severity, -- fills the second %d
@ErrorState, -- fills the third %d
@ProcedureName, -- fills the %s
@Line -- fills the last %d
);
GO
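A typical pattern for the error-rethrowing helper above is to call it from a CATCH block; a hedged sketch (the failing statement is illustrative only):

```sql
BEGIN TRY
    -- Any statement that might fail; this duplicate-key insert is illustrative.
    INSERT INTO DBO.FILES (DATEADDED, AGGREGATEPATH, CREATIONDATE, EPOCALIPSETEXT,
                           PC, FILESIZE, NAMEOFFILE, ZIPPED, NAMEOFZIPFILE)
    VALUES (GETDATE(), 'C:\SomeExistingPath\file.tif', GETDATE(), '', 'PC1', 0, 'file.tif', 0, '');
END TRY
BEGIN CATCH
    -- Re-raise with full context (number, severity, state, procedure, line)
    -- via the helper defined above.
    EXEC dbo.usp_RaiseError @CustomMessage = 'Insert into FILES failed. ';
END CATCH
```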
/****** Object: StoredProcedure [dbo].[usp_RebuildIndexes] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE Procedure [dbo].[usp_RebuildIndexes]
AS
Declare @fetch_TableName NVARCHAR(256)
DECLARE Cursor_Tables CURSOR FOR
SELECT Name FROM sysobjects WHERE xtype ='U'
OPEN Cursor_Tables
While 1 = 1 -- Begin to Loop through all tables
BEGIN
FETCH NEXT FROM Cursor_Tables INTO @fetch_TableName -- fetches the next table
if @@FETCH_STATUS <> 0 break
print '---------' + @fetch_TableName
Declare @fetch_indexName NVARCHAR(256) -- loops through al indexes of the current table
DECLARE Cursor_Indexes CURSOR FOR -- Looking for indexes fragmented more than 7 percent.
SELECT name as indexName
FROM sys.dm_db_index_physical_stats (DB_ID(DB_Name()), OBJECT_ID(@fetch_TableName), NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id
Where Name is not null and avg_fragmentation_in_percent > 7
OPEN Cursor_Indexes
WHILE 1= 1 -- Begin to Loop through all Indexes
BEGIN
FETCH NEXT FROM [Cursor_Indexes] INTO @fetch_indexName
if @@FETCH_STATUS <> 0 break
Declare @SqL nvarchar(2000) = N'
BEGIN TRY
ALTER INDEX ' + QUOTENAME(@fetch_indexName) + ' ON ' + QUOTENAME(DB_Name()) + '.dbo.' + QUOTENAME(@fetch_TableName) + ' Rebuild
END TRY
BEGIN CATCH
Declare @err nvarchar(2000) = ERROR_MESSAGE();
throw 51000, @err, 1
END CATCH'
Execute sp_executeSQL @sql
End -- Ends looping through all indexes
CLOSE [Cursor_Indexes]
DEALLOCATE [Cursor_Indexes]
End -- Ends looping through all tables
CLOSE Cursor_Tables
DEALLOCATE Cursor_Tables
GO
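Before running the rebuild procedure, fragmentation can be inspected directly with the same DMV the cursor relies on; a standalone sketch (the 7 percent threshold mirrors the procedure, but is otherwise arbitrary):

```sql
-- List fragmented indexes in the current database, worst first.
SELECT OBJECT_NAME(s.object_id)       AS TableName,
       i.name                         AS IndexName,
       s.avg_fragmentation_in_percent AS FragPercent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE i.name IS NOT NULL
  AND s.avg_fragmentation_in_percent > 7
ORDER BY s.avg_fragmentation_in_percent DESC;
```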
/****** Object: UserDefinedFunction [dbo].[funcSplit] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE function [dbo].[funcSplit](@splitChar varchar(1), @CSV nvarchar(max))
Returns @Results Table (Value nvarchar(max))
As
Begin
Declare @lastChar nvarchar(1) = substring(@CSV, len(@CSV), 1)
-- Make sure the string ends in the split character. If not, append it.
if @lastChar <> @splitChar set @CSV = @CSV + @splitChar
Declare @posOfComma int = 0
Declare @LastPosOfComma int = 0
While 1 = 1
Begin
Set @posOfComma = CHARINDEX(@splitChar ,@CSV, @LastPosOfComma)
if @posOfComma = 0 break
Declare @Length int = @posOfComma - @LastPosOfComma
if @Length > 0
Begin
Declare @Phrase nvarchar(max) = substring(@CSV, @LastPosOfComma, @Length)
Insert Into @Results (Value) VALUES (@Phrase)
end
set @LastPosOfComma = @posOfComma +1
if @LastPosOfComma > Len(@CSV) break
END
Return
End
GO
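The splitter above can be exercised directly with a table-valued function call; for example:

```sql
-- Returns one row per delimited value: 'alpha', 'beta', 'gamma'.
SELECT Value
FROM dbo.funcSplit('|', 'alpha|beta|gamma');
```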
/****** Object: Table [dbo].[FILES] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[FILES](
[AGGREGATEPATH] [varchar](900) NOT NULL,
[NAMEOFFILE] [varchar](300) NOT NULL,
[NAMEOFZIPFILE] [varchar](300) NOT NULL,
[FILEID] [int] IDENTITY(1,1) NOT NULL,
[CREATIONDATE] [datetime] NOT NULL,
[EPOCALIPSETEXT] [varchar](max) NOT NULL,
[DATEADDED] [datetime] NOT NULL,
[PC] [varchar](30) NOT NULL,
[FILESIZE] [int] NOT NULL,
[ZIPPED] [bit] NOT NULL,
 CONSTRAINT [PK_FILES] PRIMARY KEY CLUSTERED
(
	[FILEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
 CONSTRAINT [UQ_Files_AggregatePath] UNIQUE NONCLUSTERED
(
	[AGGREGATEPATH] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
 CONSTRAINT [UQ_Files_FileID] UNIQUE NONCLUSTERED
(
	[FILEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[FilesNewLocations] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[FilesNewLocations](
[AggregatePath] [varchar](900) NOT NULL,
[NameOfFile] [varchar](300) NOT NULL,
[NameOfZipFile] [varchar](300) NOT NULL,
[LocationID] [int] IDENTITY(1,1) NOT NULL,
[CreationDate] [datetime] NOT NULL,
[Filesize] [int] NOT NULL,
 CONSTRAINT [PK_FilesNewLocations] PRIMARY KEY CLUSTERED
(
	[LocationID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
 CONSTRAINT [UQ_FilesNew_AggregatePath] UNIQUE NONCLUSTERED
(
	[AggregatePath] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[FOLDERS] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[FOLDERS](
[FOLDERID] [int] IDENTITY(1,1) NOT NULL,
[PATH] [varchar](900) NOT NULL,
[FRIENDLYNAME] [nvarchar](500) NULL,
 CONSTRAINT [PK_Folders_Path] PRIMARY KEY CLUSTERED
(
	[PATH] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
 CONSTRAINT [UQ_Folders_FolderID] UNIQUE NONCLUSTERED
(
	[FOLDERID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[MISC] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MISC](
[BOOLEANCOL] [bit] NULL,
[KIND] [nvarchar](4000) NULL,
[STRINGCOL] [nvarchar](4000) NULL,
[DATECOL] [datetime] NULL,
[INTEGERCOL] [int] NULL,
[MISCELLANEOUSID] [int] IDENTITY(1,1) NOT NULL,
 CONSTRAINT [idx_Misc_MiscellaneousID] UNIQUE NONCLUSTERED
(
	[MISCELLANEOUSID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[PAGES] Script Date: 07/29/2014 03:55:24 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PAGES](
[OCRTEXT] [varchar](max) NULL,
[FILEID] [int] NOT NULL,
[PAGENO] [int] NOT NULL,
[PAGEID] [int] IDENTITY(1,1) NOT NULL,
 CONSTRAINT [PK_PAGES] PRIMARY KEY CLUSTERED
(
	[PAGEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
 CONSTRAINT [UQ_FILEID_PAGENO] UNIQUE NONCLUSTERED
(
	[FILEID] ASC,
	[PAGENO] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_Files_AggregatePath] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_AggregatePath] ON [dbo].[FILES]
(
	[AGGREGATEPATH] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_Files_CreationDate] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_CreationDate] ON [dbo].[FILES]
(
	[CREATIONDATE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_Files_DateAdded] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_DateAdded] ON [dbo].[FILES]
(
	[DATEADDED] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_Files_FileSize] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_FileSize] ON [dbo].[FILES]
(
	[FILESIZE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_Files_NameOfFile] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_NameOfFile] ON [dbo].[FILES]
(
	[NAMEOFFILE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_Files_NameOfZipFile] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_NameOfZipFile] ON [dbo].[FILES]
(
	[NAMEOFZIPFILE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_Files_PC] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_PC] ON [dbo].[FILES]
(
	[PC] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_Files_Zipped] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Files_Zipped] ON [dbo].[FILES]
(
	[ZIPPED] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_FilesNewLocations_AggregatePath] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_FilesNewLocations_AggregatePath] ON [dbo].[FilesNewLocations]
(
	[AggregatePath] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_FilesNewLocations_CreationDate] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_FilesNewLocations_CreationDate] ON [dbo].[FilesNewLocations]
(
	[CreationDate] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_FilesNewLocations_FileSize] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_FilesNewLocations_FileSize] ON [dbo].[FilesNewLocations]
(
	[Filesize] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_FilesNewLocations_NameOfFile] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_FilesNewLocations_NameOfFile] ON [dbo].[FilesNewLocations]
(
	[NameOfFile] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [idx_FilesNewLocations_NameOfZipFile] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_FilesNewLocations_NameOfZipFile] ON [dbo].[FilesNewLocations]
(
	[NameOfZipFile] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [idx_Pages_FileID] Script Date: 07/29/2014 03:55:24 PM ******/
CREATE NONCLUSTERED INDEX [idx_Pages_FileID] ON [dbo].[PAGES]
(
	[FILEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FILES] ADD DEFAULT ('') FOR [NAMEOFZIPFILE]
GO
ALTER TABLE [dbo].[FILES] ADD DEFAULT ('') FOR [EPOCALIPSETEXT]
GO
ALTER TABLE [dbo].[FILES] ADD DEFAULT ((0)) FOR [ZIPPED]
GO
USE [master]
GO
ALTER DATABASE [Year2014_Aug_To_Dec] SET READ_WRITE
GO
A partition function is used when you partition a table. Partitioned tables are a feature that is available only in Enterprise and Developer Edition.
I went through the script, and there are a number of partition functions and partition schemes, but they are not used anywhere, so you should be able to ignore the error.
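Whether any table or index actually uses those partition schemes can be checked with the standard catalog views; a hedged sketch:

```sql
-- Partition schemes with no index allocated on them are unused and can be ignored.
SELECT ps.name AS UnusedPartitionScheme
FROM sys.partition_schemes AS ps
WHERE NOT EXISTS (SELECT 1
                  FROM sys.indexes AS i
                  WHERE i.data_space_id = ps.data_space_id);
```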
Erland Sommarskog, SQL Server MVP, [email protected] -
hi experts,
Can you explain what table buffering is, why we use this concept, under which circumstances we need it, and what the consequences are if we don't?
regards,
mani
Hi,
You can find the same in SAP Help, but I don't know the link.
SAP Buffering
The SAP database interface enables storage of database tables in local buffers, i.e. the buffers reside locally on each application server of the system. Buffering is especially important in a client/server environment because the access time using the network is much greater than the access time to a locally buffered table.
The flag Buffering allowed must be set in the ABAP/4 Dictionary in order that a table be buffered. The buffering type must also be maintained in the technical settings of the table. Setting the flag Buffering allowed alone does not cause the tables to be buffered!
Whether or not it makes sense to buffer a table depends on the type of access to the table.
The buffering type defines how the table should be buffered.
There are the following 3 buffering types:
X full buffering
P single record (partial) buffering
G generic buffering
no entry no buffering
For generic buffering, a number of key fields between 1 and (number of key fields - 1) must be defined as the generic key.
X full buffering
Full buffering
With full buffering, either the complete table or none of the table is in the buffer. If a read access is made to a record, all records of the table are transferred to the buffer.
When should you select full buffering?
For tables up to 30 KB in size. If a table is accessed frequently, but all accesses are read accesses, this value can be exceeded.
For larger tables where large numbers of records are frequently accessed. However, if the application program is able to formulate an extremely selective WHERE condition using a database index, it may be advisable to dispense with full buffering.
For tables with frequent accesses to data not contained in the table. Since all records are contained in the buffer, a quick decision can be made as to whether or not the table contains a record for a specific key.
When considering whether a table should be fully buffered, you should take three aspects into account: the size of the table, the number of read accesses, and the number of write accesses. Tables best suited to full buffering are small, frequently read, and rarely updated.
P single record (partial) buffering
Single-record buffering
With this kind of buffering, only the records of a table which are actually accessed are loaded into the buffer.
This kind of buffering requires less storage space in the buffer than full buffering. However, greater organization is necessary and considerably more database accesses are necessary for loading.
If an as yet unbuffered record is accessed with SELECT SINGLE, a database access occurs to load the record. If the table does not contain a record for the specified key ('no record found'), this record is noted as nonexistent in the buffer. If a further attempt is made to access this record, a renewed database access can be avoided.
When should single-record buffering be selected?
For large tables where there are frequent single-record accesses (with SELECT SINGLE ...). The size of the records being accessed should be between 100 and 200 KB.
For comparatively small tables for which the access range is large, it is normally advisable to opt for full buffering. Only one database access is required to load such a table for full buffering, whilst single-record buffering calls for a very large number of table accesses.
Generic buffering
In a read access to a record of a generically buffered table, all the records whose left-justified part of the key (generic area) corresponds are loaded into the buffer.
If this type of buffering is selected, the generic area must be defined by specifying a number n of key fields. The first n key fields of the table then define the generic key.
The number of key fields to be entered must lie between 1 and the number of key fields -1. For example, only values between 1 and 5 are permitted for a table with 6 key fields.
When should generic buffering be selected?
A table should be buffered generically if usually only certain areas of the table are required. The individual generic areas are treated like independent tables which are fully buffered. Please also read the text about full buffering.
The generic key area should be selected so that the generic areas are not too small, to prevent too many generic areas being produced. If there are only a few records per generic area, it is more efficient to use full buffering.
Generic buffering only makes sense if the table is accessed by a specified generic key. If, when an access takes place, a field of the generic key is not supplied with a value, the buffer is ignored and the records are read directly from the database.
Language-specific tables are an example of a good use of generic buffering (with the language key field as generic key area).
Bypassing Buffer is related to the buffering settings in the technical
details of a database table. These table buffers are available on every
application server. SELECT statements on a buffered table use this table
buffer instead of processing the SQL request on the database. As a
result, using table buffering leads to performance improvements, but
only if:
- the buffered table is small
- the contents of the table don't change often.
SAP uses table buffering for a lot of their customizing tables.
Bypassing Buffer means: skip the table buffer on the application server
and process the sql-request on the database.
The table buffers are automatically synchronized with changes in the
database. However, it takes some time for the database updated to be
available in the table buffer. So if you want to be 100% sure that the
data you read is up to date, you must use the option BYPASSING BUFFER.
Please note that there are also SQL statements that implicitly perform
a BYPASSING BUFFER, for example when using a table in a JOIN statement.
regards,
Prabhu
reward if it is helpful. -
Collation finnish_swedish and Danish chars. Problem
We have a problem: the current database collation, Finnish_Swedish, doesn't recognize the Danish letter
æ and treats it as the two letters ae.
For example, grae.dk and græ.dk are the same in the eyes
of my SQL Server :)
We couldn't find any broader collation that covers all of these characters, other than a BIN collation.
Can anyone help me out with this?
Hi, we're running SQL Server 2012 SP1 with the Finnish_Swedish collation.
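A quick way to see how a given collation actually compares the two strings is to test them side by side. This is only a sketch; Danish_Norwegian_CI_AS is shown as one collation that is commonly expected to keep æ distinct from ae, but verify the behavior against your own server:

```sql
-- Compare græ against grae under two collations.
-- 'Equal' means the collation treats them as the same string.
SELECT
    CASE WHEN N'græ' = N'grae' COLLATE Finnish_Swedish_CI_AS
         THEN 'Equal' ELSE 'Different' END AS finnish_swedish,
    CASE WHEN N'græ' = N'grae' COLLATE Danish_Norwegian_CI_AS
         THEN 'Equal' ELSE 'Different' END AS danish_norwegian;
```

You can list all collations installed on the server with `SELECT name, description FROM sys.fn_helpcollations()` to find one that distinguishes all the characters you need. The full setup we are running is below.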
USE [master]
GO
/****** Object: Database [database] Script Date: 2014-08-11 16:58:11 ******/
CREATE DATABASE [database]
COLLATE Finnish_Swedish_CI_AS
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'database', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\database_prod_bak.mdf' , SIZE = 8919168KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%)
LOG ON
( NAME = N'database_log', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\database_prod_bak_log.ldf' , SIZE = 1193344KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO
ALTER DATABASE [database] SET COMPATIBILITY_LEVEL = 110
GO
IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [database].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO
ALTER DATABASE [database] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [database] SET ANSI_NULLS OFF
GO
ALTER DATABASE [database] SET ANSI_PADDING OFF
GO
ALTER DATABASE [database] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [database] SET ARITHABORT OFF
GO
ALTER DATABASE [database] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [database] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [database] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [database] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [database] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [database] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [database] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [database] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [database] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [database] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [database] SET DISABLE_BROKER
GO
ALTER DATABASE [database] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [database] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [database] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [database] SET ALLOW_SNAPSHOT_ISOLATION ON
GO
ALTER DATABASE [database] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [database] SET READ_COMMITTED_SNAPSHOT ON
GO
ALTER DATABASE [database] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [database] SET RECOVERY FULL
GO
ALTER DATABASE [database] SET MULTI_USER
GO
ALTER DATABASE [database] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [database] SET DB_CHAINING OFF
GO
ALTER DATABASE [database] SET FILESTREAM( NON_TRANSACTED_ACCESS = OFF )
GO
ALTER DATABASE [database] SET TARGET_RECOVERY_TIME = 0 SECONDS
GO
ALTER DATABASE [database] SET READ_WRITE
GO
USE [database]
GO
/****** Object: Table [database1].[Domain] Script Date: 2014-08-11 16:58:41 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [database1].[Domain](
[domainId] [varchar](32) NOT NULL,
[domain] [nvarchar](80) NOT NULL,
[created] [datetime] NOT NULL,
[modified] [datetime] NOT NULL,
[expires] [datetime] NOT NULL,
[state] [nvarchar](50) NOT NULL,
[createdBy] [nvarchar](255) NULL,
[modifiedBy] [nvarchar](255) NULL,
[accountID] [bigint] NULL,
[paidUntil] [datetime] NOT NULL,
[activeLifecycle] [varchar](255) NULL,
[registryCreationDate] [datetime] NULL,
[label] [nvarchar](50) NULL,
[isPrivateWhois] [bit] NOT NULL,
 CONSTRAINT [PK__Domain__4A8948710519C6AF] PRIMARY KEY CLUSTERED
(
	[domainId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
 CONSTRAINT [IX_Domain] UNIQUE NONCLUSTERED
(
	[domain] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
 UNIQUE NONCLUSTERED
(
	[domain] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [database1].[Domain] ADD DEFAULT ((0)) FOR [isPrivateWhois]
GO
ALTER TABLE [database1].[Domain] WITH CHECK ADD CONSTRAINT [FK__Domain__accountI__37FF1041] FOREIGN KEY([accountID])
REFERENCES [users].[Account] ([id])
ON UPDATE CASCADE
GO
ALTER TABLE [database1].[Domain] CHECK CONSTRAINT [FK__Domain__accountI__37FF1041]
GO
ALTER TABLE [database1].[Domain] WITH CHECK ADD CONSTRAINT [FK__Domain__activeLi__3AA90691] FOREIGN KEY([activeLifecycle])
REFERENCES [lifecycles].[DomainLifeCycleData] ([lifeCycleID])
GO
ALTER TABLE [database1].[Domain] CHECK CONSTRAINT [FK__Domain__activeLi__3AA90691]
GO -
Dear All,
I have a table with 80,104,948 records. I want to place the data evenly across 10 filegroups, so each filegroup will hold roughly 80,104,948 / 10 ≈ 8,010,495 rows.
How can I achieve this?
Mohd Sufian www.sqlship.wordpress.com Please mark the post as Answered if it helped.
Here's the full illustration using a sample table:
ALTER DATABASE DBName ADD FILEGROUP [Filegroup1]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup2]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup3]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup4]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup5]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup6]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup7]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup8]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup9]
GO
ALTER DATABASE DBName ADD FILEGROUP [Filegroup10]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data1',
FILENAME = N'<full path>\data1.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup1]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data2',
FILENAME = N'<full path>\data2.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup2]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data3',
FILENAME = N'<full path>\data3.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup3]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data4',
FILENAME = N'<full path>\data4.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup4]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data5',
FILENAME = N'<full path>\data5.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup5]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data6',
FILENAME = N'<full path>\data6.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup6]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data7',
FILENAME = N'<full path>\data7.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup7]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data8',
FILENAME = N'<full path>\data8.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup8]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data9',
FILENAME = N'<full path>\data9.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup9]
GO
ALTER DATABASE DBName
ADD FILE
(NAME = N'data10',
FILENAME = N'<full path>\data10.ndf',
SIZE = 5000MB,
MAXSIZE = 10000MB,
FILEGROWTH = 500MB)
TO FILEGROUP [Filegroup10]
GO
--create partition function
CREATE PARTITION FUNCTION BucketPartitionFN (int) AS
RANGE LEFT FOR VALUES
( 1,2,3,4,5,6,7,8,9)
--create partition scheme
CREATE PARTITION SCHEME BucketScheme AS
PARTITION BucketPartitionFN TO
[Filegroup1],
[Filegroup2],
[Filegroup3],
[Filegroup4],
[Filegroup5],
[Filegroup6],
[Filegroup7],
[Filegroup8],
[Filegroup9],
[Filegroup10]
--Now create sample table based on scheme
create table PartitionTest
(
ID int IDENTITY(1,1),
Val int,
BucketNo int
)
ON BucketScheme(BucketNo)
--populate some sample data
;WITH T1 AS (SELECT 1 N UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1),
T2 AS (SELECT 1 N FROM T1 a,T1 b),
T3 AS (SELECT 1 N FROM T2 a,T2 b),
T4 AS (SELECT 1 N FROM T3 a,T3 b),
Numbers AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Seq FROM T4)
INSERT PartitionTest (Val,BucketNo)
SELECT Seq,NTILE(10) OVER (ORDER BY Seq)
FROM Numbers
--Check the partitions where data resides with recordcount
SELECT $partition.BucketPartitionFN(BucketNo) AS PartitionNo,MIN(BucketNo) AS StartBucketNo,MAX(BucketNo) AS EndBucketNo,COUNT(*) AS RecordCount
FROM PartitionTest
GROUP BY $partition.BucketPartitionFN(BucketNo)
just replace DBName and path in above script and you will see it splits up data into 10 partitions based on Bucket value
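If you also want to confirm how much space each filegroup ends up holding (not just the row counts per partition), a rough sketch against the catalog views (LOB data can be attributed slightly differently, so treat the figures as approximate):

```sql
-- Approximate space allocated per filegroup (8 KB pages converted to MB).
SELECT  fg.name                          AS filegroup_name,
        SUM(au.total_pages) * 8 / 1024.0 AS allocated_mb
FROM sys.filegroups fg
JOIN sys.allocation_units au ON au.data_space_id = fg.data_space_id
GROUP BY fg.name
ORDER BY fg.name;
```

With NTILE(10) distributing the bucket values evenly, the ten filegroups should come out close to the same size.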
Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
Please, can anybody tell me about the buffering types?
I will give good points.
Hi rangamma,
Buffering
You must define whether and how a table is buffered in the technical settings for the table. There are three possibilities here:
1> Buffering not permitted: Table buffering is not permitted, for example because application programs always need the most recent data from the table or the table is changed too frequently.
2> Buffering permitted but not activated: Buffering is permitted from the business and technical points of view. Applications which access the table execute correctly with and without table buffering.
Whether or not table buffering will result in a gain in performance depends on the table size and access profile of the table (frequency of the different types of table access).
Table buffering is deactivated because it is not possible to know what these values will be in the customer system. If table buffering would be advantageous for the table size and access profile of the table, you can activate it in the customer system at any time.
3> Buffering activated: The table should be buffered. In this case you must specify a buffering type.
Buffering types:
1> Single-record buffering
With single-record buffering, only the records that are actually read are loaded into the buffer. Single-record buffering therefore requires less storage space in the buffer than generic and full buffering. The administrative costs in the buffer, however, are greater than for generic or full buffering, and considerably more database accesses are necessary to load the records than for the other buffering types.
When should you use single-record buffering?
Single-record buffering should be used particularly for large tables where only a few records are accessed with SELECT SINGLE. The size of the records being accessed should be between 100 and 200 KB.
Full buffering is usually more suitable for smaller tables that are accessed frequently, because only one database access is necessary to load such a table with full buffering, whereas several database accesses are necessary for single-record buffering.
2> Generic buffering
With generic buffering, when one record of the table is accessed, all the records whose generic key fields match this record are loaded into the buffer. The generic key is the left-justified part of the primary key of the table.
3> Full buffering
With full buffering, either the entire table is in the buffer or the table is not in the buffer at all. All the records of the table are loaded into the buffer when one record of the table is read.
When should you use full buffering?
When deciding whether a table should be fully buffered, you should take into account the size of the table, the number of read accesses, and the number of write accesses. Tables best suited to full buffering are small, read frequently, and rarely written.
Full buffering is recommended in the following cases:
Tables up to 30 KB in size. If a table is accessed frequently, but all accesses are read accesses, this value can be exceeded. However, you should always pay attention to the buffer utilization.
Larger tables where large numbers of records are frequently accessed.
Provide some points if it is helpful.
Rgds,
P.Nag