Partitioning Advice on PRIMARY FILE GROUP

Hi,
I wonder if you can help. I would like some advice on partitioning within the PRIMARY file group. I have a lot of experience with partitioning, and in the past I have followed the best practices in documents such as
Partitioned Table and Index Strategies Using SQL Server 2008 and the
Analysis Services Performance Guide. In summary, I have followed guidance such as having one file group per partition and including the partition key as a component of the primary key of the table being
partitioned.
However, following best practice in this way can require a lot of maintenance, and the system I am performance tuning at the moment has a limited life span.
So in this context I wonder if there are any advantages to basing all partitioning on the PRIMARY file group?
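For illustration, here is a minimal sketch of what I mean by partitioning on PRIMARY (table and column names are hypothetical); the partition scheme maps every partition to the PRIMARY file group, so there are no per-partition file groups to create or maintain:
-- Minimal sketch with hypothetical names: every partition lives on PRIMARY.
CREATE PARTITION FUNCTION pf_PeriodYYYYMM (int)
AS RANGE RIGHT FOR VALUES (201406, 201407, 201408);
CREATE PARTITION SCHEME ps_PeriodYYYYMM
AS PARTITION pf_PeriodYYYYMM
ALL TO ([PRIMARY]);
CREATE TABLE dbo.FactSales
(
PeriodYYYYMM int NOT NULL,
SalesKey bigint NOT NULL,
Amount money NULL,
CONSTRAINT PK_FactSales PRIMARY KEY CLUSTERED (PeriodYYYYMM, SalesKey)
) ON ps_PeriodYYYYMM (PeriodYYYYMM);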
Kind Regards,
Kieran.
Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

Hi Erland,
Many thanks for your advice which has helped me set a wider context.
I'm sorry, I think this question is drifting onto Analysis Services even though I started out asking a question related to Partitioning Advice on PRIMARY FILE GROUP.
Your answer increases my empathy with people who are skeptical about the performance advantages of partitioning very large tables even when these very large tables are the data source of a cube.
Is it possible for a question in these exceptional circumstances to be associated with 2 forums, i.e. Transact-SQL and Analysis Services, since the scope of this question covers both areas?
I have had a lot of success in dramatically speeding up the processing of cubes with multi-billion-record fact tables in the underlying data warehouse for previous clients. In those designs, the partition key in the WHERE clause
of each cube partition matched the partition key of the underlying fact table, and each fact table had a separate physical file group for each partition. The cube, data warehouse, transaction logs, and tempdb were on separate spindles.
However, the individual partitions within the cube and within the data warehouse were not on separate spindles per partition.
A proven advantage of implementing the above design strategy is that it increases parallelism; see page 86 of the best practice document, the
Analysis Services 2008 R2 Performance Guide.
So getting back to my question in the above context: if you slice on a particular partition key value, e.g. PeriodYYYYMM = 201408, on a relational database table, will it make a difference to performance whether this table is partitioned
on the PRIMARY file group, or should the partitions each have a separate FILE GROUP?
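For concreteness, this is the sort of slicing query I mean (hypothetical table name); as far as I understand, partition elimination should apply either way, so the question is really about the physical I/O layout:
-- Hypothetical fact table partitioned on PeriodYYYYMM.
-- Partition elimination should limit this to a single partition,
-- whichever file group that partition is mapped to.
SELECT SUM(Amount)
FROM dbo.FactSales
WHERE PeriodYYYYMM = 201408;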
Kind Regards,
Kieran.
Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

Similar Messages

  • Adding Data file to existing primary file group with 1 data file

    Currently our databases are configured to have only 1 data file and 1 log file. I am looking at adding a 2nd data file to the PRIMARY file group, which will be on a separate LUN.
    Will we benefit from adding the 2nd data file (same size as the 1st data file and same autogrowth rate), or should we create a new database with 2 data files (equal size and autogrowth rate) and import the data from the database with the single data file?
    Thanks.
    DJ

    Having another data file pointing to a different physical volume
    will give you better performance gains. Additionally, you should pre-size them (same as the first data file) with the same growth settings (preferably in MB
    instead of percentages).
    It is perfectly OK to add another data file to the PRIMARY file group as well; SQL Server will automatically balance the data across multiple files over time (its proportional-fill algorithm stripes new allocations across the files).
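    A minimal sketch of the ADD FILE step (database name, path, and sizes are placeholders):
    -- Placeholder names/paths: add a second, pre-sized data file to PRIMARY
    -- with fixed MB growth so both files stay evenly balanced.
    ALTER DATABASE MyDatabase
    ADD FILE
    ( NAME = MyDatabase_data2,
    FILENAME = N'E:\SQL\Data\MyDatabase_data2.ndf',
    SIZE = 10240MB,
    FILEGROWTH = 512MB )
    TO FILEGROUP [PRIMARY];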
    HTH

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) creates a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 file groups; an empty partition, file group, and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (0,
    15,
    30,
    45,
    60);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data as residing in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means
    the left file group (File Group 2) is the one retained when merging ranges.
    RANGE LEFT would have retained the data in File Group 3, where it already resided, so no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions)
    on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (-1,
    14,
    29,
    44,
    59);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group is used for all partitions; as partitions are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild
    can be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution; that way, other community members can benefit from your sharing.
    Regards.
    Sofiya Li
    TechNet Community Support

  • How to transfer tables from one file group to another file group in SQL 2008?

    Hello all,
    I have a few issues regarding the transfer of tables from one file group to another in SQL 2008, and also: how can we back up
    and restore a particular database at the file group level?
    Let’s say I have tables stored within different FGs, such as:
    Tables              File group
    Dimension tables    Primary
    Fact tables         FG1, FG2…
    zzz_tables          DEFAULT_FG
    dim.table1          DEFAULT_FG
    dim.table2          DEFAULT_FG
    Here all I want is to transfer dim.table1 and dim.table2 from DEFAULT_FG to the Primary file
    group. Is there a simple method for transferring these tables from one FG to another? I have tried a few things but couldn’t find the exact way, so if someone has a better idea, please share your knowledge; it would be really appreciated.
    Secondly, after moving dim.table1 and dim.table2 from DEFAULT_FG to Primary, I want to back up and restore the database containing only Primary and FG1, FG2…, not
    DEFAULT_FG. Is that possible or not?
    Hope to hear from someone who knows a better approach for this kind of task. Your help will be much appreciated.
    Regards,
    Anil Maharjan

    Well, after a full day of research on this topic, I finally got the solution, and I am so happy. It feels great when all the research
    and hard work doesn't go to waste.
    Finally I got what I was looking for, and I can confirm that I am able to transfer tables from DEFAULT_FG to another FG, even tables
    without a clustered index.
    With the help of the link below I finally got my solution; Roberto’s scripted stored procedure simply works for this.
    Really, thanks to him for his great post, and thanks to all for your responses and your valuable time.
    http://gallery.technet.microsoft.com/scriptcenter/c1da9334-2885-468c-a374-775da60f256f
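    For tables that already have a clustered index, the usual move is a rebuild onto the target file group; a minimal sketch with hypothetical index/column names (heaps need a different approach, such as the scripted one above or a temporary clustered index):
    -- Hypothetical names: rebuilding the clustered index on the target
    -- file group moves the table's data along with it.
    CREATE UNIQUE CLUSTERED INDEX PK_table1
    ON dim.table1 (table1_id)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];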
    Regards,
    Anil Maharjan

  • Change Default File group

    Hi All,
    I have 500GB database with 7 Data files.
    One data file is on X: Drive - Primary file group - Default- Has only one file.
    Other five on Y: Drive. - Secondary file group SEC_1
    I have added a new data file on Z: Drive and created a new secondary file group SEC_2, as Y: Drive was filling up quickly, and also to spread the write load.
    Is it preferable to change the default file group to SEC_2?
    Any help appreciated
    Thanks.
    KRanp.

    The default FG setting is only used when creating new tables. For existing tables you need to move the data (e.g. rebuild the clustered index onto the new file group with CREATE INDEX ... WITH (DROP_EXISTING = ON)).
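    A minimal sketch of both steps (file group name from the question; table and index names are hypothetical):
    -- New tables and indexes will default to SEC_2 from here on.
    ALTER DATABASE MyDatabase MODIFY FILEGROUP SEC_2 DEFAULT;
    -- Existing table: rebuild its clustered index onto SEC_2 to move the data.
    CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (MyTableId)
    WITH (DROP_EXISTING = ON)
    ON SEC_2;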
    Balmukund Lakhani

  • How to add payment advice for XML file field in vendor account group

    Hi All,
    I have a requirement to add the Payment Advice for XML File field in the vendor account group, under the Payment Transactions tab.
    Kindly advise where I can add the above-mentioned field in the vendor account group.
    thanks
    khaja

    done

  • Error occurred while processing the "sales" partition of the "sales" measure group in the cube

    Hi 
    When I ran the job for processing the cube, it showed an error like "error occurred while processing the sales partition of the sales measure group in the cube", but there was no error message in the log files. After getting that error we processed
    the cube manually,
    and at that time the cube executed successfully.
    My aim is for the cube to be processed automatically when the job runs, but it is not working like that.
    Can you suggest a solution?
    thank you
    satyak248

    Hi Satyak248,
    According to your description, you get the error when using a Windows task to process a cube on a schedule, but you can process the cube manually in SSMS successfully, right?
    Since you can process the cube manually, the issue may be caused by the Windows task not being set up correctly. You can try processing the cube using an SSIS package instead. The Analysis Services Processing Task in SQL Server Integration Services (SSIS) allows
    for the processing of one, many, or all Analysis Services objects in an SSIS package. Once the SSIS package is created, a job can be created within SQL Server Management Studio, which allows for scheduling.
    http://www.mssqltips.com/sqlservertip/2994/configuring-the-analysis-services-processing-task-in-sql-server-2012-integration-services/
    Regards,
    Charlie Liao
    TechNet Community Support

  • Read_Only and Read_Write File Groups in Same Database

    We have fairly static reference data in a database we have set to READ_ONLY for a number of reasons, and it has worked well in that state. I am now being asked to change that so we can load data daily into this database. I am thinking about creating a read-write
    filegroup in the database for the loads while still keeping the original tables on a read-only filegroup. I am wondering what issues may occur with this approach; I am concerned about taking this heavily-read database to a read-write state and
    causing issues. It appears the primary data file can't be set to read-only, so I would need to create the read-only filegroup and move all the existing data/tables to that file group. Anyone have comments/experience along these lines?

    When a filegroup is marked as read-only, SQL Server will not bother with page or row locks on the tables or indexes contained in it. This reduces SQL Server overhead and improves performance. Since the data is not changing, index fragmentation does not
    occur, so maintenance such as rebuilding or reorganizing is unnecessary; that saves time and effort too. Also, in SQL Server 2008 and later, you can mark a filegroup as read-only without having exclusive access to the entire database.
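    A minimal sketch of the approach, with hypothetical names:
    -- Keep the static tables on a read-only filegroup and load daily
    -- data into the (read-write) PRIMARY or another filegroup.
    ALTER DATABASE RefDB ADD FILEGROUP FG_Static;
    ALTER DATABASE RefDB ADD FILE
    ( NAME = RefDB_static1,
    FILENAME = N'E:\SQL\Data\RefDB_static1.ndf',
    SIZE = 1024MB )
    TO FILEGROUP FG_Static;
    -- ...move the static tables onto FG_Static (e.g. clustered index
    -- rebuilds), then mark it read-only:
    ALTER DATABASE RefDB MODIFY FILEGROUP FG_Static READ_ONLY;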
    Raju Rasagounder Sr MSSQL DBA

  • Error in checking a primary file

    Hi ,
    I am trying to use the CHECKIN_UNIVERSAL IdcService from a standalone Java class. I have appended the following XML string to my request and am trying to use a primary file from my local machine.
    <code>
    requestString = "<?xml version='1.0' ?><SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"><SOAP-ENV:Body><idc:service xmlns:idc=\"http://www.stellent.com/IdcService/\" IdcService=\"CHECKIN_UNIVERSAL\"><idc:document dDocName=\"" + param1 + "\" dDocAuthor=\"sysadmin\" dDocTitle=\"" + param2 + "\" dDocType=\"" + param3 + "\" dSecurityGroup=\"" + param4 + "\" dDocAccount=''>" +
    "<idc:file name=\"primaryFile\" href=\"D:/soapfile.txt\"></idc:file></idc:document></idc:service></SOAP-ENV:Body></SOAP-ENV:Envelope>";
    </code>
    But when I am trying to invoke the service from a JSP, I am getting the following error in the response:
    <idc:field name="StatusMessage">Content item &#39;test&#39; was not successfully checked in. The content item must have a primary file.</idc:field>
    I need to fix this urgently. Please help!

    I've tried not using CHECKIN_UNIVERSAL. I've written a Java class (extends ServiceHandler) that changes some binder parameters,
    and written a service on top of this:
    <tr>
         <td>TEST_CHECKIN_BYNAME</td>
         <td>DocService
              34
              null
              null
              null<br>
              null</td>
         <td>3:determineID::2:null
    3:doSubService:TEST_CHECKIN_SUB:0:null</td>
    </tr>
    <tr>
         <td>TEST_CHECKIN_SUB</td>
         <td>DocService
              34
              null
              SubService
              null<br>
              null</td>
         <td>3:TEST_prepareMetaData::0:null
    3:prepareCheckinSecurity::0:null
              3:checkSecurity::0:null
              3:doSubService:CHECKIN_NEW_SUB:0:null</td>
    </tr>
    So:
    1. If I put
    m_binder.addTempFile(tmpFileName);
    in my method TEST_prepareMetaData Checkin succeeds.
    2. If I do not put m_binder.addTempFile(tmpFileName); //tmpFileName is the link to file being uploaded
    The same error is thrown: "The content item must have a primary file"
    First question: Why?
    1. If I put
    m_binder.addTempFile(tmpFileName);
    in method TEST_prepareMetaData Checkin succeeds - status message "<idc:field name="StatusMessage">Successfully checked in content item 'TTT_0000001'.</idc:field>";
    BUT there is no content item in the ContentServer!!
    Second question: Why?

  • Index File group on same drive as data files

    I've just found a file group used for indexes on the same drive as the data files.
    Am I correct in saying there is little benefit to this? Should the index file group be on its own spindle?
    Mr Shaw... One day I might know a thing or two about SQL Server!

    There will definitely be a performance gain, provided you are querying for related data that has its indexes on those index filegroups.
    It helps with parallel processing: having data and indexes under multiple disk heads helps read the data in parallel. For more information you can refer to the link below
    https://technet.microsoft.com/en-us/library/ms190433%28v=sql.105%29.aspx
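    For reference, placing an index on its own filegroup looks like this (hypothetical names); the filegroup's files would ideally sit on a separate spindle:
    -- Hypothetical names: the nonclustered index is created on a
    -- dedicated index filegroup rather than the table's filegroup.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON [INDEX_FG];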
    --Prashanth

  • The content item must have a primary file.

    I am trying to check in a new file using the service "CHECKIN_NEW" in UCM 11.1.1.6.
    I checked the *"CHECIN_NEW_TEST.hcst"* in to the content server and ran it by submitting the page to test the service.
    The error comes back like this: Content item '001026' was not successfully checked in. The content item must have a primary file.
    And the log is :
    !csUserEventMessage,wladmin,192.168.6.250:16200!$!csUnableToCheckIn,001024!csCheckinPrimaryFileRequired
    intradoc.common.ServiceException: !csUnableToCheckIn,001024!csCheckinPrimaryFileRequired
    *ScriptStack CHECKIN_NEW_SUB
    3:doScriptableAction,**no captured values**3:doSubService,**no captured values**CHECKIN_NEW_SUB,**no captured values**3:validateStandard,dDocName=001024
    at intradoc.server.ServiceRequestImplementor.buildServiceException(ServiceRequestImplementor.java:2115)
    at intradoc.server.Service.buildServiceException(Service.java:2326)
    at intradoc.server.Service.createServiceExceptionEx(Service.java:2320)
    at intradoc.server.Service.createServiceException(Service.java:2315)
    at intradoc.server.DocServiceHandler.validateStandard(DocServiceHandler.java:1339)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:86)
    at intradoc.common.ClassHelperUtils.executeMethodReportStatus(ClassHelperUtils.java:324)
    at intradoc.server.ServiceHandler.executeAction(ServiceHandler.java:79)
    at intradoc.server.Service.doCodeEx(Service.java:603)
    at intradoc.server.Service.doCode(Service.java:575)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1643)
    at intradoc.server.Service.doAction(Service.java:547)
    at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1458)
    at intradoc.server.Service.doActions(Service.java:542)
    at intradoc.server.ServiceRequestImplementor.executeSubServiceCode(ServiceRequestImplementor.java:1322)
    at intradoc.server.Service.executeSubServiceCode(Service.java:4023)
    at intradoc.server.ServiceRequestImplementor.executeServiceEx(ServiceRequestImplementor.java:1200)
    at intradoc.server.Service.executeServiceEx(Service.java:4018)
    at intradoc.server.Service.executeService(Service.java:4002)
    at intradoc.server.Service.doSubService(Service.java:3912)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:86)
    at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:310)
    at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:295)
    at intradoc.server.Service.doCodeEx(Service.java:620)
    at intradoc.server.Service.doCode(Service.java:575)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1643)
    at intradoc.server.Service.doAction(Service.java:547)
    at intradoc.server.Service.doScriptableAction(Service.java:3964)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:86)
    at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:310)
    at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:295)
    at intradoc.server.Service.doCodeEx(Service.java:620)
    at intradoc.server.Service.doCode(Service.java:575)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1643)
    at intradoc.server.Service.doAction(Service.java:547)
    at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1458)
    at intradoc.server.Service.doActions(Service.java:542)
    at intradoc.server.ServiceRequestImplementor.executeActions(ServiceRequestImplementor.java:1391)
    at intradoc.server.Service.executeActions(Service.java:528)
    at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:737)
    at intradoc.server.Service.doRequest(Service.java:1956)
    at intradoc.server.ServiceManager.processCommand(ServiceManager.java:437)
    at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265)
    at intradoc.idcwls.IdcServletRequestUtils.doRequest(IdcServletRequestUtils.java:1354)
    at intradoc.idcwls.IdcServletRequestUtils.processFilterEvent(IdcServletRequestUtils.java:1731)
    at intradoc.idcwls.IdcIntegrateWrapper.processFilterEvent(IdcIntegrateWrapper.java:222)
    at sun.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at idcservlet.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:87)
    at idcservlet.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:305)
    at idcservlet.common.ClassHelperUtils.executeMethodWithArgs(ClassHelperUtils.java:278)
    at idcservlet.ServletUtils.executeContentServerIntegrateMethodOnConfig(ServletUtils.java:1704)
    at idcservlet.IdcFilter.doFilter(IdcFilter.java:457)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:442)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:139)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:442)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3730)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    The CHECIN_NEW_TEST.hcst is as follows:
    <html>
    <head>
    <$include std_html_head_declarations$>
    </head>
    <$include body_def$>
    <$include std_page_begin$>
    <form name="cuxCheckinNewPG" method="get" action="<$HttpCgiPath$>">
    <input type="hidden" name="IdcService" value="CHECKIN_NEW" >
    <input type="hidden" name="dDocAuthor" value=<$UserName$>
    <$include idc_token_form_field$>
    <table width="53%" height="100" border="1">
    <tr>
    <td width="306" height="26">dSecurityGroup</td>
    <td width="325">
    <input type="text" name="dSecurityGroup" value="Public" /> </td>
    </tr>
    <tr>
    <td width="306" height="26">dDocTitle</td>
    <td width="325">
    <input type="text" name="dDocTitle" /> </td>
    </tr>
    <tr>
    <td>primaryFile</td>
    <td colspan="3"><INPUT NAME="primaryFile" TYPE="file"> </td>
    </tr>
    <tr>
    <td colspan="2" align="middle">
    <input type="submit" value="SubmitBtn" name="checkinSubmit"></td>
    </tr>
    </table>
    </form>
    <$include std_page_end$>
    </body>
    </html>
    Thanks
    Mandy
    Edited by: user8898100 on Jul 2, 2012 6:16 PM
    Edited by: user8898100 on Jul 2, 2012 6:17 PM

    It's really more of an HTML issue with the form itself than a Content Server issue.
    <form name="cuxCheckinNewPG" method="get" action="<$HttpCgiPath$>">You can't submit files using a GET method. This attribute must be "POST".
    <form name="cuxCheckinNewPG" method="POST" enctype="multipart/form-data" action="<$HttpCgiPath$>">

  • [SOLVED] Install arch on a partition and still access files on other

    Basically what I want to achieve is a dual-boot system on my laptop where I can access files in some sort of shared storage, or just be able to access my Windows files when I'm booted up in Arch. Is there a way to achieve this?
    Last edited by icyfox101 (2013-05-25 13:00:19)

    Yes you can: you could have a partition for each OS, and then a third partition for your shared files (I think it would have to be NTFS for the shared partition).
    Or you could just mount the Windows partition from Arch.
    https://wiki.archlinux.org/index.php/NTFS-3G

  • Question about primary file

    When we create a contributor data file, the primary file, i.e. default.xml, is empty. From contribution mode, the changes that are made get reflected in the XML when saved. Any idea which service is being called to update the default.xml file?
    Consider the following scenario: we have an hcsp form with custom metadata fields and a text area field.
    The primary file field is hidden, and I am using Site Studio variables to check in the default.xml file.
    When the data is entered, I call the check-in service upon submission. Is there a way to store the textArea content in the default.xml file?

    Hi ,
    Can you elaborate on exactly what requirement you are looking at?
    Thanks
    Srinath

  • How to set the primary file when checking in

    I am trying to check in a document from a Java class. Before I execute the CHECKIN_NEW_SUB service, how do I set the primary file? I tried using putLocal("primaryFile", "C:/xxx.txt"), but I always get the error msg
    "Content item 'xxxxxx' was not successfully checked in. The content item must have a primary file"
    Thanks.

    Hi,
    Thanks for the response.
    I am trying to update the document through the UCM application with a new revision number. When I save the changes, it gives the below error.
    "Content Server Request Failed
    Content item '005195' was not successfully checked in. The content item must have a primary file."
    Detailed error from log file.
    <!-- IDCLOG: Error: (22/09/2010 9:44) !csUserEventMessage,sysadmin,grants02:10018!$!csUnableToCheckIn,005195!csCheckinPrimaryFileRequired -->
    <tr><td>Error</td><td>22/09/2010 9:44</td><td>Event generated by user 'sysadmin' at host 'Test102:10018'. Content item '005195' was not successfully checked in. The content item must have a primary file. [ <a style="color:993333" href="javascript:if(typeof show!='undefined')show('0.286966295691763')">Details</a> ]
    <div id="0.286966295691763" style="display:none;" class="details"><pre><code>An error has occurred. The stack trace below shows more information.
    !csUserEventMessage,sysadmin,grants02:10018!$!csUnableToCheckIn,005195!csCheckinPrimaryFileRequired
    intradoc.common.ServiceException: !csUnableToCheckIn,005195!csCheckinPrimaryFileRequired
         at intradoc.server.ServiceRequestImplementor.buildServiceException(ServiceRequestImplementor.java:1760)
         at intradoc.server.Service.buildServiceException(Service.java:1997)
         at intradoc.server.Service.createServiceExceptionEx(Service.java:1991)
         at intradoc.server.Service.createServiceException(Service.java:1986)
         at intradoc.server.DocServiceHandler.validateStandard(DocServiceHandler.java:1119)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at intradoc.common.IdcMethodHolder.invokeMethod(ClassHelperUtils.java:617)
         at intradoc.common.ClassHelperUtils.executeMethodReportStatus(ClassHelperUtils.java:293)
         at intradoc.server.ServiceHandler.executeAction(ServiceHandler.java:79)
         at intradoc.server.Service.doCodeEx(Service.java:490)
         at intradoc.server.Service.doCode(Service.java:472)
         at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1360)
         at intradoc.server.Service.doAction(Service.java:452)
         at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1201)
         at intradoc.server.Service.doActions(Service.java:447)
         at intradoc.server.ServiceRequestImplementor.executeSubServiceCode(ServiceRequestImplementor.java:1071)
         at intradoc.server.Service.executeSubServiceCode(Service.java:3497)
         at intradoc.server.ServiceRequestImplementor.executeServiceEx(ServiceRequestImplementor.java:942)
         at intradoc.server.Service.executeServiceEx(Service.java:3492)
         at intradoc.server.Service.executeService(Service.java:3476)
         at intradoc.server.Service.doSubService(Service.java:3465)
         at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at intradoc.common.IdcMethodHolder.invokeMethod(ClassHelperUtils.java:617)
         at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:279)
         at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:264)
         at intradoc.server.Service.doCodeEx(Service.java:507)
         at intradoc.server.Service.doCode(Service.java:472)
         at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1360)
         at intradoc.server.Service.doAction(Service.java:452)
         at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1201)
         at intradoc.server.Service.doActions(Service.java:447)
         at intradoc.server.ServiceRequestImplementor.executeActions(ServiceRequestImplementor.java:1121)
         at intradoc.server.Service.executeActions(Service.java:433)
         at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:635)
         at intradoc.server.Service.doRequest(Service.java:1707)
         at intradoc.server.ServiceManager.processCommand(ServiceManager.java:359)
         at intradoc.server.IdcServerThread.run(IdcServerThread.java:197)

  • Payment Advice and Payment Files

    At F110, I know it clears the open line items and creates checks, but what are the Payment Advice and Payment Files that are created, and where are they created? How do I see them? If a file is to be sent to the bank, how do I send it? Please can anyone elaborate.
    Satish

    Hi Satish
    Payment advice is something you send as a notice to the vendor who is getting paid. Generally, when creating the variant for the payment program, you would have clicked the 'payment advice' check box for it to be printed along with the check. If you haven't clicked that, please do, and you will see two pages: one with the check and the other with the advice.
    The second part of your question is about files and how they need to be sent to the bank. If you are going to send, for example, an ACH payment file to the bank, you need to use program RFFOUS_T to generate a file. The variant for this program has to be configured with output medium '0' and file identifier 'A'. But before that, you need to make sure that in FBZP -> House banks -> DME config there is something entered in the 'Company number' field. Once that is done, please check all the standard settings, like the payment method in country, in company code, etc. Also make sure the vendor's bank details are in the vendor master.
    Then, if you do the payment proposal and check the box that says 'create payment medium', and then go to Environment -> DME Admin -> Payment medium in the F110 screen, it will take you to a user screen; if you hit execute it will pull up the file that you generated. When you do the payment run with 'create payment medium' checked, it will generate the actual file in the DME admin screen, and the file can be sent to the bank from there. Play with it for a little bit and I am sure you will understand it.
    Thanks
