Sliding window approach

Hi,
I am planning to implement a sliding window approach to segment time series data.
Could anyone help me get started with building a sliding window in Java?

svpriyan00 wrote:
(Spelling notes: "Hai" should be "hi" or "hello", "any 1" should be "anyone", and "java" is spelled "Java".)
As for the actual question: what was wrong with the help offered on [your other thread on this topic|http://forums.sun.com/thread.jspa?threadID=5408424]?
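A minimal sketch of the kind of sliding window the question asks about (the class and method names here are hypothetical, not from any library): split the series into fixed-size windows that advance by a step. A step smaller than the window size gives overlapping windows; a step equal to the size gives non-overlapping ("tumbling") ones.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: segment a time series into fixed-size sliding windows.
public class SlidingWindow {
    static List<double[]> windows(double[] series, int size, int step) {
        List<double[]> out = new ArrayList<>();
        // Advance the window start by `step` until a full window no longer fits.
        for (int start = 0; start + size <= series.length; start += step) {
            double[] w = new double[size];
            System.arraycopy(series, start, w, 0, size);
            out.add(w);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] sales = {23, 12, 18, 23, 24, 30, 89};
        // Window size 3, advancing one point at a time.
        System.out.println(windows(sales, 3, 1).size()); // prints 5
    }
}
```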

Similar Messages

  • Sliding Window Protocol

    Hi guys,
    I am interested in doing "Conversion of Time Series Data Using a Sliding Window Approach" as a data mining project,
    so I have to implement a Java program that uses a sliding window approach.
    Does anyone have hints or ideas on how I can start?
    Thanks

    svpriyan00 wrote:
    its like i have some data like this:
    | month | sale |
    | 1 | 23 |
    | 2 | 12 |
    | 3 | 18 |
    | 4 | 23 |
    | 5 | 24 |
    | 6 | 30 |
    | 7 | 89 |
    for this I need to use a sliding window approach, like window size 3, to predict.
    I'd say you would probably need data going back 15 or 16 months:
    c0 = sum of the last 3 months
    c1 = sum of the same 3 months last year
    d1 = data for this month last year
    predicted data for this month: d0 = d1 * c0 / c1
    but maybe I misunderstood what you are after.
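    The ratio forecast above can be sketched in Java (a hypothetical illustration of the formula in the reply, not library code; it assumes at least 12 + window months of history so last year's values exist):

    ```java
    // Hypothetical sketch of the seasonal-ratio forecast described above:
    // d0 = d1 * c0 / c1, over a sliding window of `window` months.
    public class SlidingWindowForecast {
        static double predict(double[] sales, int month, int window) {
            double c0 = 0, c1 = 0;
            for (int i = 1; i <= window; i++) {
                c0 += sales[month - i];      // sum of the last `window` months
                c1 += sales[month - 12 - i]; // the same months one year earlier
            }
            double d1 = sales[month - 12];   // this month, last year
            return d1 * c0 / c1;             // predicted value for `month`
        }

        public static void main(String[] args) {
            // 15 months of made-up sales; index 0 is month 1.
            double[] sales = {23, 12, 18, 23, 24, 30, 89, 40, 35, 28, 22, 19, 25, 14, 20};
            System.out.println(predict(sales, 15, 3)); // forecast for month 16
        }
    }
    ```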

  • I dont know where to begin, what is a sliding window?

    The problem is to create a "smooth" image / "oil painting" image for a given image. You must create a GUI that includes a menu which uses a JFileChooser to load/save an image file (a plain PGM file). The menu also includes an exit menu item. The GUI should also include two radio buttons for smoothing/oil painting, and another group of radio buttons for choosing a sorting method; this group of buttons should have no effect if the smooth button is chosen. A text field should be included to enter the size of the sliding window. The GUI should also include a JScrollPane to display the log of processing time. My GUI looks like the following:
    In addition to your source code, you should also turn in a short report listing the experiments you perform and the processing times. You should run your code with different methods (4 methods) and window sizes ranging from (1 x 1) to (50 x 50). In your report, answer the following questions: Which one of the four methods gives the best running time for the (3 x 3) window? Which is the best for the (5 x 5) window? Which is the best for the (10 x 10) window? Which is the best for the (20 x 20) window? Which is the best for the (50 x 50) window? What is the maximal window size for which the processed image is still recognizable for the human eye?
    Here is how to proceed. Begin with the underlying classes. Develop each one with a main method for testing. As you finish one, move on to the next.
    1. Create an ImageProcess class that extends JFrame and includes all necessary GUI components.
    2. Create a SlidingWindow class that takes an image array and a strategy object that implements Measurer:
    interface Measurer {
    int measure(Object m); // returns the measure of the object
    }
    The class includes a getNewGrayScale method that takes a pixel position in the new image and returns the integer gray-scale value for that pixel, a setWindowSize method, and a setSortingMethod method.
    3. Two strategy classes, ByAvg and ByMedian, implement Measurer: one calculates the average and the other finds the median of an array of integers. The ByMedian class should include three sorting methods that implement sorting algorithms: quicksort, counting sort, and insertion sort. Note: you may use the implementations of quicksort and insertion sort from your textbook, but you must implement your own counting sort!
    4. The PGM (portable gray map) image format requires 4 header entries followed by the grayscale values (some files include comment lines starting with the character #). The four entries are: the literal "P2", an integer giving the x dimension, an integer giving the y dimension, and an integer giving the maximum grayscale value. There should be x times y gray-level values after these 4 numbers. Part of a sample plain PGM image, bug1.pgm, is shown below. You may download a Java file to view the image. Caution: my code displays large image files very slowly because it paints every pixel as a square. You get bonus points if you can solve that problem.
    P2
    # Created by IrfanView
    40 42
    255
    192 192 192 192 192 192 192 192 192 192 192 192 192 192 192 192 192
    192 192 192 192 192 192 192 192 192 192 197 197 197 191 192 192
    5. The processing procedure works as follows. For each pixel of the image, consider an (n x n) window centered on that pixel, compute the mean/median of the gray-level values in the window, and use that mean/median as the gray-level value of the corresponding pixel in the new image. The median is the middle value of a sorted sequence. For example, consider the following (3 x 3) window of pixels: 11 90 74 71 14 92 20 87 68. The sorted sequence is <11, 14, 20, 68, 71, 74, 87, 90, 92>, and its middle value, 71, is the median of the window. For an even-length sequence the two middle values are averaged: the median of <1,4,7,9> is (4+7)/2 = 5 (integer arithmetic). Replacing every pixel with the window average produces a smoothed image; using median values produces the oil-painting effect. On the negative side, the new image is blurrier than the original. Corners and edges of the image need to be handled differently: for this project, at the boundaries the sliding window is resized to include only the image pixels that fall within it.
    6. Display the image. Your image should be resizable, but the height-to-width ratio should remain fixed, i.e. draw each pixel as a square. Extra credit if you figure out how to directly display an image in PGM format in Java.
    7. Programs to convert a file to pgm format: IrfanView, ImageMagick.
    Be sure to document every class and method (including the class supplied for you). Include a printed UML diagram for this project. Don't forget to check the documentation standards and include the signed cover sheet with your printed submission.
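    The window step from point 5 can be sketched as follows (hypothetical names; a simplified stand-in for the Measurer/ByAvg/ByMedian design, not a full solution): for each pixel, collect the window clipped at the image border and replace the pixel with the window's mean or median.

    ```java
    import java.util.Arrays;

    // Hypothetical sketch of the (n x n) sliding-window filter from step 5.
    public class WindowFilter {
        static int[][] apply(int[][] img, int n, boolean useMedian) {
            int h = img.length, w = img[0].length, half = n / 2;
            int[][] out = new int[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    // At borders the window is resized to the pixels inside the image.
                    int y0 = Math.max(0, y - half), y1 = Math.min(h - 1, y + half);
                    int x0 = Math.max(0, x - half), x1 = Math.min(w - 1, x + half);
                    int[] vals = new int[(y1 - y0 + 1) * (x1 - x0 + 1)];
                    int k = 0;
                    for (int i = y0; i <= y1; i++)
                        for (int j = x0; j <= x1; j++)
                            vals[k++] = img[i][j];
                    out[y][x] = useMedian ? median(vals) : mean(vals);
                }
            }
            return out;
        }

        static int mean(int[] v) {         // smoothing
            int sum = 0;
            for (int x : v) sum += x;
            return sum / v.length;
        }

        static int median(int[] v) {       // oil painting
            Arrays.sort(v); // stand-in for the quicksort/counting/insertion sort strategies
            int m = v.length / 2;
            // Even-length windows average the two middle values, as in the example.
            return (v.length % 2 == 1) ? v[m] : (v[m - 1] + v[m]) / 2;
        }

        public static void main(String[] args) {
            int[][] img = {{11, 90, 74}, {71, 14, 92}, {20, 87, 68}};
            System.out.println(apply(img, 3, true)[1][1]); // prints 71
        }
    }
    ```

    With the (3 x 3) example from the text, the center pixel becomes 71 under the median filter, matching the worked example above.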

    i need code for connection b/n jsp and mysql
    What has this got to do with this thread?

  • Sliding window scenario in PTF vs. availability of recently loaded data in the staging table for reporting purposes

    Hello everybody, I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern, though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? I don't have an issue with the second staging table, the one used for switching out the old partition. My concern is with the first staging table, the one we use for switch-in purposes: since it is non-partitioned and holds the new data, how are we going to access this data for reporting purposes before we switch it in to our target partitioned table? Say this staging table holds one month's worth of data and we switch it in at the end of the month. Correct me if I am wrong: one way I can think of to access this non-partitioned staging table is by creating views, but we don't want to change our code.
    Could you share your thoughts and experiences?
    We really appreciate your help.

    Hi BG516,
    According to your description, you need to implement table partitioning on some of the large tables in your data warehouse, and the partitioned table should hold only one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are more than one month old into it, leaving the records that are less than one month old in the table in your data warehouse. Then create a job that copies the data from the partitioned table into the non-partitioned table on the last day of each month, so that the partitioned table contains only the data for the current month. Please refer to the links below for details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
    If this is not what you want, please provide us more information, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for your runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across 4 file groups; an empty partition, file group, and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    ( DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0) )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script, then run the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data but it can be dropped. Although the system views report the data as being in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3, where it already resided, and no INDEX REBUILD would be necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    ( DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0) )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; when partitions are created and dropped on multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views can be misleading. Thanks to Manuj and Chris for a lot of help investigating this.
    NOTE 10/03/2014 - The solution
    The solution is embarrassingly easy: I was using the wrong boundary points for the MERGE (both RANGE LEFT and RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT sliding windows using multiple file groups, and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution; that way, other community members can benefit from your sharing.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Sliding window for historical data purge in multiple related tables

    All,
    It is a well-known question how to efficiently back up and purge historical data based on a sliding window.
    I have a group of tables that all have to be backed up and purged based on a sliding time window. These tables have FKs relating them to each other, and these FKs are not necessarily the timestamp column. I am considering partitioning all of these tables on the timestamp column, so I can export the out-of-date partitions and then drop them. The price I pay with this design is that the timestamp column is duplicated many times across the parent table, child tables, and grand-child tables; the value is the same everywhere, but I have to partition every table on this column.
    It's very much like the Statspack tables: one stats$snapshot table and many child tables storing the actual statistic data. I am wondering how statspack.purge does this, since using a DELETE statement is inefficient and time-consuming. In the Statspack tables, snap_time is stored only in the stats$snapshot table, not in every child table, and the tables are not partitioned; I guess the procedure uses DELETE statements.
    Any thoughts on other good design options? Or how would you optimize backup and purge of Statspack historical data? Thanks!

    hey oracle gurus, any thoughts?

  • RMS sliding window

    Hi Everyone
    I am calculating the RMS over a 50-sample window (sampling rate 1000 Hz). I am using the Biosignal RMS.vi, but I want to change the window so that it uses the 25 samples before and the 25 samples after the point I want to calculate. I am still new to LabVIEW and have been running in circles a little. Help is appreciated! I have attached the VI I am using along with some sample data.
    Thanks again!
    Attachments:
    Raw data.xlsx ‏52 KB
    RMS VI.vi ‏16 KB

This is basically the same answer David gave you, but stated a little differently.  Consider what you want to do, namely take the RMS value of a 50-sample sliding window.  Let's say your total number of samples is 100.  How many 50-sample sliding windows can you "fit"?  The answer is 100 - 50 + 1 = 51 (because the first window has samples 1 to 50, and the 51st has samples 51 to 100 -- any more windows won't have 50 samples).
Now consider how you want to interpret the values in, say, the first window, samples 1 to 50.  The RMS value you compute represents responses over the entire 50-sample period, so you could reasonably say it represents the "midpoint" value (which would actually occur halfway between samples 25 and 26, but that's a detail).  So if you ran your sliding window over the 100 points, you could say that the resulting points "represent the RMS values of roughly samples 26 through 75".  Note that dealing with the "end points" when doing digital filtering is always an interesting question -- a reasonable thing to do is to start sampling before you need to save the data (say, 25 samples before) and continue sampling after you need to save (say, by 25 samples).  Alternatively, throw away the first and last 25 samples, or "make your best guess" at the values (one method is to say "the first 25 samples are the same as sample 26", another is a least-squares polynomial interpolation from points 26-75).
    Bob Schor
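To make the recipe above concrete outside LabVIEW, here is a rough Java sketch of a centered sliding RMS; the end-point handling (simply skipping points where the full window does not fit) is one of the options Bob describes, and the class and method names are illustrative, not what the Biosignal RMS.vi actually does:

```java
public class CenteredRms {
    /**
     * Computes the RMS over a window of `half` samples before and `half`
     * samples after each point (window length 2*half, e.g. half = 25 for
     * a 50-sample window). Points too close to either end, where the full
     * window does not fit, are skipped, so the result has
     * data.length - 2*half values.
     */
    public static double[] centeredRms(double[] data, int half) {
        int n = data.length - 2 * half;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sumSq = 0.0;
            // window covers samples [i, i + 2*half), centered on sample i + half
            for (int j = i; j < i + 2 * half; j++) {
                sumSq += data[j] * data[j];
            }
            out[i] = Math.sqrt(sumSq / (2 * half));
        }
        return out;
    }
}
```

For long records, the inner loop can be replaced with a running sum of squares (add the sample entering the window, subtract the one leaving) to make each point O(1).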

  • About sliding window

Does anyone know where to find sample code for implementing a sliding window in Java?
thx

You mean a splitter? Check out the tutorial.
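Since no sample code was actually given here: a minimal Java sketch of a sliding window that segments a series into fixed-size, optionally overlapping windows (class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class SlidingWindow {
    /**
     * Splits a series into windows of the given size, advancing the
     * start by `step` samples each time. step = 1 gives a classic
     * sliding window; step = size gives non-overlapping segments.
     */
    public static List<double[]> windows(double[] series, int size, int step) {
        List<double[]> result = new ArrayList<>();
        for (int start = 0; start + size <= series.length; start += step) {
            double[] w = new double[size];
            System.arraycopy(series, start, w, 0, size);
            result.add(w);
        }
        return result;
    }
}
```

Each window can then be summarized (mean, slope, etc.) to segment or predict over the time series.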

  • "Sliding window" with table partition

    Hi,
    I have a partitioned table with one partition containing history data and the other contains current (on-line) data. The table is range partitioned with date as the key and I have enabled row movements.
    I would like to have the on-line partition to store data for the last 24 months. When data (inserted earlier) gets out of date, I would like the rows to be moved to the history partition.
    My question is: what is the best implementation for such a "sliding window" technique?
    Do I have to update the partition key (date) for all rows to trigger the row movements?
    /johan

My suggestion would be to define a second column of type NUMBER on which the partitioning is based, let's say 0 for current and 1 for history data.
Then you can schedule a job running overnight or every weekend that does the update:
update tab
   set old_new_flag = 1
 where old_new_flag = 0
   and insert_ts < sysdate - 365 * 2;

  • Sliding window

    I have an interesting problem that I'd welcome some thoughts on.
    I need to track a proportion of success/failures in a sliding window of discrete events.
    In English :-) the user says "If at any time 7 out of the last 10 events were failures, let me know."
    Now this is easy enough to do with a circular buffer of booleans (for example). But I will potentially need to keep very many (x1000s) of these counters and the user parameters could get very big (like 900 out of 1000).
    Can anyone think of a more efficient way of doing this? My best idea so far involves the circular buffer - of bits, wrapped in an array of ints. This is workable but there must be a better way.
    Any and all replies welcome. Duke $$ for good or interesting suggestions ;-)

Bugger
Forgot to say a number of things about the algorithm I just posted. First, it is not linear, so it only keeps an approximate track of failures.
Second, the normalised threshold weight should not be used as the trigger
criterion as-is, because that assumes a linear response (it tends to be too large, as
this algorithm produces an exponential decay). A better threshold value can be calculated by using the normalisedThreshold as a starting point and
multiplying it by approximately 0.7. (Sorry, can't remember why it's 0.7-ish; something to do with Gaussians, which this filter approximates quite accurately.)
Third, I hadn't tried it, so there were a few errors in it. Corrected code follows:
/*
 * ConditionCheck.java
 * Created on 02 July 2003, 23:34
 */
package org.fudge.TestCondition;

/**
 * @author Matthew Fudge copyright 2003
 */
public class ConditionCheck {

    private int numFailures = 0;
    private int windowSize = 0;
    private double oldCoeff = 0.0;
    private double newCoeff = 0.0;
    private double oldValue = 0.0;
    private double normalisedThreshold = 0.0;

    /** Creates a new instance of ConditionCheck */
    public ConditionCheck() {
    }

    /**
     * Set things up such that numFailures out of windowSize causes
     * notification.
     */
    public void setConditions(int numFailures, int windowSize) {
        // do some sanity checks on parameters here
        this.numFailures = numFailures;
        this.windowSize = windowSize;
        // set up the multiplicative coefficients;
        // for windowSize = 100 this creates coefficients of
        // 0.99 and 0.01 for old and new respectively
        this.oldCoeff = (1.0 / (double) windowSize) * (windowSize - 1);
        this.newCoeff = 1.0 / (double) windowSize;
        // set up the threshold such that if 80 failures out of 100 are
        // to be the trigger, normalisedThreshold is 0.8
        this.normalisedThreshold = (1.0 / (double) windowSize) * numFailures;
        // and multiply by the mystic 0.7
        this.normalisedThreshold *= 0.7;
    }

    /**
     * Tests if the condition has been met and returns true if so.
     * eventStatus should be passed as true if a failure has NOT occurred,
     * false if one has.
     * @return true if the condition is met (i.e. if 80 out of 100 errors
     * are found), false otherwise
     */
    public boolean conditionTriggered(boolean eventStatus) {
        double newValue = 1.0;
        if (eventStatus) {
            newValue = 0.0;
        }
        // the funky math: for 80 out of 100 errors we multiply the
        // old value by 0.99 and the new value by 0.01
        this.oldValue = (this.oldValue * this.oldCoeff) + (newValue * this.newCoeff);
        System.out.println(oldValue);
        // now check whether the value has reached the threshold
        return this.oldValue >= this.normalisedThreshold;
    }
}
matfud
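For comparison with the approximate filter above, the exact approach the original poster sketched (a circular buffer of bits wrapped in an array of longs) costs only about windowSize/8 bytes per counter and O(1) work per event, so thousands of counters with windows of 1000 remain cheap. A rough Java sketch; the class and method names are invented:

```java
public class FailureWindow {
    private final long[] bits;       // one bit per event in the window
    private final int windowSize;
    private final int threshold;
    private int pos = 0;             // slot of the oldest event in the window
    private int seen = 0;            // events recorded so far (caps at windowSize)
    private int failures = 0;        // failures currently inside the window

    public FailureWindow(int threshold, int windowSize) {
        this.threshold = threshold;
        this.windowSize = windowSize;
        this.bits = new long[(windowSize + 63) / 64];
    }

    /**
     * Records one event (failed = true for a failure). Returns true once
     * at least `threshold` of the last `windowSize` events were failures.
     */
    public boolean record(boolean failed) {
        int word = pos / 64;
        long mask = 1L << (pos % 64);
        if (seen == windowSize) {
            // the slot we are about to overwrite falls out of the window
            if ((bits[word] & mask) != 0) failures--;
        } else {
            seen++;
        }
        if (failed) {
            bits[word] |= mask;
            failures++;
        } else {
            bits[word] &= ~mask;
        }
        pos = (pos + 1) % windowSize;
        return failures >= threshold;
    }
}
```

Unlike the exponential filter, this keeps an exact count, so "900 out of the last 1000" triggers precisely when it should, at the cost of 1000 bits (125 bytes) per counter.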

  • Sliding window technique

    Hi,
I am doing a project in image processing. I would like to know what the sliding window technique is, and the purpose of using it in image processing.
    Thank you

    Hello,
in image processing, the sliding window technique means moving a window along the image and performing some processing on the underlying sub-image at each position. For example, image convolution uses a sliding window. So does pattern matching.
    See the simple attached example.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    Attachments:
    10.10.2013 13-48-13.00.gif ‏268 KB
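As a concrete illustration of the technique, here is a rough Java sketch of sliding-window smoothing (a mean filter); in this sketch, border pixels whose window would extend past the image are simply left unchanged, and the names are made up for illustration:

```java
public class MeanFilter {
    /**
     * Smooths a grayscale image by sliding a (2r+1) x (2r+1) window over
     * it and replacing each pixel with the mean of the window.
     */
    public static double[][] smooth(double[][] img, int r) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (y < r || x < r || y >= h - r || x >= w - r) {
                    out[y][x] = img[y][x]; // border: window does not fit
                    continue;
                }
                double sum = 0.0;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++)
                        sum += img[y + dy][x + dx];
                out[y][x] = sum / ((2 * r + 1) * (2 * r + 1));
            }
        }
        return out;
    }
}
```

Replacing the mean with another operation on the same window (a weighted sum for convolution, a similarity score for pattern matching) gives the other uses mentioned above.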

  • Sliding Window Averaging

    Can someone tell me the best/easiest way to do sliding window averaging on
    my data. I'd like to do sliding window averaging over say 10 points. I use
    VB6 and Measurement Studio 1.0/CW3.0. I can't seem to find an obvious way
    to do it from looking through the help files.
    Thanks

    Hi,
It looks like you can definitely use some of the DSP or statistics functions from Measurement Studio. Here are a couple of pointers:
Take multiple samples and perform averaging. Instead of recording a single point every second, I would recommend acquiring 100 samples @ 1000 samples/sec, taking the average of those samples, and sending that to the graph and to the moving average. This will clean up each data point a lot (especially when working with load cells on something that is changing mass).
Take a look at the functions in CWStat; there is a linear fit function that you could use to extract the slope information without the need for averaging. It uses the least squares method, so the result should be accurate.
If the data has lots of noise you can perform some filtering; a Butterworth low-pass filter (found in CWDSP) could help reduce the high-frequency variations in your data and give you better results.
There are some good examples that ship with MStudio that can help you get started with the DSP and Stat functions. Also, the MStudio reference help contains some valuable information and small pieces of code.
    I hope this helps.
    Regards,
    Juan Carlos.
    N.I.
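The question is about VB6/Measurement Studio, but the usual trick behind a 10-point sliding-window average is language independent: keep a running sum, add each new sample, and subtract the sample that leaves the window. A rough Java sketch (names are illustrative):

```java
public class MovingAverage {
    /**
     * Sliding-window average of the given width, kept incrementally:
     * each new sample adds to a running sum and the sample that leaves
     * the window is subtracted, so the cost per output point is O(1).
     * Returns one value per full window (data.length - width + 1 points).
     */
    public static double[] movingAverage(double[] data, int width) {
        double[] out = new double[data.length - width + 1];
        double sum = 0.0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i];
            if (i >= width) sum -= data[i - width];
            if (i >= width - 1) out[i - width + 1] = sum / width;
        }
        return out;
    }
}
```

With width = 10 this gives exactly the 10-point sliding average asked about, without recomputing the whole sum at every step.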

  • Sliding window compression, need some help

Hey, I'm doing a sliding window (LZ77) compression project in Java and am trying to figure out whether there is already a class in Java I can use for the sliding window itself (basically a queue). Any help would be greatly appreciated; thanks in advance. Also, any comments on this algorithm would be welcome.

    How about an ArrayList (or Vector)? Queue with add(Object), dequeue with remove(0)
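Expanding on the reply: ArrayList.remove(0) is O(n), so in modern Java an ArrayDeque is the more idiomatic bounded FIFO. A rough sketch of an LZ77-style search buffer (class and method names are invented; a real matcher would probably want a circular byte[] for O(1) random access into the window):

```java
import java.util.ArrayDeque;

public class SearchBuffer {
    // the LZ77 "search buffer" as a bounded FIFO of recently seen bytes
    private final ArrayDeque<Byte> window = new ArrayDeque<>();
    private final int capacity;

    public SearchBuffer(int capacity) {
        this.capacity = capacity;
    }

    /** Slides the window forward by one byte, dropping the oldest if full. */
    public void slide(byte b) {
        if (window.size() == capacity) {
            window.removeFirst(); // oldest byte falls out of the window
        }
        window.addLast(b);
    }

    /** The oldest byte still inside the window. */
    public byte oldest() {
        return window.peekFirst();
    }

    public int size() {
        return window.size();
    }
}
```

The encoder would call slide() for every byte it emits, and search the window contents for the longest match when producing (offset, length) pairs.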

  • Sliding window over wavetrain in waveform graph?

At NIWeek 2006, Michael Cerna gave a very nice presentation in the area of Math & Signal Processing.  One of his example VIs, I believe entitled 'Cursor-based Measurement.vi', showed an interesting method of sliding a window over the signal to compute local properties over just the portion of the waveform that the window covered.  The windowed waveform was highlighted in a different color than the remainder of the waveform.  I have done something similar with a set of slide controls that allows the length and position of the window to be changed by moving the slides.  I am trying to find his example in both the LabVIEW 7.1 and LabVIEW 8 examples and cannot locate it.  I want to see how he did what he did, and perhaps modify it so that one can lengthen the window by clicking and dragging rather than entering a length into a numeric control.  If this example was written in LabVIEW 8.2, I wonder if it could possibly be saved to 8.0 and posted here.
    Thanks,
    Don

In case anybody wants it, here's a way to use a picture to display the data and the sliding, adjustable window.
    It uses a MODIFIED copy of the standard PLOT WAVEFORM vi, modified only to bring out the SCALE FACTORS so I can use them later.
    It's not very pretty.
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
    Attachments:
    Test Graphing Picture.vi ‏83 KB
    Modified Plot Waveform.vi ‏81 KB

  • Turn off hiding/sliding windows

    Snow Leopard has this feature where if I accidentally push a Safari window too far up, it slides off & I have to click on the Safari icon on the Dock, or switch under the Window menu to get my other window back on the screen. I don't even understand how I accidentally do this - I can't seem to reproduce the effect now. But in any case, I'm sure it's useful for some people, but it's annoying as **** to me. So...
How can I turn this feature off? I just want all my windows stacked on top of each other like they always used to be.
    Thank you,
    Mr. Old Fashioned Man

    In that case, I don't understand what's going on. Part of it is imprecise language. Can you better describe what you're doing and what you're seeing? I don't understand what you mean by "pushing up" a Safari window... if I drag a window to the top of the screen, it just goes up to directly beneath the menu bar. I can also think of several possible effects that could be described as "sliding off." What exactly is the window doing?
