Group by vs partition by

Hi,
I am new to my project and need some help.
My requirement is as follows:
In the EMP table I need to group all the employees who belong to one department. I used GROUP BY to fetch the results, but my lead asked me to use the PARTITION BY clause instead. So I am not sure: will I get a different result set by using PARTITION BY?
If yes, how can I use it?
Thanks,

Grouping reduces the number of result rows; analytic (window) functions do not. Here is a small EMP example:
select ENAME
     , job
     , count(*) over (partition by job) job_count
  from emp
order by ename;
ENAME      JOB        JOB_COUNT
ADAMS      CLERK              4
ALLEN      SALESMAN           4
BLAKE      MANAGER            3
CLARK      MANAGER            3
FORD       ANALYST            2
JAMES      CLERK              4
JONES      MANAGER            3
KING       PRESIDENT          1
MARTIN     SALESMAN           4
MILLER     CLERK              4
SCOTT      ANALYST            2
SMITH      CLERK              4
TURNER     SALESMAN           4
WARD       SALESMAN           4
With GROUP BY you would get only one row per job. You could get the same result as the analytic query by joining back to a grouped subquery:
select e1.ename
     , e1.job
     , e2.job_count
  from emp e1
     , (select JOB
             , count(*) job_count
          from emp
         group by job) e2
where e1.job = e2.job
order by e1.ename;
ENAME      JOB        JOB_COUNT
ADAMS      CLERK              4
ALLEN      SALESMAN           4
BLAKE      MANAGER            3
CLARK      MANAGER            3
FORD       ANALYST            2
JAMES      CLERK              4
JONES      MANAGER            3
KING       PRESIDENT          1
MARTIN     SALESMAN           4
MILLER     CLERK              4
SCOTT      ANALYST            2
SMITH      CLERK              4
TURNER     SALESMAN           4
WARD       SALESMAN           4
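For comparison, here is a minimal sketch of the plain GROUP BY version against the same standard EMP demo table; it collapses the 14 employee rows down to one row per job:
select job
     , count(*) job_count
  from emp
 group by job
 order by job;
JOB        JOB_COUNT
ANALYST            2
CLERK              4
MANAGER            3
PRESIDENT          1
SALESMAN           4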

Similar Messages

  • Logical Volume Group and Logical Partition not matching up in free space

    I was dual booting Windows 7 and Mountain Lion. Through Disk Utility, I removed the Windows 7 partition and expanded the HFS+ partition to encompass the entire hard drive. However, the Logical Volume Group does not think that I have that extra free space. The main problem is that I cannot resize my partition. I want to dual boot Ubuntu with this. Any ideas? Any help is appreciated. I will post some screenshots with the details. Furthermore, here are some terminal commands I ran:
    /dev/disk0
    #: TYPE NAME SIZE IDENTIFIER
    0: GUID_partition_scheme *250.1 GB disk0
    1: EFI 209.7 MB disk0s1
    2: Apple_CoreStorage 249.2 GB disk0s2
    3: Apple_Boot Recovery HD 650.0 MB disk0s3
    /dev/disk1
    #: TYPE NAME SIZE IDENTIFIER
    0: Apple_HFS MAC OS X *248.9 GB disk1
    Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on
    /dev/disk1 243031288 153028624 89746664 64% 38321154 22436666 63% /
    devfs 189 189 0 100% 655 0 100% /dev
    map -hosts 0 0 0 100% 0 0 100% /net
    map auto_home 0 0 0 100% 0 0 100% /home
    CoreStorage logical volume groups (1 found)
    |
    +-- Logical Volume Group 52A4D825-B134-4C33-AC8B-39A02BA30522
    =========================================================
    Name: MAC OS X
    Size: 249199587328 B (249.2 GB)
    Free Space: 16777216 B (16.8 MB)
    |
    +-< Physical Volume 6D7A0A36-1D86-4A30-8EB5-755D375369D9
    | ----------------------------------------------------
    | Index: 0
    | Disk: disk0s2
    | Status: Online
    | Size: 249199587328 B (249.2 GB)
    |
    +-> Logical Volume Family FDC4568F-4E25-46AB-885A-CBA6287309B6
    Encryption Status: Unlocked
    Encryption Type: None
    Conversion Status: Converting
    Conversion Direction: backward
    Has Encrypted Extents: Yes
    Fully Secure: No
    Passphrase Required: No
    |
    +-> Logical Volume BB2662B7-58F3-401C-B889-F264D79E68B4
    Disk: disk1
    Status: Online
    Size (Total): 248864038912 B (248.9 GB)
    Size (Converted): 130367356928 B (130.4 GB)
    Revertible: Yes (unlock and decryption required)
    LV Name: MAC OS X
    Volume Name: MAC OS X
    Content Hint: Apple_HFS

    Here is another try via the command line:
    dhcp-10-201-238-248:~ KyleWLawrence$ diskutil coreStorage resizeVolume BB2662B7-58F3-401C-B889-F264D79E68B4 210g
    Started CoreStorage operation
    Checking file system
    Performing live verification
    Checking Journaled HFS Plus volume
    Checking extents overflow file
    Checking catalog file
    Incorrect block count for file 2012.12.11.asl
    (It should be 390 instead of 195)
    Checking multi-linked files
    Checking catalog hierarchy
    Checking extended attributes file
    Checking volume bitmap
    Checking volume information
    Invalid volume free block count
    (It should be 21713521 instead of 21713716)
    The volume MAC OS X was found corrupt and needs to be repaired
    Error: -69845: File system verify or repair failed

  • Failed to revert logical volume group while merging partition

    Hi All,
    Previously, on my MacBook Pro, I partitioned my disk for Win7 as a dual-boot setup and left only 120 GB for OS X; now I want to delete Win7 and return the disk space to OS X. Currently my OS is Yosemite.
    I have already deleted the Win7 disk space and merged it back into disk0s3. However, I have a problem merging disk0s2 with disk0s3.
    I was told it is because I need to revert my Logical Volume Group. I am blocked at reverting it.
    This is my disk info:
    rescomp-14-251133:~ rico$ diskutil list; diskutil  cs list
    /dev/disk0
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *500.1 GB   disk0
       1:                        EFI EFI                     209.7 MB   disk0s1
       2:          Apple_CoreStorage                         119.3 GB   disk0s2
       3:                  Apple_HFS Recovery HD             380.6 GB   disk0s3
    /dev/disk1
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                  Apple_HFS Macintosh HD           *119.0 GB   disk1
                                     Logical Volume on disk0s2
                                     CC457129-6FE9-41A0-B0D2-F547F21A7555
                                     Unencrypted
    CoreStorage logical volume groups (1 found)
    |
    +-- Logical Volume Group F454017F-C531-43BA-B270-E2058E05BFF4
        =========================================================
        Name:         Macintosh HD
        Status:       Online
        Size:         119290187776 B (119.3 GB)
        Free Space:   4096 B (4.1 KB)
        |
        +-< Physical Volume AC7A2748-0DA1-49D6-B50C-30348838760E
        |   ----------------------------------------------------
        |   Index:    0
        |   Disk:     disk0s2
        |   Status:   Online
        |   Size:     119290187776 B (119.3 GB)
        |
        +-> Logical Volume Family 4B7E6277-69BC-475A-BBB7-7A94D6434D9E
            Encryption Status:       Unlocked
            Encryption Type:         AES-XTS
            Conversion Status:       Converting
            Conversion Direction:    -none-
            Has Encrypted Extents:   Yes
            Fully Secure:            No
            Passphrase Required:     No
            |
            +-> Logical Volume CC457129-6FE9-41A0-B0D2-F547F21A7555
                Disk:                  disk1
                Status:                Online
                Size (Total):          118954639360 B (119.0 GB)
                Conversion Progress:   -none-
                Revertible:            Yes (unlock and decryption required)
                LV Name:               Macintosh HD
                Volume Name:           Macintosh HD
                Content Hint:          Apple_HFS
    when I type unlock:
    rescomp-14-251133:~ rico$ diskutil corestorage unlockVolume CC457129-6FE9-41A0-B0D2-F547F21A7555 -stdinpassphrase
    CC457129-6FE9-41A0-B0D2-F547F21A7555 is already unlocked and is attached as disk1
    It is already unlocked
    then I tried to revert it:
    rescomp-14-251133:~ rico$ diskutil coreStorage revert CC457129-6FE9-41A0-B0D2-F547F21A7555
    Passphrase:
    Started CoreStorage operation on disk1 Macintosh HD
    Error: -69750: Unable to modify a FileVault context
    Does anyone know how I can revert it, and then merge disk0s2 and disk0s3?


  • How Will Yosemite Logical Volume Group Affect Two-Partition Drive?

    After installing Yosemite, my Macintosh HD became a Logical Volume Group. Now, in Disk Utility, both the Drive and the Volume have the same name, Macintosh HD. I want to use one of two partitions on an extra internal drive in my Mac Pro (Mid 2010) as a bootable clone of my Yosemite startup drive, using SuperDuper!. When cloning is complete, will my spare drive become a Logical Volume Group with a renamed drive? Will that affect the other partition, which is a bootable Mavericks clone?

    Thanks for the tip. I see it's a common problem. But I'm stuck on a problem that cannot be solved with this method because I don't have a revertible logical volume.
    So I cannot install the system now. There's no visible disk any more.

  • Difference in number of records in GROUP BY and PARTITION BY

    Hi Experts
    If I run the following query I get 997 records using GROUP BY.
    SELECT c.ins_no, b.pd_date, a.project_id, a.tech_no
      FROM mis.tranche_balance a,
           FMSRPT.fund_reporting_period b,
           ods.proj_info_lookup c,
           ods.institution d
     WHERE a.su_date = b.pd_date
       AND a.project_id = c.project_id
       AND c.ins_no = d.ins_no
       AND d.sif_code LIKE 'P%'
       AND d.sif_code <> 'P-DA'
       AND a.date_stamp >= '01-JAN-2011'
       AND pd_date = '31-MAR-2011'
     GROUP BY c.ins_no, b.pd_date, a.project_id, a.tech_no;
    I want to show the extra columns a.date_stamp and a.su_date
    in the output, so I used PARTITION BY in the second query, but I got 1079 records.
    SELECT c.ins_no, b.pd_date, a.date_stamp, a.su_date, a.project_id, a.tech_no,
           COUNT(*) OVER (PARTITION BY c.ins_no, b.pd_date, a.project_id, a.tech_no) c
      FROM mis.tranche_balance a,
           FMSRPT.fund_reporting_period b,
           ods.proj_info_lookup c,
           ods.institution d
     WHERE a.su_date = b.pd_date
       AND a.project_id = c.project_id
       AND c.ins_no = d.ins_no
       AND d.sif_code LIKE 'P%'
       AND d.sif_code <> 'P-DA'
       AND a.date_stamp >= '01-JAN-2011'
       AND pd_date = '31-MAR-2011'
    Please help me understand why I got 1079 records.
    And also please help me show the two extra columns in the output which are not used in the
    GROUP BY clause.
    Thanks in advance.

    Hi,
    user9077483 wrote:
    Hi Experts
    If I run the following query I got 997 records by using GROUP BY. ...
    Let's call this "Query 1", and the number of rows it returns "N1".
    The results tell you that there are 997 distinct combinations of the GROUP BY columns (c.ins_no, b.pd_date, a.project_id, a.tech_no).
    I want to show the extra columns a.date_stamp and a.su_date
    in the output, so I used PARTITION BY in the second query, but I got 1079 records. ...
    Let's call the query without the GROUP BY "Query 2", and the number of rows it returns "N2".
    Please help me why I got 1079 records.
    Because there are 1079 rows that meet all the conditions in the WHERE clause. Query 2 has nothing to do with distinct values in any columns. You would expect N2 to be at least as high as N1, so it's not surprising that N2 is higher than N1.
    And also please help me how to show the two extra columns in the output which are not used in the
    GROUP BY clause.
    Doesn't Query 2 show those two columns already? If Query 2 is not producing the results you want, then what results do you want?
    Post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables involved, and the results you want from that data.
    Explain, using specific examples, how you get those results from that data. Point out a couple of places where Query 2 is not doing what you want.
    Always say which version of Oracle you're using.
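    If the goal is to keep the 997 GROUP BY rows and still display something from the extra columns, one common approach is to aggregate those columns as well, for example with MIN or MAX per group. A minimal sketch on the standard EMP demo table (the original tables aren't available here), with hiredate standing in for the extra column:
    SELECT deptno
         , job
         , COUNT(*)      AS cnt
         , MIN(hiredate) AS first_hiredate  -- extra column, aggregated per group
         , MAX(hiredate) AS last_hiredate
      FROM emp
     GROUP BY deptno, job;
    This keeps exactly one row per group while still showing columns that are not themselves GROUP BY expressions.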

  • Group by using partition getting error

    select code,
           code_value_text ext_contact_organization,
           regulatory_group_name,
           sum(spd.case_id) over (partition by code, code_value_text, regulatory_group_name)
      from AERSP.code_list_dtls cdl,
           AERSP.ec_contact_log cntct,
           AERSP.de_suspect_drugs spd
     where cntct.ext_contact_organization = cdl.code
       and code_list_name = 'SP_ORGANIZATION'
       and spd.case_id = cntct.case_id
    I used the above SQL to group based on the columns I wanted, but when I run it I get the error INVALID NUMBER.
    Why am I getting this error? Please help.
    Thanks in advance
    Murthy

    select code,
           code_value_text ext_contact_organization,
           regulatory_group_name,
           sum(spd.case_id) over (partition by code, code_value_text, regulatory_group_name)
      from AERSP.code_list_dtls cdl,
           AERSP.ec_contact_log cntct,
           AERSP.de_suspect_drugs spd
     where cntct.ext_contact_organization = cdl.code
       and code_list_name = 'SP_ORGANIZATION'
       and spd.case_id = cntct.case_id
    I used the above SQL to group based on the columns I wanted, but when I run it I get the error INVALID NUMBER.
    Why am I getting this error? Please help.
    Thanks in advance
    Murthy
    It's difficult to say 100%, because we know nothing about your data.
    But I can suppose that the problem is in sum(spd.case_id).
    Probably you have some non-numeric data in the column case_id.
    Regards
    Dmytro
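    Dmytro's guess is easy to verify. A minimal sketch, assuming case_id is stored as a VARCHAR2 in AERSP.de_suspect_drugs: list the values that are not purely numeric, since any such value makes SUM(spd.case_id) fail with ORA-01722 (invalid number).
    select spd.case_id
      from AERSP.de_suspect_drugs spd
     where not regexp_like(spd.case_id, '^[0-9]+$');
    Also worth asking whether SUM of an identifier is really the intent; COUNT(spd.case_id) over the same partition would avoid the conversion entirely.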

  • Group by Vs Partition By Clause

    Hello,
    Can you please help me in resolving the below issue,
    I have explained the scenario below with dummy table,
    CREATE TABLE emp (empno NUMBER(12), ename VARCHAR2(10), deptno NUMBER(12));
    INSERT INTO emp (empno, ename, deptno) VALUES (1, 'A', 10);
    INSERT INTO emp (empno, ename, deptno) VALUES (2, 'B', 10);
    INSERT INTO emp (empno, ename, deptno) VALUES (3, 'C', 20);
    INSERT INTO emp (empno, ename, deptno) VALUES (4, 'D', 20);
    INSERT INTO emp (empno, ename, deptno) VALUES (5, 'E', 30);
    COMMIT ;
    SELECT DISTINCT deptno, SUM (empno) / SUM (empno) OVER (PARTITION BY deptno)
    FROM emp
    GROUP BY deptno;
    ORA-00979: not a GROUP BY expression
    Earlier I had a query like:
    SELECT DISTINCT deptno, SUM (empno) OVER (PARTITION BY deptno,empno) / SUM (empno) OVER (PARTITION BY deptno)
    FROM emp;
    which executed successfully but gave the wrong result.
    Please guide me how to resolve this issue,
    Thanks,
    Santhosh

    Hi,
    santhosh.shivaram wrote:
    Hello all, sorry for providing limited data. I am now depicting the actual data set, the current select query which is giving an error, and the desired output. Please let me know if you need further information on this.
    /* Formatted on 2012/09/14 08:00 (Formatter Plus v4.8.8) */ ...
    If you're going to the trouble of formatting the data, post it inside \ tags, so that this site won't remove the formatting. See the forum FAQ {message:id=9360002}
    Current query:
    SELECT rep_date, cnty, loc, component_code,
           SUM (volume) / SUM (volume) OVER (PARTITION BY rep_date, cnty, loc)
      FROM table1
     GROUP BY rep_date, cnty, loc, component_code;
    When I execute this query I am getting the "ORA-00979: not a GROUP BY expression" error.
    This is the same problem you had before, and was explained in the first answer {message:id=10573091}. Don't you read the replies you get?
    SUM (volume) OVER (PARTITION BY rep_date, cnty, loc)
    can't be used in this GROUP BY query, because it depends on volume, and volume isn't one of the GROUP BY expressions.
    My desired output:
    Formatting is especially important for the output. Which do you think is easier to read and understand: what you posted:
    Rep_Date     Cnty     Loc     Component_Code     QTY_VOL
    9/12/2012     2     1     CONTRACT      -0.019000516
    9/12/2012     2     1     CONTRACT      -0.019000516
    9/12/2012     2     1     NON-CONTRACT      -0.893525112
    9/12/2012     2     1     NON-CONTRACT      -0.89322
    9/12/2012     2     1     CONTRACT-INDEX     1.912525629
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     4     CONTRACT     0.015197825
    9/12/2012     2     4     CONTRACT     0.015198
    9/12/2012     2     4     NON-CONTRACT     0.984802175
    9/12/2012     2     4     NON-CONTRACT     0.984802
    or this?
    Rep_Date     Cnty     Loc     Component_Code     QTY_VOL
    9/12/2012     2     1     CONTRACT -0.019000516
    9/12/2012     2     1     CONTRACT -0.019000516
    9/12/2012     2     1     NON-CONTRACT      -0.893525112
    9/12/2012     2     1     NON-CONTRACT      -0.89322
    9/12/2012     2     1     CONTRACT-INDEX     1.912525629
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     4     CONTRACT     0.015197825
    9/12/2012     2     4     CONTRACT     0.015198
    9/12/2012     2     4     NON-CONTRACT     0.984802175
    9/12/2012     2     4     NON-CONTRACT     0.984802
    Which do you think will lead to more answers?  Quicker answers?  Better answers?
    Please let me know if you need any more information.
    Explain the results.
    How do you compute the qty_vol column?  Give a couple of very specific examples, showing step by step how you calculate the values given from the sample data.
    What does each row of the output represent? Your query says
    GROUP BY rep_date, cnty, loc, component_code;
    which means the result set will have 1 row for each distinct combination of rep_date, cnty, loc and component_code, but your desired output has at least 2 rows for every distinct combination of them, and in one case you want 3 rows with the same rep_date, cnty, loc and component_code. How do you decide when you want 2 rows, and when you need 3? Will there be occasions when you need 4 rows, or 5, or 1?
    All the rows with the same rep_date, cnty, loc and component_code have *nearly* the same qty_vol, but usually not quite the same. Sometimes qty_vol is rounded; sometimes it's changed slightly, but not just rounded (-0.893525112 gets converted to -0.89322). How do you decide when it's rounded, when it remains the same, and when it's changed to a completely different number? When it's rounded, how do you decide how many digits to round it to?
    Edited by: Frank Kulash on Sep 14, 2012 12:44 AM
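    One common way around the ORA-00979 in this thread is to compute the analytic denominator in an inline view first, and only then apply the GROUP BY. A minimal sketch, reusing the names from the post (table1, rep_date, cnty, loc, component_code, volume) and assuming the intent is each group's share of the total volume per rep_date/cnty/loc:
    SELECT rep_date, cnty, loc, component_code,
           SUM (volume) / MAX (total_volume) AS qty_vol
      FROM (SELECT rep_date, cnty, loc, component_code, volume,
                   SUM (volume) OVER (PARTITION BY rep_date, cnty, loc) AS total_volume
              FROM table1)
     GROUP BY rep_date, cnty, loc, component_code;
    Here total_volume is constant within each rep_date/cnty/loc partition, so MAX simply picks up that constant and the outer query remains a valid GROUP BY.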

  • Partition Name in Select and Group-By

    Is there a function that returns the partition name a row is stored in? I'd like to use it in a SELECT statement, if it exists.
    The goal is to get row counts per partition to determine skew on a hash-based partition (otherwise, I'd just select and group-by the partition key). I want to do something like this:
    SELECT PARTITION_NAME, COUNT(*) as PER_PART_COUNT
      FROM SOMETABLE
    GROUP BY PARTITION_NAME;
    Obviously, I would replace the string "PARTITION_NAME" with whatever function returns that value.
    NOTE: Before the forum sharks jump all over me, I did search the documentation and forums, but I came up empty handed.

    Hi,
    This is hokey - but you can use the undocumented function "tbl$or$idx$part$num" like so:
    SELECT   atp.partition_name
           , COUNT (*) AS count_rows
        FROM all_tab_partitions atp
           , (SELECT tbl$or$idx$part$num ("SCOTT"."TAB1"
                                        , 0
                                        , 1
                                        , 0
                                        , partition_col1 -- your 1st partition key column
                                        , partition_col2 -- your 2nd partition key column
                                         ) AS part_num
                   , 'TAB1' AS table_name
                   , 'SCOTT' AS owner
                FROM scott.tab1) pt
       WHERE atp.table_name = pt.table_name
         AND atp.table_owner = pt.owner
         AND atp.partition_position = pt.part_num
    GROUP BY atp.partition_name;
    Note - this probably won't perform well at all... but it could be an option...
    Message was edited by:
    PDaddy
    ... On 2nd thought - don't do it - it seems it is pretty unstable - brought my session to a halt a couple of times... Forget I mentioned it :)
    ... On 3rd thought - if you add ROWNUM - it works fine - seems you need to materialize the result set in the inner query - like so:
    CREATE TABLE part_tab (part_key int
                         , a int)
    partition by range (part_key)
    (partition p1 values less than (100)
    ,partition p2 values less than (200)
    ,partition p3 values less than (maxvalue)
    );
    insert into part_tab
    select rownum
         , rownum
    FROM all_objects
    where rownum <= 1000;
    SELECT atp.partition_name
         , COUNT(*) AS count_rows
    FROM all_tab_partitions atp
       , (SELECT TBL$OR$IDX$PART$NUM("PART_TAB", 0, 1, 0, part_key) AS part_num
               , 'PART_TAB' AS table_name
               , user AS owner
               , ROWNUM AS rn -- Materialize the result set by adding ROWNUM...
          FROM part_tab) pt
    WHERE atp.table_name = pt.table_name
      AND atp.table_owner = pt.owner
      AND atp.partition_position = pt.part_num
    GROUP BY atp.partition_name;
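    A documented alternative to the undocumented tbl$or$idx$part$num function is to map each row's physical segment back to its partition with DBMS_ROWID. A minimal sketch, assuming the SCOTT.TAB1 table mentioned above (any heap-organized partitioned table works the same way):
    SELECT o.subobject_name AS partition_name
         , COUNT(*)         AS per_part_count
      FROM scott.tab1 t
         , all_objects o
     WHERE o.owner = 'SCOTT'
       AND o.object_name = 'TAB1'
       AND o.data_object_id = dbms_rowid.rowid_object(t.rowid)
     GROUP BY o.subobject_name;
    Because it only looks at each row's ROWID, this works for hash partitioning as well as range, which is exactly the skew check described in the question.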

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode; set the DataDrive and LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 file groups; an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (0,
    15,
    30,
    45,
    60);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data but it can be dropped. Although the system views are reporting the data in File Group 2, it still physically resides in File Group 3 and isn’t moved until the index is rebuilt. The RANGE RIGHT function means
    the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3 where it already resided, no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions)
    on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (-1,
    14,
    29,
    44,
    59);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a ‘Sliding Window’ if the same file group is used for all partitions; when partitions on separate file groups are created and dropped, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild
    might be an expensive operation. I’m not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE assuming a typical ascending partitioning key, and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing, I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution. That way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support

  • Partitioning Advice on PRIMARY FILE GROUP

    Hi,
    I wonder if you can help. I would like some advice on partitioning within the PRIMARY file group. Basically I have a lot of experience with partitioning, and in the past I have followed the best practice in documents such as
    Partitioned Table and Index Strategies Using SQL Server 2008   and the
    Analysis Services Performance Guide   so in summary when partitioning in the past I have followed best practice, e.g. have one file group per partition and have your partition key as a component within the primary key of the table you are
    partitioning.
    However, following best practice in this way can require a lot of maintenance, and the system I am performance tuning at the moment has a limited life span.
    So in this context I wonder if there are any advantages of basing partitioning on a PRIMARY file group?
    Kind Regards,
    Kieran.
    Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

    Hi Erland,
    Many thanks for your advice which has helped me set a wider context.
    I'm sorry, I think this question is drifting onto Analysis Services even though I started out asking a question related to Partitioning Advice on PRIMARY FILE GROUP.
    Your answer increases my empathy with people who are skeptical about the performance advantages of partitioning very large tables even when these very large tables are the data source of a cube.
    Is it possible for a question in these exceptional circumstances to be associated with 2 forums, i.e Transact SQL and Analysis Services? - since the scope of this question covers both areas.
    I have had a lot of success in dramatically speeding up the processing of a cube with multi-billion record fact tables in the underlying data warehouse for previous clients, where the cube had the partition key in the WHERE clause
    of the cube partition, which was the same as the partition key in the underlying fact table, and where each fact table had a different physical file group for each partition. The cube, data warehouse, transaction logs, and tempdbs were on separate spindles.
    However, the partitions within the cube and the partitions within the data warehouse were not on separate spindles for each of the partitions.
    A proven advantage of implementing the above design strategy is that it increases parallelism. Please see page 86 of the best practice document on
    Analysis Services 2008 R2 Performance Guide.
    So getting back to my question in the above context: If you slice on a particular partition with a particular partition key value, e.g. PeriodYYYYMM = 201408 on a relational database table, will it make a difference to performance if this table is partitioned
    by the PRIMARY file group, or should the partitions have a separate FILE GROUP for each partition?
    Kind Regards,
    Kieran.
    Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

  • Installing OS X Yosemite has changed my Macintosh HD type to Logical Volume Group

    I have just installed OS X 10.10 Yosemite on my late 2013 MacBook Pro with Retina Display,
    The installation has worked perfectly and my computer is now running fine. However after opening disk utility I have noticed where previously my HD would be named "Apple SSD...." and the partition underneath would be named Macintosh HD, both are now the same. Also, previously the HD was GUID type and now they have changed to Logical Volume Group and Logical Partition.
    I installed the software by creating a bootable memory stick with OS X 10.10 and completed a clean install by completely erasing the HD and then installing the new software. After the HD had been erased, Disk Utility still showed the HD as "Apple SSD...." with the partition "Macintosh HD" below and with Type: GUID. It wasn't until the installation was complete, I had set up the computer, and I had gone back into Disk Utility that I realised it had changed so that both labels were "Macintosh HD" and the format type was Logical Volume Group and Logical Partition.
    I have no clue if this is normal and was a planned change (i.e. the software was designed to change it) or if my installation has gone horribly wrong. Can the HD format type be changed back, or does Yosemite have to run on HDs formatted as a Logical Volume Group with a Logical Partition?
    Any help would be appreciated as I'm not an expert and I have tried googling for hours and can't find anything helpful/useful or that will work.

    It's a designed change during the Yosemite install process. John Siracusa's excellent review of Yosemite over at Ars Technica provides some commentary on the change, specifically this page: http://arstechnica.com/apple/2014/10/os-x-10-10/2/

  • Logical Volume Group.

    My 1TB SSD main drive volume on my MBP Retina has changed to a Logical Volume Group, and the partition is named Logical Partition.  It works fine, but I was wondering why it has changed from Mac OS Extended (Journaled).  If I want to partition the drive, the option is grayed out.  I'm not pleased that I can't control my own drive.  Any answers?

    FileVault is not activated, although it was at one point.  I deleted the partition and...
    I believe that may have caused the problem when I attempted to load the operating system on a new partition from the drive.  I deleted it and thought I reformatted it to Mac OS Extended, but I had major issues because it was locked at one point.  Just recently, I restored to a TM backup which should have formatted the drive correctly but this is what I ended up with.  Like I said, the option is grayed out now.
    Gary

  • Need help in optimizing the query with joins and group by clause

    I am having a problem executing the query below; it is taking a lot of time. To simplify, I have added the two tables: FILE_STATUS stores the file load details, and COMM is the actual business commission table showing which records were successfully processed and which were transmitted to the other system. Records with status = 'T' have been transmitted to the other system and transactions with 'P' are pending.
    CREATE TABLE FILE_STATUS
    (FILE_ID VARCHAR2(14),
    FILE_NAME VARCHAR2(20),
    CARR_CD VARCHAR2(5),
    TOT_REC NUMBER,
    TOT_SUCC NUMBER);
    CREATE TABLE COMM
    (SRC_FILE_ID VARCHAR2(14),
    REC_ID NUMBER,
    STATUS CHAR(1));
    INSERT INTO FILE_STATUS VALUES ('12345678', 'CM_LIBM.TXT', 'LIBM', 5, 4);
    INSERT INTO FILE_STATUS VALUES ('12345679', 'CM_HIPNT.TXT', 'HIPNT', 4, 0);
    INSERT INTO COMM VALUES ('12345678', 1, 'T');
    INSERT INTO COMM VALUES ('12345678', 3, 'T');
    INSERT INTO COMM VALUES ('12345678', 4, 'P');
    INSERT INTO COMM VALUES ('12345678', 5, 'P');
    COMMIT;
    Here is the query that I wrote to give me the details of the file that has been loaded into the system. It reads the file status and commission tables to show the file name, total records loaded, total records successfully loaded into the commission table, and the number of records that have finally been transmitted (status = 'T') to other systems.
    SELECT
        FS.CARR_CD
        ,FS.FILE_NAME
        ,FS.FILE_ID
        ,FS.TOT_REC
        ,FS.TOT_SUCC
        ,NVL(C.TOT_TRANS, 0) TOT_TRANS
    FROM FILE_STATUS FS
    LEFT JOIN (
        SELECT SRC_FILE_ID, COUNT(*) TOT_TRANS
        FROM COMM
        WHERE STATUS = 'T'
        GROUP BY SRC_FILE_ID
    ) C ON C.SRC_FILE_ID = FS.FILE_ID
    WHERE FILE_ID = '12345678';
    In production this query has more joins and is taking a lot of time to process. The main culprit for me is the join on the COMM table to get the count of transmitted transactions. Can you please give me tips to optimize this query so it returns results faster? Do I need to remove the GROUP BY and use PARTITION BY, or something else? Please help!

    I get 2 rows if I use my query with your new criteria. Did you commit the records if you are querying from a second connection? Did you remove the criterion for file_id?
    select carr_cd, file_name, file_id, tot_rec, tot_succ, tot_trans
      from (select fs.carr_cd,
                   fs.file_name,
                   fs.file_id,
                   fs.tot_rec,
                   fs.tot_succ,
                   count(case
                            when c.status = 'T' then
                             1
                            else
                             null
                          end) over(partition by c.src_file_id) tot_trans,
                   row_number() over(partition by c.src_file_id order by null) rn
              from file_status fs
              left join comm c
                on c.src_file_id = fs.file_id
             where carr_cd = 'LIBM')
    where rn = 1;
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS
    LIBM    CM_LIBM.TXT          12345678                5          4          2
    LIBM    CM_LIBM.TXT          12345677               10          0          0
    Using RANK could potentially return multiple rows, though your data may prevent this. ROW_NUMBER will always prevent duplicates. The ordering of the analytic function is irrelevant in your query if you use ROW_NUMBER. You can remove the outermost query and inspect the data returned by the inner query:
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           count(case
                    when c.status = 'T' then
                     1
                    else
                     null
                  end) over(partition by c.src_file_id) tot_trans,
           row_number() over(partition by c.src_file_id order by null) rn
    from file_status fs
    left join comm c
    on c.src_file_id = fs.file_id
    where carr_cd = 'LIBM';
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS         RN
    LIBM    CM_LIBM.TXT          12345678                5          4          2          1
    LIBM    CM_LIBM.TXT          12345678                5          4          2          2
    LIBM    CM_LIBM.TXT          12345678                5          4          2          3
    LIBM    CM_LIBM.TXT          12345678                5          4          2          4
    LIBM    CM_LIBM.TXT          12345677               10          0          0          1
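    For comparison, here is a minimal alternative sketch that avoids the analytic functions entirely, assuming only the per-file transmitted count is needed (it uses the same sample tables as above; a correlated scalar subquery replaces the joined aggregate):
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           -- count only transmitted records for this file
           (select count(*)
              from comm c
             where c.src_file_id = fs.file_id
               and c.status = 'T') tot_trans
      from file_status fs
     where fs.file_id = '12345678';
    Whether this or the analytic version performs better depends on the real data volumes and indexing; an index on COMM (SRC_FILE_ID, STATUS) would likely help either approach.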

  • ORACLE 10G group by sorting Suggestion rqd

    Hi friends,
    We recently migrated from Oracle 9i to Oracle 10g. We have been told that GROUP BY will not sort the records in Oracle 10g, so we were asked to add an ORDER BY clause wherever applicable in the package.
    But after the migration I inserted 100 records into the 10g database and tested, and I am getting the records correctly sorted by the GROUP BY column. Can anyone explain why that is? I have heard that the sorting behaviour differs between Oracle 10g versions. I am using Oracle Database 10g Enterprise Edition Release 10.2.0.4.0. If it sorts automatically, then there is no need to change the package for sorting, right?
    Please give me your valuable suggestions.
    S

    And here's a simple example on 10.2.0.4 that clearly shows the results are not sorted.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    SQL> select object_type, count(*) from all_objects group by object_type;
    OBJECT_TYPE                     COUNT(*)
    CONSUMER GROUP                         2
    INDEX PARTITION                      718
    TABLE SUBPARTITION                    14
    SEQUENCE                             228
    SCHEDULE                               1
    TABLE PARTITION                      301
    PROCEDURE                             21
    ...
    Like everyone else has already said, Oracle does not guarantee the order from a group by clause with no order by.
    If you need the results in order you must use an order by clause.
    Pretty simple really.
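    A minimal illustration of that fix, using the same query as above with an explicit ORDER BY:
    select object_type, count(*)
      from all_objects
     group by object_type
     order by object_type;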

  • Top space consuming Tables (which considers LOBs and Partitions )

    11.2.0.3/Solaris
    When you run a query like the one below, which lists the top space-consuming segments, it lists the LOB segments and the partitions of a table separately:
    select segment_name, bytes/1024/1024/1024 gb, segment_type from dba_segments where owner = 'MCS_WM_USR'  order by gb desc;
    SEGMENT_NAME                                                                             GB SEGMENT_TYPE
    WMERROR                                                                          14.4287109 TABLE
    SYS_LOB0000737548C00014$$                                                        9.93554687 LOBSEGMENT
    WMSERVICE                                                                        6.00488281 TABLE
    WMSESSION                                                                        5.11621093 TABLE
    SYS_LOB0000737571C00017$$                                                        4.61914062 LOBSEGMENT
    WMCUSTOMPROCESSDATA                                                               2.8046875 TABLE
    SYS_C00511081                                                                    1.25097656 INDEX
    IDX_SVC_COM1                                                                     0.95501708 INDEX
    IDX_SESS_AUDTM                                                                   0.92291259 INDEX
    .
    Does anyone have a query which lists the actual table size, calculated by summing up all LOBs and partitions of that table?
    For example:
    If table EMP has 3 LOB columns, in user_segments it will appear something like:
    SEGMENT_NAME                                                                          GB     SEGMENT_TYPE
    EMP                                                                               3           TABLE
    SYS_LOB0000737548C00014$$                                                         7           LOBSEGMENT
    SYS_LOB0000451978C00014$$                                                         2           LOBSEGMENT
    SYS_LOB0000875128C00014$$                                                         5           LOBSEGMENT
    The total size of the EMP table is 3 + 7 + 2 + 5 = 17 GB.
    I am looking for a query which will do a lookup in DBA_LOBS and DBA_PARTITIONS and add up the space for each table.
    So my required output would look like:
    TABLE_NAME          Size
    EMP               17g
    DEPT                1g
    WMERRORS          104g
    .

    Hi Garry,
    the segment_name in DBA_SEGMENTS can be used to group all table partitions of a partitioned table. So we only have to join DBA_LOBS to DBA_SEGMENTS and group the result rows:
    with
    basedata as (
    select owner
         , segment_name
         , segment_type
         , round(sum(bytes)/1024/1024/1024) gb
         , sum(bytes) bytes
         , count(*) segment_count
      from dba_segments s
    group by owner, segment_name, segment_type
    ),
    lobs as (
    select owner
         , table_name
         , segment_name
      from dba_lobs
    ),
    all_segs as (
    select coalesce(lobs.table_name, basedata.segment_name) table_name
         , basedata.*
      from basedata
      left outer join
           lobs
        on (basedata.segment_name = lobs.segment_name
            and basedata.owner = lobs.owner)
    )
    select table_name
         , sum(bytes) bytes
         , sum(gb) gb
      from all_segs
    group by table_name
    having sum(gb) > .1;
    Regards
    Martin
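    A small usage note: since the original request was for the top space consumers, the final SELECT of Martin's WITH query can also be ordered by total size, e.g. (replacing the last SELECT above):
    select table_name
         , sum(bytes) bytes
         , sum(gb) gb
      from all_segs
    group by table_name
    having sum(gb) > .1
    order by sum(bytes) desc;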
