Index File group on same drive as data files

I've just found a file group used for indexes on the same drive as the data files.
Am I correct in saying there is little benefit to this? Should the index file group be on its own spindle?
Mr Shaw... One day I might know a thing or two about SQL Server!

There will definitely be a performance gain, provided you are querying for related data that references the indexes on those index filegroups.
It helps with parallel processing: having data and indexes under multiple disk heads lets them be read in parallel. For more information, refer to the link below:
https://technet.microsoft.com/en-us/library/ms190433%28v=sql.105%29.aspx
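As a rough illustration, here is a minimal T-SQL sketch of placing a nonclustered index on its own filegroup whose file sits on a separate drive. All names and paths (SalesDB, IndexFG, F:\SQLIndexes, dbo.Orders, IX_Orders_CustomerID) are hypothetical, not from this thread.

-- Hypothetical names and paths, for illustration only
ALTER DATABASE SalesDB ADD FILEGROUP IndexFG;
ALTER DATABASE SalesDB ADD FILE
( NAME = SalesDB_ix1,
  FILENAME = 'F:\SQLIndexes\SalesDB_ix1.ndf',  -- ideally a separate spindle
  SIZE = 512MB,
  FILEGROWTH = 128MB )
TO FILEGROUP IndexFG;
-- Create the index on the new filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON IndexFG;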
--Prashanth

Similar Messages

  • I had no idea Aurora used the same actual App Data file as its own source; I thought it was just emulating Firefox. I uninstalled it and lost everything...

    I had installed Aurora 21.0a2 some time last year and had only used it once. I recently uninstalled it without knowing that it uses the exact same App Data files as Firefox does, so I removed all data. Now I have lost hundreds of bookmarks... I do back up my system often, but not my browser data; I had only once made a copy of my Firefox data, in August 2013, so I was able to copy and paste that info, which had maybe 35-40% of my bookmarks...
    I tried Windows Recovery a couple of times; it said it wasn't able to restore the data successfully each time. I also tried Recuva, but I'm not really sure if it found the file I was searching for, so I would kindly ask any of the forum mods or users who would suggest this method: what "file" exactly should I search for?
    I had been searching for ".default" and ".defaults" files, and any other data relating to Firefox but with no success.
    Do I really have any options of recovery or ....?

    You should never choose to remove "personal data" when you uninstall your current Firefox version, because this removes all profile folders and you lose personal data like bookmarks and passwords, including data in profiles created by other Firefox versions.
    You will have to try to recover a recent JSON backup (bookmarks-####-##-##_xxxx.json) in the bookmarkbackups folder of that removed Firefox profile folder with Recuva or another undelete utility.
    *http://kb.mozillazine.org/Profile_folder_-_Firefox
    *https://support.mozilla.org/kb/Recovering+important+data+from+an+old+profile
    *http://kb.mozillazine.org/Transferring_data_to_a_new_profile_-_Firefox
    *http://kb.mozillazine.org/Backing_up_and_restoring_bookmarks_-_Firefox

  • File Group 'xilinx_softwaredriver (Software Driver)': Requires either a MLD or a MDD file

    Hi all,
    Since upgrading to 2015.1 I keep receiving an odd warning message when packaging IP. The File Groups window keeps showing the warning given below:
    [IP_Flow 19-4629] File Group 'xilinx_softwaredriver (Software Driver)': Requires either a MLD or a MDD file.  Neither have been added to file group: "xilinx_softwaredriver"
    According to what I can see in the file lists the MDD is there so I don't really understand the warning. I have tried removing and re-adding the MDD file but this does not seem to change anything. This only seems to be a problem on 2015.1 and didn't show up at all on 2014.2.
    Can anyone shed any light on this matter?
    Regards
    Simon

    Hi Simon,
    Actually, what was marked as a "solution" was definitely not the solution.
    The warning is not harmless. In my case the IP driver files were not copied into my SDK project.
    You can make the IP integrator accept the files by changing the values in the "Type" column in the File Groups panel of the PackageIP tab.
    The <ip_name>.mdd file's Type field shall be set to mdd driver_mdd
    The <ip_name>.tcl file's Type field shall be set to tclSource driver_src
    The Makefile file's Type field shall be set to unknown driver_src
    You can make a further check by creating a new AXI IP from Vivado 2015.1 and comparing the settings.
    Hope it'll help.
    Cheers
    Alessandro

  • Read_Only and Read_Write File Groups in Same Database

    We have fairly static reference data in a database we have set to Read_Only for a number of reasons and it has worked well in that state. I am being asked to change that now so we can load data daily into this database. I am thinking about creating a read_write
    filegroup in the database to do this and still allow us to have the original tables in the database on a read_only filegroup. I am wondering what issues may occur with this approach, concerned about taking this highly read database to a read_write state and
    causing issues. It appears the primary data file can't be set to read-only, so I would need to create the read_only filegroup and move all the existing data/tables to that filegroup. Anyone have comments/experience along these lines?

    When a filegroup is marked as read-only, SQL Server will not bother with page or row locks on the tables or indexes contained in it. This reduces SQL Server overhead and improves performance. Since the data is not changing, index fragmentation does not
    occur, so maintenance such as rebuilding or reorganizing is unnecessary, which also saves time and effort. And in SQL Server 2008 and later, you can mark a filegroup as read-only without having exclusive access to the entire database.
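    As a sketch of that approach (all object names and paths here are hypothetical, not from the original question), the static tables can be rebuilt onto a new filegroup that is then frozen, while daily loads continue against read/write filegroups:
    -- Hypothetical names and paths; adjust for your environment
    ALTER DATABASE RefDB ADD FILEGROUP StaticFG;
    ALTER DATABASE RefDB ADD FILE
    ( NAME = RefDB_static1,
      FILENAME = 'E:\SQLData\RefDB_static1.ndf',
      SIZE = 1024MB )
    TO FILEGROUP StaticFG;
    -- Move an existing table by rebuilding its clustered index onto the new filegroup
    CREATE UNIQUE CLUSTERED INDEX PK_RefTable
    ON dbo.RefTable (RefID)
    WITH (DROP_EXISTING = ON)
    ON StaticFG;
    -- Freeze the static data; daily loads go to the remaining read/write filegroups
    ALTER DATABASE RefDB MODIFY FILEGROUP StaticFG READ_ONLY;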
    Raju Rasagounder Sr MSSQL DBA

  • How to write data into a file at the same frequency as data acquisition using myRIO

    Hi everyone,
    I have a question regarding data acquisition fequency and data recording frequency of myRIO. Hope you guys can help me out.
    Basically, I want to acquire the voltage input at analog input 0 at a frequency of 1 kHz and then write the data into a file (TDMS format).
    However, I always found there were only 55 or 56 data points recorded every second in the data file (see the Excel sheet).
    To confirm my data acquisition was performed at the correct frequency, I added a small function in the main loop to indicate the time spent between two acquisition events.
    To my surprise, the period of data acquisition is correct (1 ms, or 1 kHz), but there are only 55 or 56 data points per second recorded in the data file.
    How can I record every data point acquired by the analog input?
    Thank you!
    P.S. I am very new to myRIO. How can I manually set the system time for myRIO? The default time of my myRIO is wrong. 
    Best,
    Tengyang
    Attachments:
    test result.xlsx ‏16 KB
    Main with timed loop.vi ‏122 KB

    Have a look at the Jakarta POI project, they have a Java API for creating Word documents.
    http://jakarta.apache.org/poi/hwpf/index.html

  • Target file of the same name as source file

    Hi,
    I am working on a file-to-file scenario. I need the target file name to be the same as the source file name.
    I checked the adapter-specific message attributes in the sender as well as the receiver file adapter, and also checked the file name.
    Now I am using variable substitution for the file name in the receiver file adapter.
    I defined the variable value as:
    message:FileName
    But no file is getting created in the destination folder.
    Can anyone help me out with this?
    Thanks and regards,
    Pravesh Puria.

    Pravesh,
    If you have selected File Adapter --> Adapter-Specific Attributes --> File Name in the sender and receiver file adapters, then do not go for variable substitution.
    Just put a dummy file name in the file name field of the receiver file adapter, and XI will automatically use the same file name as the source when creating the file.
    Regards,
    Bhavesh

  • ORA-01242: data file suffered media failure - ORA-01208: data file is an old version

    Hi,
    I am running Oracle 9.2.0.5 and it fails almost on a daily basis.
    I get the following errors:
    *** 2008-04-09 09:31:46.334
    *** SESSION ID:(4.1) 2008-04-09 09:31:46.318
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 11 failed verification check
    ORA-01110: data file 11: 'E:\ORACLE\ORADATA\MYDB\MYDB.ORA'
    ORA-01208: data file is an old version - not accessing current version
    error 1242 detected in background process
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 11 failed verification check
    ORA-01110: data file 11: 'E:\ORACLE\ORADATA\MYDB\MYDB.ORA'
    ORA-01208: data file is an old version - not accessing current version
    I run the following SQL and the DB is recovered, but I can't keep doing this every time it crashes:
    sqlplus> startup nomount;
    sqlplus> alter database mount;
    sqlplus> alter database recover;
    sqlplus> alter database open;
    Any ideas how I can resolve this problem?
    Regards
    Spiros

    Refer to this metalink note:
    ORA-1242, ORA-1122, ORA-1110 & ORA-1208 Errors Occurring Intermittently
    Doc ID: Note:471280.1
    Werner

  • TS3276 why is jpeg file lost and replaced with winmail.dat file in mail

    I sent a jpeg file to my hotmail account, which is synchronised with my Mac Mail, but the file was lost/converted to a winmail.dat attachment, which I couldn't import to iPhoto. How do I import a jpeg image, emailed to me from an external source, into iPhoto?

    I think you need to configure Outlook in Windows to use HTML messages.
    Here is Microsoft's incomprehensible support document on the issue.
    Here is one from about.com that makes sense.

  • Does iCould Control Panel V2 still create new Outlook data files rather than using the default data files already in use?

    This has always made iCloud for the PC an impossibility for me, and seems to be just a plain stupid implementation.  Moving all the calendar and tasks to a proprietary iCloud data file within Outlook breaks so many other things that I need to use within Outlook.  Does version 2 of iCloud Control Panel still work this way?

    Yes, just like Microsoft Exchange, Zimbra, Gmail etc.

  • Create XML format file for bulk insert with a data file without a delimiter

    Hello
    I have a data file with no delimiter, like below
    0080970393102312072981103378000004329392643958
    0080970393102312072981103378000004329392643958
    I just know that the first 5 numbers in a line are, for example, the "ID of bank",
    and the 6th and 7th numbers in a line are, for example, the "ID of employee".
    Could you help me with how I can create an XML format file?
    Thanks a lot

    This is a fixed file format. We need to know the length of each field before creating the format file. You said the first 5 characters are the Bank ID and the 6th to 7th the Employee ID ... then the XML should look like:
    <?xml version="1.0"?>
    <BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <RECORD>
      <FIELD ID="1"xsi:type="CharFixed"LENGTH="5"/>
      <FIELD ID="2"xsi:type="CharFixed"LENGTH="2"/>
      <FIELD ID="3" xsi:type="CharFixed" LENGTH="8"/>
      <FIELD ID="4" xsi:type="CharFixed" LENGTH="14"/>
      <FIELD ID="5" xsi:type="CharFixed" LENGTH="14"/>
      <FIELD ID="6" xsi:type="CharFixed" LENGTH="1"/>
    </RECORD>
    <ROW>
      <COLUMNSOURCE="1"NAME="c1"xsi:type="SQLNCHAR"/>
      <COLUMNSOURCE="2"NAME="c2"xsi:type="SQLNCHAR"/>
      <COLUMN SOURCE="3" NAME="c3" xsi:type="SQLCHAR"/>
      <COLUMN SOURCE="4" NAME="c4" xsi:type="SQLINT"
    />
      <COLUMN SOURCE="5" NAME="c5" xsi:type="SQLINT"
    />
    </ROW>
    </BCPFORMAT>
    Note: similarly, you need to specify the other field lengths as well.
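    Once the format file describes the full record layout, a minimal usage sketch would be the following (the table name and paths are hypothetical, not from the original post):
    -- Hypothetical table and paths; assumes the XML above is saved as fixed.xml
    BULK INSERT dbo.BankData
    FROM 'C:\bcp\datafile.txt'
    WITH ( FORMATFILE = 'C:\bcp\fixed.xml' );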
    http://stackoverflow.com/questions/10708985/bulk-insert-from-fixed-format-text-file-ignores-rowterminator
    Regards, RSingh

  • Same driver version, different file sizes? What the dif?

    <div class="DownloadBriefBox"><div class="DownloadNameBox">Creative Sound Blaster Audigy series beta driver 2.8.00 [/i]
    <span class="DownloadSizeBox">Filesize : <span class="DownloadSizeText">42.35 MB <img height="6" src="http://support.creative.com/images/icon_download.gif[/img] width="6">[url="http://support.creative.com/downloads/download.aspx?nDownloadId=063">Download [/url]<div class="DownloadDate">Release date : 25 Aug 09 <div class="DownloadShortDesc">This?download is a?beta driver providing Microsoft? Windows?7, Windows Vista? and Windows XP support for Creative Sound Blaster? Audigy? series of audio devices. For more details, read the rest of this web release note. <div class="DownloadShortDesc">
    <div class="DownloadShortDesc">or<div class="DownloadShortDesc">
    <div class="DownloadShortDesc"><div class="DownloadBriefBox"><div class="DownloadNameBox">[url="http://support.creative.com/Downloads/welcome.aspx#">Creative Sound Blaster Audigy series driver 2.8.00 [/url]
    <span class="DownloadSizeBox">Filesize : <span class="DownloadSizeText">37.62 MB <img height="6" src="http://support.creative.com/images/icon_download.gif[/img] width="6">[url="http://support.creative.com/downloads/download.aspx?nDownloadId=0960">Download [/url]<div class="DownloadDate">Release date : 3 Jul 09 <div class="DownloadShortDesc">This?download is a?driver providing Microsoft? Windows?7, Windows Vista? and Windows XP support for Creative Sound Blaster? Audigy? series of audio devices. For more details, read the rest of this web release note. <div class="DownloadShortDesc">
    <div class="DownloadShortDesc">I think someone's already answered this question, but I can't remember, soz.

    There is a missing build number in the version info.
    2.8.00.6 - 3 Jul 09
    - Adds a check to prevent third party SPDIF endpoints from being displayed in the Encoder tab.
    - Properly saves and restores the Enable Dolby Digital Live checkbox state when you close and reopen the Audio Console application.
    - Bug: Does not tag the Audigy SPDIF Output, so no devices are available in the Encoder tab in Audio Console.
    - Bug: Internal error in localized versions of Audio Console when you enable the Encoder.
    2.8.00.8 - 25 Aug 09
    - Properly tags the Audigy SPDIF Output as an Encoder-capable device, fixing the bug in the previous version.
    - Bug: Does not add the 64-bit .UDA catalog file to the Windows catroot repository, resulting in Enhancements errors and issues with Windows Media Player.
    - Bug: Error in localized Audio Console still not fixed.

  • Total of line items in line item file not the same as in statement file

    Hi,
    Can anyone give an idea about this error?
    I analysed the multicash file: the line items total ties out with the statement total, and the formats of the file also seem to be fine.
    Not sure why the error persists?
    Raj/

    You should give the exact message, with message number.
    If it is FB 777, "DTAUS: Number of line items not equal to control total; see long text", you should see the long text. I think it contains sufficient information:
    Text
    DTAUS: Number of line items not equal to control total; see long text
    Diagnosis
    data records of record type 'C' were handed over in the file you imported. There must be  data records according to the control total from the fourth field of record type 'E'.
    Processing was therefore terminated.
    The DTAUS file was not imported into the bank data clipboard. No postings were generated.
    Procedure
    Inform your credit institution about the error which has occurred and let them give you a correct DTAUS file.
    No actions are necessary in the SAP System.

  • Adding Data file to existing primary file group with 1 data file

    Currently our databases are configured to only have 1 data file and 1 log file.  I am looking at adding a 2nd data file to the primary group, which will be on a separate lun.
    Will we benefit from adding the 2nd data file (same size as the 1st data file and the same autogrowth rate), or should we create a new database with 2 data files (equal size and autogrowth rate) and import the data from the database with the single data file?
    Thanks.
    DJ

    Having another data file pointing to a different physical volume will give you better performance. Additionally, you should pre-size it (same as the first data file) with the same growth settings (preferably in MB instead of percentages).
    It is perfectly OK to add another data file to the PRIMARY filegroup as well, and SQL Server will automatically balance the data across the files over time (via its proportional fill algorithm).
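    For example, a minimal sketch of adding the second file (the database name, path, and sizes below are hypothetical):
    -- Pre-sized to match the first data file, with the same fixed MB growth on both
    ALTER DATABASE MyDB ADD FILE
    ( NAME = MyDB_data2,
      FILENAME = 'G:\SQLData\MyDB_data2.ndf',
      SIZE = 10240MB,
      FILEGROWTH = 512MB )
    TO FILEGROUP [PRIMARY];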
    HTH
    Good Luck! Please Mark This As Answer if it solved your issue. Please Vote This As Helpful if it helps to solve your issue

  • Dense Restructure 1070020 Out of disk space. Can't create new data file

    During a Dense Restructure we receive: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    Essbase 6.5.3 32-bit
    Windows 2003 32bit w/16GB RAM
    Database is on E: drive with 660GB space total, database is ~220GB.
    All cubes are unlimited
    Tried restoring from backup; same problem.
    Over years and years the database has never been recalculated, never exported and imported, never verified. Only new data is loaded and dense restructured.
    Towards the end of a dense restructure (about 89 of roughly 101 2 GB .pag files), we get the error: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    There are still several hundred GB of free space available, and we can write to this free space outside of the Essbase application within Windows.
    The server's file system is consistent and defragmented, and we can prove use of the additional space. The hard drive controller and system do not report any "hardware issues".
    Essbase.cfg file
    ; The following entry specifies the full path to JVM.DLL.
    JvmModuleLocation C:\Hyperion\Essbase\java\jre13\bin\hotspot\jvm.dll
    ;This statement loads the essldap.dll as a valid authentication module
    ;AuthenticationModule LDAP essldap.dll x
    DATAERRORLIMIT 30000
    ;These settings are here to deal with error 1040004
    NETRETRYCOUNT 2000
    NETDELAY 1600
    App log
    [Sat Oct 17 13:59:32 2009]Local/removedfrompost/removedfrompost/admin/Info(1007044)
    Restructuring Database [removedfrompost]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost/removedfrompost/admin/Error(1070020)
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008108)
    Essbase Internal Logic Error [7333]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008106)
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    log00002.xcp
    Assertion Failure - id=7333 condition='((!( dbp )->bFatalError))'
    - line 11260 in file datbuffm.c
    - arguments [0] [0] [0] [0]
    Additional log info from database start to restructure failure
    Starting Essbase Server - Application [removedfrompost]
    Loaded and initialized JVM module
    Reading Application Definition For [removedfrompost]
    Reading Database Definition For [removedfrompost]
    Reading Database Definition For [TempOO]
    Reading Database Definition For [WTD]
    Reading Database Mapping For [removedfrompost]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    Waiting for Login Requests
    Received Command [Load Database]
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Reading Outline For Database [removedfrompost]
    Declared Dimension Sizes = [289 125 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 119 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34391]
    Maximum Declared Blocks is [1960864521] with data block size of [72250]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17138]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [removedfrompost] can hold a maximum of [76] blocks.
    The Dyn.Calc.Cache for database [removedfrompost], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [removedfrompost]...
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\removedfrompost\removedfrompost.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Reading Outline For Database [TempOO]
    Declared Dimension Sizes = [277 16 2 1023 139047 ]
    Actual Dimension Sizes = [277 16 1 1022 138887 ]
    The number of Dynamic Calc Non-Store Members = [68 3 0 0 0 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [4432]
    Maximum Declared Blocks is [142245081] with data block size of [8864]
    Maximum Actual Possible Blocks is [141942514] with data block size of [2717]
    Essbase needs to retrieve [1] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [TempOO] can hold a maximum of [591] blocks.
    The Dyn.Calc.Cache for database [TempOO], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [TempOO]...
    Data cache size ==> [3145728] bytes, [144] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\TempOO\TempOO.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Reading Outline For Database [WTD]
    Declared Dimension Sizes = [2 105 2 11649 158778 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 6 ]
    Actual Dimension Sizes = [1 99 1 1293 127722 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 5 ]
    The number of Dynamic Calc Non-Store Members = [0 29 0 257 57 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [99]
    Maximum Declared Blocks is [1849604922] with data block size of [420]
    Maximum Actual Possible Blocks is [165144546] with data block size of [70]
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [WTD] can hold a maximum of [26479] blocks.
    The Dyn.Calc.Cache for database [WTD], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [WTD]...
    Data cache size ==> [3145728] bytes, [5617] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\WTD\WTD.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Set Database State]
    Writing Parameters For Database [removedfrompost]
    Writing Parameters For Database [removedfrompost]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [TempOO]
    Writing Parameters For Database [TempOO]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [WTD]
    Writing Parameters For Database [WTD]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [SetApplicationState]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    User [admin] set active on database [removedfrompost]
    Clear Active on User [admin] Instance [1]
    User [admin] set active on database [removedfrompost]
    Received Command [Restructure] from user [admin]
    Reading Parameters For Database [Drxxxxxx]
    Reading Outline For Database [Drxxxxxx]
    Reading Outline Transaction For Database [Drxxxxxx]
    Declared Dimension Sizes = [289 126 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 120 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34680]
    Maximum Declared Blocks is [1960864521] with data block size of [72828]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17347]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [Drxxxxxx] can hold a maximum of [75] blocks.
    The Dyn.Calc.Cache for database [Drxxxxxx], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Reading Parameters For Database [Drxxxxxx]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Performing transaction recovery for database [Drxxxxxx] following an abnormal termination of the server.
    Restructuring Database [removedfrompost]
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    Essbase Internal Logic Error [7333]
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    Exception error log completed -- please contact technical support and provide them with this file
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

    To avoid all these problems, as a best practice we don't allow dense restructures on cubes larger than 30 GB.
    As an alternative, we export the level-0 data, clear the database, and load the new data. After that, we aggregate the cube to store the data at all the consolidation levels.

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 file groups; an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    ( DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0) )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data but it can be dropped. Although the system views are reporting the data in File Group 2, it still physically resides in File Group 3 and isn’t moved until the index is rebuilt. The RANGE RIGHT function means
    the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3 where it already resided, no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions)
    on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    ( DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0) )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; when partitions are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned,
    and a full index rebuild might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT and RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution; that way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support
