Cluster Size issues

Hi,
I am running SQL Server 2008 R2 on Windows Server 2008 R2. My databases reside on a RAID 5 configuration.
Recently I had to replace one of the HDDs in the RAID with a different HDD. The result is that I now have two HDDs with a physical and logical sector size of 512 bytes, and one with a 3072-byte physical and a 512-byte logical sector size.
Since the rebuild, the databases and SQL Server had been fine: I could read from and write to the databases, and backups had no issues either. Today, however (two months after the RAID rebuild), I could no longer access the databases, and backups did not work either. I kept getting this error when trying to detach or back up a database:
TITLE: Microsoft SQL Server Management Studio
Alter failed for Database 'dbname'.  (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Cannot use file 'D:\dbname.MDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
Cannot use file 'D:\dblogname_1.LDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
Database 'dbname' cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server errorlog for details.
ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5178)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500&EvtSrc=MSSQLServer&EvtID=5178&LinkId=20476
BUTTONS:
OK
My temporary solution was to move the DB to my C: drive and attach it from there. This is not ideal, as I lose the redundancy of the RAID.
Can anybody tell me if this is because of the hard drive with the larger sector size? (This is the only logical explanation I have.) And why would it only happen now?
I am sorry if this is the wrong forum for this question.

Apparently it was not until recently that the database spilled over onto that new disk. No, I don't know too much about RAIDs.
But it seems obvious that you need to make sure that all disks in the RAID have the same sector size.
Erland Sommarskog, SQL Server MVP, [email protected]

Similar Messages

  • NTFS Cluster size is set at 4k instead of recommended 64k.

    We have found that our partition is not aligned and need to get some
    feedback on a few things.
    Here are our numbers:
    Starting partition offset = 32,256
    Stripe Size = 128k (131,072)
    Cluster size = 4k (4096)
    We are experiencing high "Avg Queue length" and high "avg disk usage"
    in windows performance monitor...
    My question is this: How important is the NTFS cluster size at 4k. I know
    the recommendation is 64k but how bad would a 4k cluster size ALSO
    affect performance given we know that our partition is mis-aligned ?
    Thanks..
    Ld

    > My question is this: How important is the NTFS cluster size at 4k. I know
    > the recommendation is 64k but how bad would a 4k cluster size
    It's very important from a performance perspective, especially with a huge database and a large number of disks. 64K is the rule-of-thumb allocation unit size for SQL Server, and can be considered the minimum unit size in this case. If it's cut down to 4K, which is 1/16 of that, the disk arms have to make 16 trips to grab the same amount of data.
    > Starting partition offset = 32,256
    >ALSO
    > affect performance given we know that our partition is mis-aligned ?
    Starting offset has a similar impact as the allocation unit size does, but as far as I know it usually only matters on old-fashioned storage devices; I may be wrong here. The last issue related to it that I know of was on an HP EVA 5000 four years ago; since then, the starting offset has been part of the initial optimization performed when the device is installed by the storage vendor, with no manual change needed from the customer. But to be on the safe side, please do check with your storage vendor to make sure.
    In either case, allocation unit size or starting offset, it is very difficult to change once it is set and has gone live in a production environment, because of the downtime or transition storage device it requires.
    Regards,
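
    To put numbers on the two points above, here is a small Java sketch using the figures from the post (the 64 KB extent is eight 8 KB SQL Server pages):

        public class AlignmentCheck{
            public static void main(String[] args){
                long offsetBytes = 32_256;    //starting partition offset from the post
                long stripeBytes = 131_072;   //128K stripe size
                long extentBytes = 65_536;    //SQL Server extent: 8 pages x 8 KB

                //how many allocation units one extent read touches
                System.out.println("4K clusters per extent:  " + (extentBytes / 4_096));    //16
                System.out.println("64K clusters per extent: " + (extentBytes / 65_536));   //1

                //a partition is aligned when its offset is a multiple of the stripe size
                System.out.println("aligned: " + (offsetBytes % stripeBytes == 0));         //false
            }
        }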

  • Large Cluster Size

    I believe the cluster size is limited to 436 members by default. We would like to increase this to about 550 temporarily. Can someone remind me of the setting to do this? Any critical drawbacks (I know it's not ideal)? Thanks in advance... Andrew.

    There were some changes made in a 3.7.1.x release to support large clusters. Previously there were some issues with the preferred MTU, which was 64K, causing OOM in the Publisher/Receiver.

  • Paper Size issues with CreatePDF Desktop Printer

    Are there any known paper size issues with PDFs created using Acrobat.com's CreatePDF Desktop Printer?
    I've performed limited testing with a trial subscription, in preparation for a rollout to several clients.
    Standard paper size in this country is A4, not Letter.  The desktop printer was created manually on a Windows XP system following the instructions in document cpsid_86984.  MS Word was then used to print a Word document to the virtual printer.  Paper Size in Word's Page Setup was correctly set to A4.  However the resultant PDF file was Letter size, causing the top of each page to be truncated.
    I then looked at the Properties of the printer, and found that it was using an "HP Color LaserJet PS" driver (self-chosen by the printer install procedure).  Its Paper Size was also set to A4.  Word does override some printer driver settings, but in this case both the application and the printer were set to A4, so there should have been no issue.
    On a hunch, I then changed the CreatePDF printer driver to a Xerox Phaser, as suggested in the above Adobe document for other versions of Windows.  (Couldn't find the recommended "Xerox Phaser 6120 PS", so chose the 1235 PS model instead.)  After confirming that it too was set for A4, I repeated the test using the same Word document.  This time the result was fine.
    While I seem to have solved the issue on this occasion, I have not been able to do sufficient testing with a 5-PDF trial, and wish to avoid similar problems for the future live users, all of whom use Word and A4 paper.  Any information or recommendations would be appreciated.  Also, is there any information available on the service's sensitivity to the different printer drivers used with the CreatePDF printer definition?  And can we assume that the alternative "Upload and Convert" procedure correctly selects the output paper size from the settings of an uploaded document?
    PS - The newly-revised doc cpsid_86984 still seems to need further revising.  Vista and Windows 7 instructions have now been split.  I tried the new Vista instructions on a Vista SP2 PC and found that step 6 appears to be out of place - there was no provision to enter Adobe ID and password at this stage.  It appears that, as with XP and Win7, one must configure the printer after it is installed (and not just if changing the ID or password, as stated in the document).

    Thank you, Rebecca.
    The plot thickens a little, given that it was the same unaltered Word document that first created a letter-size PDF, but correctly created an A4-size PDF after the driver was changed from the HP Color Laser PS to a Xerox Phaser.  I thought that the answer may lie in your comment that "it'll get complicated if there is a particular driver selected in the process of manually installing the PDF desktop printer".  But that HP driver was not (consciously) selected - it became part of the printer definition when the manual install instructions were followed.
    However I haven't yet had a chance to try a different XP system, and given that you haven't been able to reproduce the issue (thank you for trying), I will assume for the time being that it might have been a spurious problem that won't recur.  I'll take your point about using the installer, though when the opportunity arises I might try to satisfy my cursed curiosity by experimenting further with the manual install.  If I come up with anything of interest, I'll post again.

  • How do I retrieve binary cluster data from a file without the presence of the cluster size in the data?

    Hey guys, I'm trying to read a binary data file created by a C++ program that didn't append sizes to the structures that were used when writing out the data.  I know the format of the structures and have created a cluster typedef in LabView.  However, the Unflatten From String function expects to see additional bytes of data identifying the size of the cluster in the file.  This just plain bites!  I need to retrieve this data and have it formatted correctly without doing it manually for each and every single element.  Please help!
    Message Edited by AndyP123 on 06-04-2008 11:42 AM

    Small update.  I have fixed-size arrays in the clusters of data in the file, and I have been using arrays in my typedefs in LabView, defining x number of indexes in the arrays and setting them as the default value under Data Operations.  LabView may maintain the default values, but it still treats an array as a data type of unknown size.  This is what causes LabView to expect the cluster size to be appended to the file contents during an unflatten.  I can circumvent this in the simplest of cases by using clusters of the same type of data in LabView to represent a fixed-size array in the file.  However, I can't go around using clusters of data to represent fixed-size arrays BECAUSE I have several multi-dimensional arrays of data in the file.  To represent one of those as a cluster, I would have to add a single value for every element and make sure they are lined up sequentially according to every dimension of the array.  That gets mighty hairy, mighty fast.
    EDIT:  Didn't see that other reply before I went and slapped this in here.  I'll try that trick and let you know how it works.......
    Message Edited by AndyP123 on 06-04-2008 12:11 PM
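
    For comparison, here is the same idea outside LabView: when the file carries no size information, you read each fixed-size field at its known offset, with the layout taken from the C++ headers. A minimal Java sketch under an assumed, hypothetical layout (an int32 id followed by a fixed 4x3 array of float32; the real layout would come from the original structures):

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class FixedRecordReader{
            //hypothetical record: int32 id + float32[4][3] = 52 bytes on disk, no size prefix
            static final int RECORD_BYTES = 4 + 4 * 3 * 4;

            public static void main(String[] args) throws IOException{
                ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(Paths.get("data.bin")));
                buf.order(ByteOrder.LITTLE_ENDIAN);   //must match the writing C++ program

                while(buf.remaining() >= RECORD_BYTES){
                    int id = buf.getInt();
                    float[][] grid = new float[4][3];
                    for(int r = 0; r < 4; r++)
                        for(int c = 0; c < 3; c++)
                            grid[r][c] = buf.getFloat();   //fixed-size array, read in declaration order
                    System.out.println("record " + id);
                }
            }
        }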

  • Maximum cluster size?

    Hi all,
    I'm using LV 7.1 and am trying to access a function in a DLL.  The function takes a pointer to a data structure that is large and complex.  I have calculated that the structure is just under 15kbytes.
    I have built the structure as a cluster and then attempted to pass the cluster into the call library function node with this parameter set as adapt to type.  I get memory error messages and when analysing the size of the cluster I am producing it appears to be much smaller than required.
    I have also tried creating an array and passing that, but I think that won't work, as it needs to be a fixed-size array, which according to what I've read can only be achieved by changing it to a cluster, and a cluster is limited to a size of 256.
    Does anybody have a suggestion of how to overcome this?
    If any more detail is required then I'm happy to supply. 
    Dave.

    John.P wrote:
    Hi Dave,
    You have already received some good advice from the community but I wanted to offer my opinion.
    I am unsure as to why the cluster size will not exceed 45; my only suggestion is to try using a Type Cast node, as this is the suggested method for converting from an array to a cluster with greater than 256 elements.
    If this still does not work, then in this case I would recommend that you use a wrapper DLL. It is more work, but given the complexity of the cluster you are currently trying to create, I would suggest this is a far better option.
    Have a look at this KB article about wrapper DLLs.
    Hope this helps,
    John P
    John, I am having a hard time converting an array of greater than 256 elements to a cluster.  I attempted to use the Type Cast node you suggested and didn't have any luck.  Please see the attached files... I'm sure I'm doing something wrong.  The .txt file has a list of 320 elements.  I want to run the VI so that in the end I have a cluster containing an equal number of integer indicators/elements inside.  But more importantly, I don't want to have to build a cluster of 320 elements by hand.  I'd like to just change the number of elements in the .txt file and have the cluster automatically be populated with the correct number of elements and their values.  No more, no less.  One of the good things about Array To Cluster was that you could tell the converter how many elements to expect and it would automatically populate the cluster with that number of elements (up to 256 elements only).  Can the Type Cast node do the same thing?  Do you have any advice?  I posted this question with more detail about my application at the link below... no luck so far.
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=409766&view=by_date_ascending&page=1
    Message Edited by PhilipJoeP on 05-20-2009 06:11 PM
    Attachments:
    cluster_builder.vi 9 KB
    config.txt 1 KB

  • Array to Cluster, Automate cluster size?

    I often use the Array To Cluster node to quickly change an array of data into a cluster of data that I can then connect to a Waveform Chart.  Sometimes the number of plots can be different, which results in extra plots (full of zeros) on the graph, or missing plots if the array is larger than the current cluster size.  I know I can right-click on the node and set the cluster size (up to 256) manually.  I could also use a case structure with as many Array To Cluster nodes as I need, set them individually, and wire an Array Size to the case structure selector, but that's kind of a PITA.
    My question is whether or not anyone knows a way to control the cluster size value programmatically.  It seems that if I can right-click it and do it manually, there must be some way to automate it, but I sure can't figure it out.  It would be nice if you could simply wire your desired value into an optional input on the node itself.  Any ideas will be much appreciated.
    Using LabVIEW: 7.1.1, 8.5.1 & 2013

    I'm under the impression it's impossible.  See this idea for related discussion.
    Tim Elsey
    LabVIEW 2010, 2012
    Certified LabVIEW Architect

  • Smartcardio ResponseAPDU buffer size issue?

    Greetings All,
    I’ve been using the javax.smartcardio API to interface with smart cards for around a year now, but I’ve recently come across an issue that may be beyond me. My issue is that whenever I’m trying to extract a large data object from a smart card, I get a “javax.smartcardio.CardException: Could not obtain response” error.
    The data object I’m trying to extract from the card is around 12 KB. I have noticed that if I send a GET RESPONSE APDU after this error occurs, I get the last 5 KB of the object, but the first 7 KB are gone. I do know that the GET RESPONSE dialogue is supposed to be handled by Java in the background, where the responses are concatenated before being returned as a single ResponseAPDU.
    At the same time, I am able to extract this data object from the card whenever I use other APDU tools or APIs, where I have oversight of the GET RESPONSE APDU interactions.
    Is it possible that the ResponseAPDU runs into buffer size issues? Is there a known workaround for this? Or am I doing something wrong?
    Any help would be greatly appreciated! Here is some code that will demonstrate this behavior:
    // test program
    import java.util.*;
    import javax.smartcardio.*;
    public class GetDataTest{
        public GetDataTest(){}
        public static void main(String[] args){
            Card card = null;
            try{
                byte[] aid = {(byte)0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00};
                byte[] biometricDataID1 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x08};
                byte[] biometricDataID2 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x03};
                //get the first terminal
                TerminalFactory factory = TerminalFactory.getDefault();
                List<CardTerminal> terminals = factory.terminals().list();
                CardTerminal terminal = terminals.get(0);
                //establish a connection with the card
                card = terminal.connect("*");
                CardChannel channel = card.getBasicChannel();
                //select the card app (select() and verify() are my own helpers, omitted here)
                select(channel, aid);
                //verify pin
                verify(channel);
                /*
                 * trouble occurs here
                 * error occurs only when extracting a large data object (~12KB) from the card.
                 * works fine when used on other data objects, e.g. works with biometricDataID2
                 * (data object ~1KB in size) and not biometricDataID1 (data object ~12KB in size)
                 */
                //send a "GetData" command
                System.out.println("GETDATA Command");
                ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xCB, 0x3F, 0xFF, biometricDataID1));
                System.out.println(response);
            }catch(Exception e){
                System.out.println(e);
            }finally{
                if(card != null) card.disconnect(false);
            }
        }
    }
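
    If the built-in chaining is what fails, one possible workaround (a sketch, not tested against this particular card or reader) is to drive the GET RESPONSE dialogue yourself and concatenate the pieces: after the initial command, keep sending GET RESPONSE (00 C0 00 00 Le) while the card answers SW1=0x61, where SW2 is the number of bytes still waiting.

        import java.io.ByteArrayOutputStream;
        import javax.smartcardio.*;

        public class GetResponseLoop{
            //transmit a command, then keep issuing GET RESPONSE while the card
            //reports 61 xx (more data available), concatenating the data fields
            static byte[] transmitAll(CardChannel channel, CommandAPDU cmd) throws CardException{
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ResponseAPDU resp = channel.transmit(cmd);
                out.write(resp.getData(), 0, resp.getData().length);
                while(resp.getSW1() == 0x61){
                    int le = (resp.getSW2() == 0x00) ? 256 : resp.getSW2();   //treat 61 00 as 256 bytes
                    resp = channel.transmit(new CommandAPDU(0x00, 0xC0, 0x00, 0x00, le));
                    out.write(resp.getData(), 0, resp.getData().length);
                }
                return out.toByteArray();   //caller should still check the final SW for 90 00
            }
        }

    Note that some providers intercept GET RESPONSE and handle it internally, so whether this helps depends on the PC/SC stack underneath.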

    Hello Tapatio,
    I was looking for a solution to my problem and found your post; I hope you can answer.
    I am a beginner in card development. I am now using javax.smartcardio: I can select the file I want to use, but the problem is I can't read from it, and I don't know exactly how to use the hex codes.
    I'm working with a CCID Smart Card Reader as the card reader and a PayFlex as the smart card.
    try {
        TerminalFactory factory = TerminalFactory.getDefault();
        List<CardTerminal> terminals = factory.terminals().list();
        System.out.println("Terminals: " + terminals);
        CardTerminal terminal = terminals.get(0);
        if(terminal.isCardPresent())
            System.out.println("card present");
        else
            System.out.println("card absent");
        Card card = terminal.connect("*");
        CardChannel channel = card.getBasicChannel();
        ResponseAPDU resp;
        // this part selects the DF
        byte[] b = new byte[]{(byte)0x11, (byte)0x00};
        CommandAPDU com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        // this part selects the Data File
        b = new byte[]{(byte)0x11, (byte)0x05};
        com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
        System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        /* my read attempt, commented out because it did not work:
        byte[] b1 = new byte[]{(byte)0x11, (byte)0x05};
        com = new CommandAPDU((byte)0x00, (byte)0xB2, (byte)0x00, (byte)0x04, b1, (byte)0x0E);
        */
        // the problem is that I don't know how to build a CommandAPDU to read from the file
        System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        card.disconnect(false);
    } catch (Exception e) {
        System.out.println("error " + e.getMessage());
    }
    read record : 00 A4 ....
    If you know how to do it, I'm waiting for your answer.
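
    For the READ RECORD part specifically: the command is INS 0xB2 with P1 = the record number and P2 encoding the mode (0x04 = read the record given in P1 from the currently selected file), no data field, just an expected length. A minimal sketch reusing the channel and getHexString() from the code above (the record number and length here are guesses; check them against the PayFlex documentation):

        //READ RECORD: CLA=00 INS=B2 P1=record number P2=04 (absolute record in current file) Le=expected length
        CommandAPDU readRecord = new CommandAPDU(0x00, 0xB2, 0x01, 0x04, 0x0E);   //record 1, expect 14 bytes
        ResponseAPDU resp = channel.transmit(readRecord);
        System.out.println("Result: " + getHexString(resp.getBytes()));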

  • Raid-0 Stripe & Cluster Size

    I just ordered two 10K RPM Raptor S-ATA drives from Newegg; they should arrive shortly. I plan to configure my system with them as RAID-0 for increased performance. I just read the "Raid Setup Guide 865/875 LSR/FIS2R Rev 1.01" by Vango, and it seems that my mobo can be configured as RAID-0 with either the Intel ICH5R controller or the Promise controller.
    I will use Promise as my RAID controller, since it seems to be faster. Now I've got another question.
    What about stripe size/cluster size? My research is giving me too many suggestions with very different settings, and I can't decide what to do. Can someone suggest some good settings? The Intel RAID manual suggests a 128 KB stripe for best performance and says nothing about cluster size. Vango posted somewhere that he used 64 KB for the stripe, but gave no info on cluster size.
    I will be using two 36 GB WD Raptors in RAID-0 as my main and only Windows array/disk (I will install Windows plus apps and games to it), then use a PATA drive for backups and movie storage. My computer is used mostly for working with Office, creating web pages, playing EverQuest (big game), and watching video (DivX movies). I use WinXP Pro SP1.
    Can someone suggest some general stripe/cluster size settings that give good performance for this kind of usage? What is the easiest (best) way to change the 4K default cluster size on the array after I get Windows installed on it? Should I even bother changing the cluster size? I have Partition Magic and other software available to do this, but I don't know the best procedure.
    Thanks in Advance

    I've always just used the 4K cluster size that Windows creates if you use NTFS. I honestly don't think this makes a big difference. If you want a different size, use PM to format the drive that way before installing XP. I would recommend against converting from one size to another: I did this once and ended up with all my files labeled in DOS 8.3 format.   (This was NOT good for my 1000+ MP3s.)
    I use a 64K stripe size as a compromise. My research showed that people were getting the "best scores" using a small stripe size, but this seemed to come at the cost of higher CPU usage, and I'm unconvinced those scores relate much to how I actually use my HDDs. They say that if all your files are 128K and bigger, you don't need a smaller stripe size. If you're using the RAID as your XP drive, you'll actually have lots of small files, so I would recommend something smaller than 128K. Maybe try 32K?
    Let us know how it goes.

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format data and log files with 64k cluster size for sql server 2012?
    Does this best practice still apply to SQL Server 2012 and 2014?

    Yes.  The extent size of SQL Server data files (an extent is eight 8 KB pages, i.e. 64 KB) and the maximum log block size have not changed in the new versions, so the guidance remains the same.
    Microsoft SQL Server Storage Engine PM

  • Routing and Remote access can cause cluster network issues?

    After enabling Routing and Remote Access on the servers, we found lots of cluster issues on our server, such as:
    Cluster Service stopped
    Communication was lost and reestablished between cluster nodes
    Unable to access witness resource
    Cluster resource failed
    Can enabling RRAS cause cluster network issues?
    Rahul

    Hi TwoR,
    Please offer more information about your current cluster and RRAS configuration, such as: did you install the RRAS role on any cluster node? Is your cluster in a Hyper-V environment?
    Or, if you want to create an RRAS cluster, you can refer to the following articles:
    Deploy Remote Access in a Cluster
    http://technet.microsoft.com/en-us/library/jj134175.aspx
    How to configure Network Load Balancing (NLB) based cluster of VPN Servers
    http://blogs.technet.com/b/rrasblog/archive/2009/07/02/configuring-network-load-balancing-nlb-cluster-of-vpn-servers.aspx
    I’m glad to be of help to you!

  • Recording File Size issue CS 5.5

    I am using CS 5.5 and a Blackmagic UltraStudio Pro over USB 3.0, fed by a Roland HD video switcher. Everything is set for 720p 60 fps (59.94), and the Blackmagic is using Motion JPEG compression. I am trying to record our sermons live onto a Windows 7 machine with an Nvidia GeForce GTX 570, 16 GB of RAM, and a 3 TB internal RAID array (3 drives). It usually works great, but more often now, when I push the stop button in the capture window, the video is not processed and becomes unusable. Is it a file size issue, or what? I get nervous when my recording goes longer than 50 minutes. Help!

    Jim, thank you for the response. I have been away and busy but am getting caught up now.
    I do have all drives formatted as NTFS. My problem is so sporadic that I cannot get a pattern down. This last Sunday recorded fine, so we will see how long it lasts. Thanks again.

  • Swap size issues-Unable to install DB!!

    Unable to install the DB. Near the end of the install, it fails due to a swap size issue. FYI:
    [root@usr~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3             3.0G  848M  2.0G  31% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@usr~]#
    Please help me...Thanks

    You can increase your swap space. I have also faced the same issue.
    Just try: http://www.thegeekstuff.com/2010/08/how-to-add-swap-space/
    ~J

  • Unable to install DB(swap size issues)

    Unable to install the DB. Near the end of the install, it fails due to a swap size issue. FYI:
    [root@usr~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3             3.0G  848M  2.0G  31% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@usr~]#
    Please help me...Thanks

    I tried with dd if=/dev/hda3 of=/dev/hda5 count=1024 bs=3097152
    Now the output is below...
    [root@user/]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3              35G   33G  345M  99% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@user/]# cd /tmp/

  • Programmaticly change cluster size

    I'm facing the following problem: I want to display several plots in a chart. To do this I'm using an Array To Cluster function, because the source is an array with N items. The solution I came up with was not correct, because the cluster has a fixed size and the array doesn't, which results in 9 plots (the default cluster size) when only N of them should be displayed in the chart.
    I have included a part of my program, which I hope explains my problem a bit more. This subVI is executed X times and saves N values to an array; after each run the array is converted into a cluster and displayed in a chart.
    So my question is: is it possible to programmatically change the cluster size, or is it possible to make a chart that is updated during runtime with new values?
    Attachments:
    Wavelength AnalyseAndSave.vi 208 KB

    I figured it out by myself (haven't slept too much last night). For those who are interested in my solution, here is what I did: after the array comes out of the for loop, I reshape it so that the dimensions are the same as the number of plots I want to have, then I wire the output to a chart, and it works.
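
    For anyone reading along, here is the same reshape idea expressed in Java (a sketch; in LabVIEW the Reshape Array node does this in one step): make the outer dimension equal to the number of plots, so the chart only ever sees N rows.

        //reshape a flat array of plots*points samples into [plots][points]
        static double[][] reshape(double[] flat, int plots, int points){
            double[][] out = new double[plots][points];
            for(int i = 0; i < plots * points; i++)
                out[i / points][i % points] = flat[i];
            return out;
        }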
