Ndf and mdf files are getting too big

Hi,
My mdf file is 168 GB and my ndf file is 22 GB.
What can I do to reduce their size?
I ran sp_spaceused, and this is what I got :
Please advise...
Regards, Kunjay Shah

Use DBCC SHRINKFILE to release unused space and reduce the physical file size. See http://msdn.microsoft.com/en-us/library/ms189493.aspx.
A second option: delete or archive unneeded data.
To identify the tables using the most space, right-click the database node in SSMS and select Reports > Standard Reports > Disk Usage by Top Tables.
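A minimal T-SQL sketch of both steps, assuming a database named YourDatabase with a data file whose logical name is YourDatabase_Data (both names are hypothetical; check sys.database_files for the real ones):

  USE YourDatabase;
  GO
  -- Compare allocated vs. actually used space per file (size is in 8 KB pages, so /128 gives MB)
  SELECT name,
         size / 128 AS allocated_mb,
         FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
  FROM sys.database_files;
  GO
  -- Shrink the data file to a target size given in MB (here roughly 150 GB)
  DBCC SHRINKFILE (YourDatabase_Data, 153600);
  GO

Keep in mind that shrinking fragments indexes, so plan for index maintenance afterwards.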
Ahsan Kabir, http://www.aktechforum.blogspot.com/

Similar Messages

  • Why are DNG files bigger when converted from TIFF?

    When I shoot in RAW, my normal workflow is to convert all images to DNG and start editing in Lightroom. But as you know, some photos need extra editing in pixel-based software such as Photoshop.
    When I right-click a DNG in Lightroom and choose "Edit In... » Edit in Adobe Photoshop", the photo opens in Photoshop as an 8-bit TIFF file for editing. To demonstrate my issue, let's assume I don't make any changes and just save the file as it is. A new TIFF file is created next to the source DNG with "-Edit" in its name.
    Back in Lightroom, both files are almost the same: one is a DNG, the other an 8-bit TIFF. From this point (assuming I did change something in Photoshop, otherwise what would be the point of opening it there) I should do my further editing in Lightroom on the TIFF, not the DNG. Let's say I'm done with the work and export the final edited TIFF back into a DNG (I like this format, and I like to keep all metadata changes so I can easily revert to the original). I've also exported the original DNG file for comparison.
    And now I realize the following:
    DNG to DNG » ~7 MB (basically the same as the original DNG)
    TIFF to DNG » ~15 MB
    Why such a big difference?
    I didn't do any editing in either the original DNG or the Photoshop-converted TIFF. Is there any technical reason for this that I'm not aware of, or am I doing something wrong?

    I did a little bit of research and:
    1) The embedded previews are always JPEG files (medium or full size, it doesn't matter). But now that I think about it, I don't think you were talking about the previews when you mentioned the TIFF being embedded.
    2) I did a quick EXIF lookup on 3 files exported to DNG: a) Original DNG b) 16-bit TIFF converted to DNG c) 8-bit converted to DNG. Here's the EXIF results:
    a)
    SubfileType                     : Full-resolution Image
    ImageWidth                      : 3736
    ImageHeight                     : 2772
    BitsPerSample                   : 16
    Compression                     : JPEG
    PhotometricInterpretation       : Color Filter Array
    SamplesPerPixel                 : 1
    b)
    SubfileType                     : Full-resolution Image
    ImageWidth                      : 3648
    ImageHeight                     : 2736
    BitsPerSample                   : 16 16 16
    Compression                     : JPEG
    PhotometricInterpretation       : Linear Raw
    SamplesPerPixel                 : 3
    c)
    SubfileType                     : Full-resolution Image
    ImageWidth                      : 3648
    ImageHeight                     : 2736
    BitsPerSample                   : 8 8 8
    Compression                     : JPEG
    PhotometricInterpretation       : Linear Raw
    SamplesPerPixel                 : 3
    This probably means something like you said... The embedded files are different and some take more space than others. The first one is 7,220 KB, the second one (16-bit TIFF) is 44,030 KB and the third one (8-bit TIFF) is 15,284 KB.
    It makes sense, I guess, but I would still love to hear a more technical explanation. It obviously has something to do with how the pixels are stored in the DNG file; the keyword is probably PhotometricInterpretation.

  • AME-rendered files are bigger than in Premiere CS6

    Hi!
    I have a big project: 6 hours, prepared for rendering in 4 sequences (each sequence is about 1h 30m to 1h 45m). I try to export them with the H.264 Blu-ray default settings. The estimated file size for each sequence is about 17-20 GB, but when I export them through AME I get files of 52-55 GB! When I export the same sequences one by one directly from Premiere CS6 I get 17-20 GB files. What's wrong with AME? Same input, same settings, but the files are more than double the size?!
    Thx!

    Here are the screenshots of all export settings related to one of these projects. As you can see, the estimated file size is 16,321 MB; when I click "Export", rendering is about 15% faster and the file is about 17 GB. But when I click "Queue", rendering is slower and the files are about 52 GB.

  • My Captivate 4 files are 10x the size in Captivate 6

    Hi,
    I am trying to upgrade from 4 to 6.
    I load up old files and save them - this works fine for Captivate files based on PPT.
    The ones based on screen capture plus audio bloat to about 10x the size. I can't deal with Captivate files of almost a gig!
    Detail:
    I took a project with 200 slides (128 MB), and:
    1) Loaded it into Captivate 6 and saved it (728 MB)
    2) Cut it down to 5 slides in Captivate 4 (108 MB)
    3) In Captivate 4, cut and pasted the slides into a new project and saved it (2.8 MB)
    4) Then loaded each into Captivate 6 and saved it (15 MB)
    5) Repeated 2, 3 and 4 with 50 slides and got 10x the size
    Any idea what is going on?
    Hugh

    Most likely it's because Captivate 4 was predominantly AS2 and "old architecture", whereas Captivate 6 is AS3 only.
    There is a discussion here that shows the same problem: http://captivate.adobe.com/captivate/topics/captivate_6_file_size_bloat
    If it were me, I would probably try to convert my Captivate 4 project to Captivate 5 and then from Captivate 5 to Captivate 6 to see if that helps.
    /Michael
    www.cpguru.com - Adobe Captivate Widgets, Tutorials, Tips and Tricks and much more..

  • Immediate help required: log files are growing bigger than 1 GB

    oc4j_DBConsole_mbcvizpilot3.mbc.uae_vizrtdb\log

    Do as follows:
    - stop EM
    - rename log directory to log.old
    - create a new log directory
    - restart EM
    - check that new log files have been created in the new log directory
    - after checking that everything works fine you can remove the old directory.
    You shouldn't have any problem, at least I didn't have any...

  • Deduplication: how to identify which files are "in-policy" (and which are not)?

    Dear experts,
    in eventlog I find the following information for one of my deduplicated volumes:
    Volume: F: (\\?\Volume{9794e270-fb08-487e-979b-8a08fa9ca311}\)
    Error code: 0x0
    Error message:
    Savings rate: 24
    Saved space: 853327714684
    Volume used space: 2681869406208
    Volume free space: 66771247104
    Optimized file count: 1199618
    In-policy file count: 1188049
    Job processed space (bytes): 0
    Job elapsed time (seconds): 2
    Job throughput (MB/second): 0
    Some things look strange to me: 
    - The savings rate is only 24% which looks odd because I get about 40% on other file shares with similar characteristics.
    - The "Optimized file count" is higher than the "in-policy" file count.
    - The volume has about 3.5 million files but only 1.1 million files are "in-policy".
    My understanding of "in-policy" files is that all files are in-policy which are older than 3 days (if not configured differently, which is not the case here), which are not in an excluded folder (nothing configured here), and which have no excluded file extension (nothing configured here apart from the defaults). Right?
    I'm sure that only a small fraction of the files on this volume are younger than three days, so something seems to be wrong there, but I have no idea how to find the cause of the problem.
    Is there any way to identify the files (full path) which are considered to be not "in-policy"?
    How can I find out if the optimization process really looks at all files - and doesn't skip a million?
    Thanks!
    Regards
    Christoph

    Mandy, thanks for your feedback! I've read these TechNet articles, but I couldn't find anything really helpful for my problem there. To make investigating the problem easier I did the following:
    - created a brand new empty volume (x:) on the same server
    - enabled data deduplication on that volume with a minimal file age of 0 days
    - copied about 2 GB of data (15,450 files) from the problem volume to this new volume
    - started "Start-DedupJob x: -Type Optimization -Memory 50"
    After a few seconds that job completed and I got the following result:
    Volume: X: (\\?\Volume{d1ba9ed8-28f1-44d1-9535-fe603d6a70c6}\)
    Error code: 0x0
    Error message:
    Savings rate: 0
    Saved space: 12360718
    Volume used space: 3091664896
    Volume free space: 7610097664
    Optimized file count: 153
    In-policy file count: 153
    Job processed space (bytes): 46095214
    Job elapsed time (seconds): 9
    Job throughput (MB/second): 4.88
    The question is: why are only 153 files in-policy?
    All files (of more than 15000) are of course "older than 0 days", there are no excluded folders configured, there are no excluded extensions configured, no file has one of the default excluded extensions, and more than 1000 files are bigger than 100 kB (the minimal size for deduplication is 32 kB). So what makes only 153 files qualify for deduplication?
    My impression is that there are further undocumented requirements a file has to meet to qualify for deduplication. But if no detailed log of an optimization run is available, how can I find out what's going on?
    (Or is something going wrong on my server?)
    Kind Regards
    Christoph

  • Opening MDF files generated with Vector CANape v7.0 in DIAdem?

    Hi all,
    I have a Vector CANape v7.0 data acquisition system that generates files in MDF format. I found the DIAdem DataPlugin for MDF files and installed it. I can open the files and see the parameter names, but all the data is 0 (and it should not be).
    The plugin looks up to date and the description mentions MDF version 3.0 & 3.2.  We have CANape version 7.0.  How do those numbers correspond?  How can I open this file correctly?
    Plugin Description: 
    This DataPlugin supports reading/importing of Bosch MDF data files, ETAS INCA® MDF data, and data from Vector CANape® and other Vector products. The DataPlugin supports MDF files up to version 3.2. It also supports writing of MDF 3.0 and 3.2 files. Writing behavior is controlled by the root property 'version'; set version to "MDF 3.2" to write MDF 3.2 files.
    The Measurement Data Format (MDF) is well established in the automotive industry for recording, exchanging, and post-measurement analysis. MDF files are produced e.g. by INCA® from ETAS, and by CANape® and other products from Vector.
    Thanks- Dshow
    P.S.  I have tried this on Diadem v10.1 and 11 with the same results.  Thx

    Hello Dshow,
    Since I don't have any MDF files to test this with, I can't help with your immediate question. I can offer an alternative though:
    With version 11.1, DIAdem has gained the ability to load CAN files directly through the "CAN Converter" that can be found in the NAVIGATOR "File" menu. You can test this with the free evaluation version that can be found at http://www.ni.com/diadem/ - this might take care of the issue you are having. It also offers a number of additional features you might be interested in ...
    Best regards,
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • .mp4 and .mov files are just LARGE

    Sorry, without posting all the encoding details: is there an obvious reason why my .mp4 and .mov files are ~10x the size they should be after AME? I'm talking about files that are 30 MB .mp4s before and 300 MB .movs afterwards. If you need details I'll post them. Thanks.

    File size is determined primarily by only two factors: duration and bitrate. The codec used, resolution, frame rate, etc. all have little or no effect on file size.
    If you want a smaller file, one or the other of those has to be reduced.
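    As a rough worked example (all numbers illustrative): file size ≈ bitrate × duration ÷ 8. A 10 Mbps export running 4 minutes comes to about 10,000,000 × 240 ÷ 8 = 300,000,000 bytes, roughly 300 MB, while the same clip at 1 Mbps comes to roughly 30 MB. A 10x size difference therefore almost always means a 10x bitrate difference somewhere in the export settings.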

  • How to convert mkd mdf file to regular file format

    I am pretty new to video converting. I just wonder if anyone knows which software could convert these two file formats, mkd and mdf, to something else, like avi or mpg.
    I am using a Mac mini G4, OS X 10.4.
    pls help

    mkd files are CAD drawing files; you would need a CAD application to open them.
    mdf files are disc image files used in system backups;
    you can try this: http://www.magiciso.com/download.htm
    They are both Windows file types, so you need a PC to open them, and
    these files have nothing to do with video.

  • I noticed that when I develop in Lightroom 5 the raw files are larger and the finished product is 80 percent smaller

    Anybody have an idea why, or is that the norm?
    My raw file was 11.2 and my JPEG was 1.26.

    The RAW files are the same size as the camera made them. Lightroom doesn't change the file size or image size.
    The JPG size depends on a dozen or so things, some of which you can control and some of which you cannot, but the JPG is almost always smaller than the corresponding RAW. I see no particular problem with a JPG that is smaller, depending on the export options you chose, and I have never yet seen Lightroom produce an incorrectly sized JPG.
    By the way, numbers normally have units; you did not mention any, so I really can't comment further without knowing the units for the numbers you show.

  • Query to find indexes bigger than their tables

    Team -
    I am looking for a query to find the list of indexes, in a schema or in an entire database, that are bigger than their respective tables.
    Db version : Any
    Thanks
    Venkat

    results are the same in my case
    select di.owner, di.index_name, di.table_name
    from   dba_indexes di, dba_segments ds
    where  ds.blocks > (select dt.blocks
                        from   dba_tables dt
                        where  di.owner = dt.owner
                        and    di.leaf_blocks > dt.blocks
                        and    di.table_name = dt.table_name)
    and    ds.segment_name = di.index_name;
    OWNER     INDEX_NAME                      TABLE_NAME
    --------  ------------------------------  ----------------------------
    SYS       I_CON1                          CON$
    SYS       I_OBJAUTH1                      OBJAUTH$
    SYS       I_OBJAUTH2                      OBJAUTH$
    SYS       I_PROCEDUREINFO1                PROCEDUREINFO$
    SYS       I_DEPENDENCY1                   DEPENDENCY$
    SYS       I_ACCESS1                       ACCESS$
    SYS       I_OID1                          OID$
    SYS       I_PROCEDUREC$                   PROCEDUREC$
    SYS       I_PROCEDUREPLSQL$               PROCEDUREPLSQL$
    SYS       I_WARNING_SETTINGS              WARNING_SETTINGS$
    SYS       I_WRI$_OPTSTAT_TAB_OBJ#_ST      WRI$_OPTSTAT_TAB_HISTORY
    SYS       I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST  WRI$_OPTSTAT_HISTGRM_HISTORY
    SYS       WRH$_PGASTAT_PK                 WRH$_PGASTAT
    SYSMAN    MGMT_STRING_METRIC_HISTORY_PK   MGMT_STRING_METRIC_HISTORY
    DBADMIN   TSTNDX                          TSTTBL
    15 rows selected.
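    For comparison, a simpler sketch (untested here, standard dictionary views only) that joins the index segment and the table segment directly and also matches on the segment owner, which the query above omits. Note it ignores partitioned objects, which have one segment per partition:

        select i.owner, i.index_name, i.table_name
        from   dba_indexes  i,
               dba_segments si,   -- segment of the index
               dba_segments st    -- segment of the table
        where  si.owner        = i.owner
        and    si.segment_name = i.index_name
        and    st.owner        = i.table_owner
        and    st.segment_name = i.table_name
        and    si.bytes        > st.bytes;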

  • SQL mdf file size vs ndf

    Hi,
    If multiple database files are created, does the mdf file hold any of the row data, or does it exclusively hold the table definitions?
    If the mdf file exclusively holds table definitions, I'm assuming I can size it quite small?
    Thanks 
    Mr Shaw

    As Olaf said, it completely depends on how and where you create the ndf file.
    If you create additional ndf files in the primary file group itself, then it utilizes both the mdf and ndf file in a round robin fashion and both grow at the same pace.
    If you want, you can segregate the mdf and ndf files by different methods.
    One option is to keep the mdf file only for system objects and put all user objects in ndf files. Check the "Filegroup" section in this article -
    http://www.codeproject.com/Articles/43629/Top-steps-to-optimize-data-access-in-SQL-Serv
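    A minimal T-SQL sketch of that first option, with hypothetical names and paths (MyDb, a DATA filegroup, D:\SQLData). Once the new filegroup is made the default, new user objects land in the ndf while the system objects stay in the mdf:

        ALTER DATABASE MyDb ADD FILEGROUP DATA;

        ALTER DATABASE MyDb
        ADD FILE (NAME = MyDb_Data1,
                  FILENAME = 'D:\SQLData\MyDb_Data1.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP DATA;

        ALTER DATABASE MyDb MODIFY FILEGROUP DATA DEFAULT;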
    Another option is to move just indexes or just a few objects. Check this article for moving data from mdf to ndf -
    http://blogs.msdn.com/b/sqlserverfaq/archive/2011/08/02/moving-data-from-the-mdf-file-to-the-ndf-file-s.aspx
    Regards, Ashwin Menon. My blog - http://sqllearnings.com

  • Hi, I am using HP11 and iPlanet Web Server. When trying to upload files over HTTP using FORM ENCTYPE="multipart/form-data" that are bigger than a few kilobytes I get a 408 error (client timeout).

    Hi, I am using HP11 and iPlanet Web Server. When trying to upload files over HTTP using FORM ENCTYPE="multipart/form-data" that are bigger than a few kilobytes I get a 408 error (client timeout). It is as if the server has decided that the client timed out during the file upload. The default setting is 30 seconds for AcceptTimeout in the magnus.conf file. This should be ample to get the file across; even increasing it to 2 minutes just produces the same error after 2 minutes. Any help appreciated. Apologies if this is not the correct forum for this, I couldn't see one for iPlanet and Web. Many thanks, Kieran.

    Hi,
    You didn't mention which version of iWS you are using; follow these steps.
    (1) Go to the Web Server Administration Server and select the server you want to manage.
    (2) Select Preferences >> Performance Tuning.
    (3) Set HTTP Persistent Connection Timeout to a value of your choice (e.g. 180 sec for three minutes).
    (4) Apply the changes and restart the server.
    * Setting the timeout to a lower value, however, may prevent the transfer of large files, as the timeout does not refer to the time the connection has been idle. For example, if you are using a 2400 baud modem and the request timeout is set to 180 seconds, then the maximum file size that can be transferred before the connection is closed is 432,000 bits (2400 multiplied by 180), i.e. only about 54 KB.
    Regards
    T.Raghulan
    [email protected]

  • Existing replica MDF file has grown bigger than a new replica install

    Please, I need help here. I would really appreciate your input.
    Here are my scenarios:
    We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines and they replicate to the master database. It has 3 subscriptions subscribed to the publications on the master db.
    1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool) the mdf file has grown to 33 GB and the ldf to 41 GB. In SQL Server Management Studio I right-clicked and checked the properties of the local database: the overall size is around 84 GB with little free space available.
    2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool) the mdf file has grown to 49 GB and the ldf to 41 GB. In SQL Server Management Studio I right-clicked and checked the properties of the local database: the overall size is around 90 GB with 16 GB free space available.
    3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it, and did the initial synchronization using the replmerge tool. The mdf file has grown to 49 GB and the ldf to 41 GB. In SQL Server Management Studio I right-clicked and checked the properties of the local database: the overall size is around 90 GB with 16 GB free space available.
    Why is it allocating the space differently? This is affecting our initial replica setup times. Any input will be greatly appreciated.
    Thanks,
    Asha.

    https://technet.microsoft.com/en-us/library/ms151791(v=sql.110).aspx
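    That article covers the replication side. To see where the space actually went on each replica, a query along these lines against the local database should help (standard catalog views; sizes in MB):

        SELECT name,
               type_desc,
               size / 128 AS allocated_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
        FROM sys.database_files;

    Comparing used_mb with allocated_mb shows whether the extra gigabytes are real data or just preallocated free space (for example from different autogrowth or initial-size settings between the installs).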

  • All the secondary database files are generated with .mdf extensions

    Hi,
    I have set up a DR server with a SQL database. The problem is that when it creates a new database on the DR server, all the secondary files, which have the .ndf extension on the primary server, get created as .mdf files instead of .ndf files.
    Can anybody tell me how to solve this problem?
    Will it create any problem when the secondary (DR server) acts as the primary server in case of failure?

    Hello,
    You can easily rename the files by following this procedure (see note 151603 for more details):
    - detach the database from the server
    - rename the files to the naming convention you like
    - attach the database with the new names again
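    A sketch of that procedure in T-SQL, with a hypothetical database name and paths (the actual rename happens in the file system between the detach and the attach):

        USE master;
        GO
        EXEC sp_detach_db @dbname = N'PRD';
        GO
        -- rename the physical file(s) in the file system now,
        -- e.g. PRD_data2.mdf -> PRD_data2.ndf
        CREATE DATABASE PRD ON
            (FILENAME = N'D:\SQLData\PRD_data1.mdf'),
            (FILENAME = N'D:\SQLData\PRD_data2.ndf')
        FOR ATTACH;
        GO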
    I do not know exactly what you mean by DR server, but I assume you mean
    log shipping or database mirroring. Neither is affected by the different file names, as they use only the internal database name (e.g. PRD).
    Regards
      Clas
