Best practice for expanding partition size

Hi,
I am just wondering what the best practice is for expanding a partition. Currently the server I am working on has its partitions configured as below:
Filesystem                        1K-blocks      Used  Available Use% Mounted on
/dev/mapper/VolGroup01-LogVol00     5046848   4630796     155548  97% /
/dev/sda1                            101086     50293      45574  53% /boot
tmpfs                               4087296   2388360    1698936  59% /dev/shm
/dev/sda3                           5162828    141544    4759024   3% /tmp
/dev/sdb1                           1923068    734956    1090424  41% /fs0
/dev/sdb2                           1923084     70908    1754488   4% /fs1
/dev/sdc1                          89406276  68095788   16768916  81% /fs2
/dev/sdc2                          13804204   7061912    6041056  54% /fs3
/dev/sdb3                           6474880   4509556    1636416  74% /fs4
Let's say I want to increase /fs2, which is /dev/sdc1, by another 15GB. I can go into VMware and increase the disk size by 20GB, but how do I go about increasing the partition size without affecting the other partitions?
Most of the instructions I see online basically create a new partition, but I want to avoid creating a new partition. Is it possible to do this?
Thanks.

You cannot increase a partition on a disk without affecting other partitions on the same disk unless it is the last partition. You would have to completely re-organize your partitions and restore from backup. You can, however, extend partitions that are under LVM control: add a free LVM-controlled partition or disk and assign it to any existing LVM volume in order to increase its capacity. There is plenty of documentation and info available on the web. If you plan to increase your LVM root volume you need to boot from external media.
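For the LVM route, a rough sketch of the commands, assuming a hypothetical new disk /dev/sdd and a hypothetical non-root logical volume LogVol02 in the existing VolGroup01 (the resize2fs step also assumes an ext2/ext3 filesystem):
    pvcreate /dev/sdd                          # initialise the new disk as an LVM physical volume
    vgextend VolGroup01 /dev/sdd               # add it to the existing volume group
    lvextend -L +15G /dev/VolGroup01/LogVol02  # grow the logical volume by 15 GB
    resize2fs /dev/VolGroup01/LogVol02         # grow the filesystem to fill the larger volume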
Since you are using VMware there is actually no need to create multiple partitions on any disk. You can simply put each partition on its own virtual disk. Then, after you increase the underlying virtual disk, you can use "lvextend" for devices under LVM, or "gparted" to modify the partition, and use the "resize2fs" command to adjust the file system, even while the volume is mounted and online.
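For a plain partition like /dev/sdc1, once the underlying virtual disk has been grown in VMware, the rough outline would be the following; the rescan path is the usual Linux SCSI rescan hook, the device names should be adjusted to your system, and online growing assumes a filesystem that supports it (e.g. ext3):
    echo 1 > /sys/block/sdc/device/rescan   # make the kernel notice the larger disk
    # enlarge sdc1 with gparted (or delete and recreate it in fdisk with the
    # same starting sector and a larger end), then grow the filesystem:
    resize2fs /dev/sdc1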

Similar Messages

  • Best Practices for Keeping Library Size Minimal

My library file sizes are getting out of control. I have multiple libraries that are now over 3TB in size!
    My question is, what are the best practices in keeping these to a manageable size?
I am using FCPX 10.2. I have three cameras (2x Sony Handycam PJ540 and 1x GoPro Hero 4 Black).
When I import my PJ540 videos they come in as compressed mp4 files, and I have always chosen to transcode the videos when I import them. Obviously this is why my library sizes are getting out of hand. How do people deal with this? Do they simply import the videos, choose not to transcode them, and only transcode them when they need to work on them? Do they leave the files compressed, work with them that way, and let the transcoding happen when exporting the final project?
    How do you deal with this?
As for getting my existing library sizes down, should I just "show package contents" on my libraries and start deleting the transcoded versions, or is there a safer way to do this within FCPX?
Thank you in advance for your help.

No. Video isn't compressed like compressing a document. When you compress a document you're just packing the data more tightly. When you compress video you do it basically by throwing information away. Once a video file is compressed, and all video files are heavily compressed in the camera, that's not recoverable. That information is gone. The best you can do is convert it into a format that will not deteriorate further as the file is recompressed. Every time you compress a file, especially to heavily compressed formats, more and more information is discarded. The more you do this the worse the image gets. Transcoding converts the media to a high-resolution, high-data-rate format that can be manipulated considerably without loss and can go through multiple generations without loss. You can't go to second-generation H.264 MPEG-4 without discernible loss in quality.

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design consideration for replicating a partitioned table.
    1. 4 Partitioned tables (1 master table with foreign key constraints to 3 tables) partitioned based on monthly YYYYMM
    2. 1 table has a XML column in it
    3. Monthly partition switch to remove old data; since the tables have foreign key constraints, the constraints are disabled until the switch is complete
    4. 1 month partitioned data is 60 GB
    Having said the above, I want to create a copy of the same tables on a different server.
    I can think of:
    1. Transactional replication, but then I am worried about the XML column, the snapshot size, and whether the ALTER ... SWITCH will do the same thing on the subscriber or turn into a row-by-row delete.
    2. Log shipping in standby mode every 15 minutes, but then it will be for the entire database, and I have another monthly partitioned table which is worth 250 GB.
    3. Replicating the partitioned table as non-partitioned; in that case, how will the ALTER ... SWITCH work? Is it possible to ignore deletes when setting up the replication?
    4. SSIS or a stored procedure method of moving data on a daily basis.
    5. Backup and restore on a daily basis, but this will not work when the source partition is removed.
    Ganesh

    Please refer to
    http://msdn.microsoft.com/en-us/library/cc280940.aspx

  • What is the best practice on mailbox database size in exchange 2013

    Hi, 
    does anybody have any links to good sites that give some pros/cons when it comes to mailbox database sizes in Exchange 2013? I've tried to Google it but haven't found any good answers. I would like to know whether I really need more than 5 mailbox databases in my Exchange environment.

    Hi
       As far as I know, 2 TB is the recommended maximum database size for Exchange 2013 databases.
    Terence Yu
    TechNet Community Support

  • Best practice on mailbox database size & how many servers we need to deploy Exchange Server 2013

    Dear all,
    We have an environment that runs Microsoft Exchange Server 2007 with the following specification:
    4 servers: Hub&CAS1, Hub&CAS2, Mailbox1, Mailbox2
    Operating System : Microsoft Windows Server 2003 R2 Enterprise x64
    6 mailbox databases
    1500 Mailboxes
    We need to upgrade our Exchange servers from 2007 to 2013 and implement the following details:
    1500 mailboxes
    10GB or 15GB mailbox quota for each user
    How many servers and databases are required for this migration?
    Number of the servers:
    Number of the databases:
    Size of each database:
    Many thanks.

    You will also need to check the server role requirements in Exchange 2013. Please go through this link to calculate the server role requirements: http://blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx
    2 TB is the recommended maximum database size for Exchange 2013 databases.
    Here is the complete checklist to upgrade from Exchange 2007 to 2013: http://technet.microsoft.com/en-us/library/ff805032%28v=exchg.150%29.aspx
    Meanwhile, to reduce the risk and time consumed during the migration process, you can have a look at this application (http://www.exchangemigrationtool.com/), which could also be a good approach for 1500 users. It can help ensure data security during the migration between Exchange 2007 and 2013.

  • Expanding Partition Size for Windows

    I used Boot Camp to set up a partition on my computer for Windows, spent some time tweaking... and then found that I was nearly out of space after selecting a 40GB partition. So, as much as it pained me to do so, I deleted the partition and tried to start all over. But now, on that screen of the Boot Camp setup, it won't allow me to specify a partition larger than 44GB, and as you can see from the screenshot below, I have plenty of excess capacity on my computer (after deleting several things, emptying my trash and rebooting).
    Any ideas on why it won't "slide" any further?

    I'm afraid not. The easy way is to make a bootable backup of your hard drive. Boot from the backup, erase the internal drive, then restore the backup from the backup drive. The tool for that is Disk Utility.
    Clone the internal drive to an external backup drive.
    Boot from the Recovery HD on the newly made clone.
    Erase the internal drive.
    Restore the clone to the internal drive.
    Clone Yosemite, Mavericks, Lion/Mountain Lion using Restore Option of Disk Utility
    Boot to the Recovery HD:
    Restart the computer and, after the chime, press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and, after the chime, press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward-pointing arrow button.
         1. Select Disk Utility from the main menu then press the Continue button.
         2. Select the destination volume from the left side list.
         3. Click on the Restore tab in the DU main window.
         4. Select the destination volume from the left side list and drag it to the Destination entry field.
         5. Select the source volume from the left side list and drag it to the Source entry field.
         6. Double-check you got it right, then click on the Restore button.
    Destination means the external backup drive. Source means the internal startup drive.
    Boot Using OPTION key:
      1. Restart the computer.
      2. Immediately after the chime press and hold down the
          "OPTION" key.
      3. Release the key when the boot manager appears.
      4. Select the disk icon of the external drive's Recovery HD.
      5. Click on the arrow button below the icon.
    When the Utility Menu appears select Disk Utility and click on the Continue button. Select the OS X volume on the internal drive from the sidebar list, click on the Erase tab in Disk Utility's main window, set the format to Mac OS Extended, Journaled and click on the Erase button.
    Restore the clone
         1. Select the destination volume from the left side list.
         2. Click on the Restore tab in the DU main window.
         3. Select the destination volume from the left side list and drag it to the Destination entry field.
         4. Select the source volume from the left side list and drag it to the Source entry field.
         5. Double-check you got it right, then click on the Restore button.
    Source means the external backup drive. Destination means the internal startup drive.
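    If you prefer Terminal over the Disk Utility GUI, a roughly equivalent sequence using asr and diskutil might look like the following; the volume names and the /dev/disk0s2 identifier are examples only, so check yours with "diskutil list" first:
         sudo asr restore --source "/Volumes/Macintosh HD" --target /Volumes/Backup --erase   # clone internal to external (erases the target)
         # ...boot from the backup drive's Recovery HD, then erase the internal volume:
         diskutil eraseVolume JHFS+ "Macintosh HD" /dev/disk0s2
         # ...and restore the clone back onto the internal volume:
         sudo asr restore --source /Volumes/Backup --target "/Volumes/Macintosh HD" --erase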

  • Disk Partition Sizes - Best Practices

    In the old days of SunOS 4.x we used to spend a lot of time figuring out disk slice sizes to make sure we maximized the space on the old 200 MB disks.
    Then, once we started using 1 to 4 GB disks, we used to just make one big slice and not worry about things filling up, because when it was full, it was full.
    Now we have these huge 32 GB to 100+ GB drives and things become tricky again. I was wondering how other large companies are slicing up their drives and what the best practices are.
    The sizes I am interested in are:
    8, 18, 36, 72, and 146 GB disks
    My thought is that the 8 GB disk is still small enough to use as one complete slice, but the others may need some thought.
    If some people could let me know what they do for:
    /var
    /opt
    /usr
    I know that we should handle non-DB machines differently than DB machines, so any advice on that would be great.
    Thanks in advance.

    Well, normally when I set up systems I don't like separate /usr or /var or anything, so I lump everything into root "/". Since you don't have any other filesystems, your space doesn't get "locked" and wasted, and it also makes it easier to do manual backups like ufsdumps.
    But if you do want to partition, the exceptions would be /var and /opt, as a lot of patches and software normally default to these two as the base installation directory. As I said, though, normally I throw everything into "/".
    Then there are slices 2, 3, 4 and 7, which I normally don't install anything on.
    Slice 2, "backup", represents the entire disk. It can be used, but normally nobody touches it; if you do use it, I think there will be a problem if you want to do things like 'dd'.
    I'm not sure about now, but in the old days when you used things like Volume Manager, VM would take slices 3 and 4 as its private/public regions. If you had data on them prior to encapsulation it would move the data to other slices for you, but if there were no free slices to move the data from slices 3 and 4 to, the VM install would fail; and if you wanted to unencapsulate the root disk later on you would have a confusing problem, since your data had originally been moved elsewhere from slices 3 and 4 during VM encapsulation.
    As for slice 7, by default when I first install the OS I either leave it blank for future use or allocate 10 MB to it. This is in case I want to mirror the disk using DiskSuite, so I normally designate slice 7 of all the disks as my metadb. If you use up all the space and leave slice 7 blank, you can still allocate space to it for the metadb by allocating a few sectors to it from the "swap" area in CD-ROM single-user mode.
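    A rough sketch of that slice-7 metadb step, assuming a hypothetical two-disk mirror with example device names c0t0d0 and c0t1d0:
         prtvtoc /dev/rdsk/c0t0d0s2            # check the current slice layout first
         metadb -a -f -c 3 c0t0d0s7 c0t1d0s7   # create three state database replicas on slice 7 of each disk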

  • What is the best practice for JCO connection settings for a DC project

    When multiple users are using the system, data is missing from Web Dynpro screens. This seems to be due to running out of connections to pull data.
    I have a Web Dynpro project based on component development using DCs. I have one main DC which uses other DCs as lookup windows. All DCs have their own applications. Also, inside the main DC screen the data is populated from multiple function modules.
    There are about 7 lookup DC apps accessed by the user.
    I have created JCO destinations with the following settings:
    Max Pool Size 20
    Max Number of Connections 200
    Before I moved to a DC project it was a regular Web Dynpro project with one application, and all lookup windows were inside the same project. I never had this issue with the same settings.
    Now, maybe because of the DC usage and the increase in applications, I am running out of connections.
    Has anyone faced this problem? Can anyone suggest the best practice for sizing JCO connections?
    It does not make any sense that I am seeing this issue with just 15-20 concurrent users.
    All lookup components are destroyed after use and are created manually as needed. What else can I do to manage connections?
    Any advise is greatly appreciated.
    Thanks

    Hi Ravi,
    Try to go through this blog; it's very helpful.
    [Web Dynpro Best Practices: How to Configure the JCo Destination Settings|http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417600)ID2054522350DB01207252403570931395End?blog=/pub/wlg/1216]
    Hope It will help.
    Regards
    Jeetendra

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek advice on how I should partition a 300 GB disk on Solaris 8.x, and what the optimal size would be for each partition.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice", regardless of what others might say. It depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some who assumed that /var was only going to be used for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So when they attempted to install the R&S cluster, they couldn't, because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export file system was supposed to be mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files created in that directory went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own partition of several GB to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM, and assign that partition as the system crash/core dump device (two quick checks for sizing this are sketched at the end of this reply).
    * /usr, or whatever file system it's on, must be big enough on the assumption that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.
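    To size that RAM-sized dump/swap slice mentioned above, two quick checks using standard Solaris commands are:
         prtconf | grep Memory   # how much physical RAM is installed
         swap -l                 # current swap devices and their sizes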

  • BEST PRACTICE TO PARTITION THE HARD DISK

    Can someone please guide me on THE BEST PRACTICE TO PARTITION THE HARD DISK FOR 10G R2 on operating system HP-UX 11
    Thanks,
    Amol

    I/O speed is a basic function of the number of disk controllers available to read and write, the physical speed of the disks, the size of the I/O pipe(s) between the SAN and the server, the size of the SAN cache, and so on.
    Oracle recommends SAME - Stripe And Mirror Everything. This comes in RAID10 and RAID01 flavours. Ideally you want multiple fibre channels between the server and the SAN. Ideally you want these LUNs from the SAN to be seen as raw devices by the server and use these raw devices as ASM devices - running ASM as the volume manager. Etc.
    Performance is not achieved by just partitioning. Or just more memory. Or just a faster CPU. Performance planning and scalability encompass the complete system. All parts. Not just a single aspect like partitioning.
    Especially not partitioning, as a partition is simply a "logical reference" to a "piece" of the disk. I/O performance has very little to do with how many pieces you split a single disk into. That is the management part. It is far more important how you stripe, and whether you use RAID5 instead of a RAID1 flavour, etc.
    So I'm not sure why you are all uppercase about partitioning....

  • Expanding windows 7 partition size ...

    Hello,
    I did some searching around and found several opinions/answers on the subject. I'm not sure which direction to take - one option being Paragon's CampTune software, and the other being the Disk Management utility in Windows 7, which should allow you to expand the disk.
    The reason for me posting is two-fold. I attempted the Disk Management option, and found that the 'Extend Volume' option is not available for my Bootcamped drive. I was wondering if anyone had any thoughts on this and why it wouldn't work.
    Also, any opinions on CampTune would be great.
    What about VMWare Fusion? Could this alone expand a partition size?
    I know this is a topic that has been spoken about (probably quite a bit), but I was hoping to get some more personal feedback. What is the best way to do this? If the answer is start over with Bootcamp, that is also acceptable.
    Thank you,
    n

    Paragon. I'd buy the full Paragon Hard Disk Manager™ 2011 Suite
    That includes cloning Windows to an SSD or to another drive, CampTune (resizing), and HFS access. I used it over the weekend to move Windows to an SSD (but that was using dedicated drives, not a hybrid Windows-on-Mac setup). It took all of 20 minutes and the system was able to boot.
    So you can always try backing up both OS systems, and then restoring each.
    I'd say post on their forum, but threads and questions seem to go unanswered, there is very little activity and input, and of course it has been quiet over the two weeks of holidays.
    VMs are a totally different animal and have nothing to do with partitioning.
    http://www.paragon-software.com/home/hdm-personal/

  • Best practice for setting or detecting screen size?

    Hi All,
    Trying to determine a best practice for setting or detecting the screen size. For PlayBook and iOS I can set them, but for Android the number of devices is too large, so I'd rather detect. My first choice is to use stage.stageHeight and stage.stageWidth. This works fine if I set my stage properties with standard metadata:
    [SWF(height="320", width="480", frameRate="64", backgroundColor="#010101")]
    However, if I use the application descriptor file to set the stage dimensions (as suggested by Christian Cantrell here: http://www.adobe.com/devnet/flash/articles/authoring_for_multiple_screen_sizes.html)
    <initialWindow>
    <aspectRatio>landscape</aspectRatio>
    <autoOrients>false</autoOrients>
    <width>320</width>
    <height>480</height>
    <fullScreen>true</fullScreen>
    </initialWindow>
    Then stage.stageHeight and stage.stageWidth are not the correct numbers when my main class is added to the stage. Sometime after the main class is added to the stage, those numbers become correct. Is there an event I can wait for to know that stage.stageHeight and stage.stageWidth are correct?
    Thanks in advance!

    Hi Lee,
    Thanks for the quick response! However, for some reason the heightPercent & widthPercent metadata tags are not working as expected for me.
    I have a wrapper class that I target for compiling, WagErgApplePhone.as where I've got my metadata
    [SWF(heightPercent="100%", widthPercent="100%", frameRate="64", backgroundColor="#010101")]
    sets some stage properties
    stage.quality=StageQuality.LOW;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    stage.align = StageAlign.TOP_LEFT;
    and instantiates my main class
    var main:Main = new Main();
    addChild(main);
    my main class constructor even waits for the stage
    public function Main() {
        if (stage) init();
        else addEventListener(Event.ADDED_TO_STAGE, init);
    }
    In my init function, stage.stageHeight traces out as 375 (expecting 320).
    I have a function which is called via a button press event by the user, and stage.stageHeight traces out correctly (320) there. That's what makes me think that if I wait long enough, I can get the correct stageHeight before init/drawing, but I'm not sure what event to listen for, or if there's another trick.
    If I use Capabilities.screenResolutionX and Capabilities.screenResolutionY the correct values are provided for mobile, but these values are not useful for the desktop and web versions of the app. If there's no other solution, I'll execute different code depending on platform.
    Again, for reference, my app descriptor:
    <initialWindow>
    <aspectRatio>landscape</aspectRatio>
    <autoOrients>false</autoOrients>
    <width>320</width>
    <height>480</height>
    <content>bin-iOS/WagErgApplePhone.swf</content>
    <title>WAG ERG</title>
    <fullScreen>true</fullScreen>
    <renderMode>cpu</renderMode>
    </initialWindow>
    Looking forward to any other ideas to try out, and thanks so much for your thoughts! If you want to really dig in, this is an open-source project at code.google.com/p/wag-erg/

  • ASM on SAN datafile size best practice for performance?

    Is there a 'best practice' for datafile size for performance?
    In our current production we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of midpoint so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries, though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas tape backup and RMAN-->disk backup. I just didn't know whether anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • Lun Size best practice for UC apps and VMWare?

    Hi,
    We have UCS manager v2.1 with FI 6248 direct FC attached to NetApp with plenty of storage.
    Per the following doc, LUN size for UC apps should be 500GB - 1.5TB, with 4 to 8 VMs per LUN.
    http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements#Best_Practices_for_Storage_Array_LUNs_for_Unified_Communications_Applications
    We have four B200 M3 blades, and 3 to 4 UC apps (CUCM, Unity, UCCX) will be hosted on each blade. We may add more VMs to the blades in the future.
    I am thinking of four 1 TB LUNs, one for each blade (actually 8 LUNs in total: 4 boot LUNs for ESXi and 4 for UC apps).
    What is the best practice (or common deployment) for LUN size and design?
    Thanks,
    Harry

    UC apps need low IO, nothing special; the reference VMware LUN design is OK.

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
    In our project we have planned some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file is about 2 to 3 KB) through the automatic import process.
    Below are the current settings in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 5
    Records Per Minute Processed = 37
    And we made the below settings in the development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 25
    Records Per Minute Processed = 111
    After making the above changes the import process improved, but we want an expert opinion before making these changes in production, because there is a huge difference between what is in production and what we changed in development.
    thanks in advance,
    Regards
    Ajay

    Hi Ajay,
    The SAP default values are as below:
    Chunk Size = 50000
    No. of Chunks Processed in Parallel = 5
    File Aggregation: depends largely on the data. If you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot instead of sending one record at a time.
    Records Per Minute Processed: same as above.
    Regards,
    Vag Vignesh Shenoy
