Large VMFS Datastore

Hi all,
I need to set up a DR server to hold our production replicas, and it also needs to run 5 production VMs.
I have a DL380 G7 with 16 x 900 GB drives.
Should I create one RAID 6 array (12600 GB usable) or two RAID 6 arrays
(2 x 5400 GB usable)?
Pros and cons of each?
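For reference, a quick sketch of the usable-capacity arithmetic behind the two layouts (a rough estimate that assumes RAID 6 costs two drives' worth of parity per array):

$driveGB = 900
# Option 1: one RAID 6 array across all 16 drives -> (16 - 2) x 900
$oneArray = (16 - 2) * $driveGB     # 12600 GB usable
# Option 2: two RAID 6 arrays of 8 drives each -> (8 - 2) x 900 per array
$perArray = (8 - 2) * $driveGB      # 5400 GB usable per array, 10800 GB total
"One array: $oneArray GB; two arrays: $perArray GB each"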
This topic first appeared in the Spiceworks Community

Similar Messages

  • Two ESX 3.x hosts mounting to same VMFS datastore on SAN

    I'm running two ESX 3.x servers with Virtual Center. I know that it is possible to create redundancy or high availability by configuring a clustered environment. For HA to work, both ESX hosts must mount the same datastore on the SAN. When one ESX host goes down, the other ESX host can take over the VM. How do I configure the second ESX 3.5 server to mount an existing VMFS datastore while the primary ESX host still maintains its master status? Could someone explain the configuration steps?
    Thanks

    Hello and welcome!
    First question: do you have vCenter Server?
    If not: you cannot build a VMware cluster, and there is little point in shared storage between hosts because you cannot leverage features such as vMotion and HA (and all the dependent features).
    If yes:
    create a datacenter (optional)
    create a cluster
    add the two hosts to the cluster
    present the external storage (FC WWN or iSCSI IQN) to both hosts (remember to zone/mask on the fabric)
    create a VMFS volume from one host; after a rescan the other host will see the same partition(s)
    If the external storage is NFS, mount the NFS export on each host instead; there is no need to format it, as it is NFS. A rough PowerCLI sketch of these steps follows below.
    PS: there is no "master status": a VMware cluster is not a failback cluster by design. VMFS is a clustered file system designed for a small number of big files, and it supports the "VMware way" of doing clustering.
    With HA you can achieve three-nines availability; with FT on 5.5 you can achieve even higher availability for critical VMs (max 4 per host).
    Aleph
    Remember to give points and mark the question as answered.
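    The rough PowerCLI sketch mentioned above (server names, credentials, cluster name and the LUN canonical name are placeholders; adjust for your environment):
    Connect-VIServer -Server vcenter.example.local
    $dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "DC01"      # optional
    $cluster = New-Cluster -Location $dc -Name "Cluster01" -HAEnabled          # enable VMware HA
    Add-VMHost -Name esx01.example.local -Location $cluster -User root -Password 'xxx' -Force
    Add-VMHost -Name esx02.example.local -Location $cluster -User root -Password 'xxx' -Force
    # Format the shared LUN once from the first host...
    New-Datastore -Vmfs -VMHost (Get-VMHost esx01.example.local) -Name "SharedVMFS01" -Path "naa.xxxxxxxxxxxxxxxx"
    # ...then rescan the second host so it picks up the same VMFS volume
    Get-VMHostStorage -VMHost (Get-VMHost esx02.example.local) -RescanAllHba -RescanVmfs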

  • MaxDB (7.8.02.27) installation error: Invalid parameter Data Volumes

    Hi all,
    I get errors during the installation procedure of MaxDB 7.8.02.27 at the point where I define the database volume paths (step 4b).
    If I use the default values, the database gets created without errors.
    But if I make changes, e.g. to the size of the data volume, the following error appears when I click next:
    "Invalid value for data volume size: data size of 0KB does not make sense Specify useful sizes for your log volumes".
    If I create 2 data files with different names (DISKD0001, DISKD0002), I get an error message that I have used one filename twice.
    Now it's getting strange: if I use the previous button to move one step back and then use the next button again, it sometimes
    accepts the settings and I'm able to start the installation, and the database gets created.
    I'm working remotely on a VMware server running Windows 2008 R2 (EN) and I'm using the x64 package of MaxDB.
    Any ideas?
    Thanks
    Martin Schneider

    Hi Martin,
    A general system error occurs if a *.vmdk file is larger than the maximum size supported by the datastore. The requested size has to be reduced to the nearest acceptable value for the block size that was used to create the datastore.
    You may need to choose a larger block size when creating the VMFS datastore.
    Hope this is useful.
    Regards,
    Deepak Kori
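    If it helps, here is a small PowerCLI sketch for checking this (the datastore name is a placeholder); on VMFS-3 the block size caps the largest single VMDK, roughly 1 MB block -> 256 GB file, up to 8 MB block -> 2 TB file:
    $ds = Get-Datastore -Name "Datastore01"
    $ds.ExtensionData.Info.Vmfs.BlockSizeMb   # block size of the VMFS volume, in MB
    $ds.ExtensionData.Info.MaxFileSize        # largest single file (e.g. a .vmdk) the volume accepts, in bytes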

  • Dedupe on NetApp and disk reclaim on VMware. Odd results

    Hi, I am currently in the process of reclaiming disk space from our NetApp FAS8020 array running 7-Mode 8.2.1. All of our FlexVols are VMware datastores using VMFS, and all of the volumes are thin provisioned. None of our datastores are presented over NFS. At the VMware layer we have a mixture of VMs using thin and thick provisioned disks; any new VMs are normally created with thin provisioned disks. Our VMware environment is ESXi 5.0.0 U3 and we also use VSC 4.2.2.
    This has been quite a journey for us, and after a number of hurdles we are now able to see volume space being reclaimed on the NetApp, with the free space returning to the aggregate. To get this all working we had to perform a few steps provided by NetApp and VMware. If we used NFS we could have used the disk reclaim feature in VSC, but because that only works with NFS volumes it wasn't an option for us.
    NETAPP - Set the LUN option space_alloc to enabled - https://kb.netapp.com/support/index?page=content&id=3013572. This is disabled by default on any version of ONTAP.
    VMWARE - Set BlockDelete to 1 on each ESXi host in the cluster - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427. This is disabled by default on the version of ESXi we are running.
    VMWARE - Rescan the VMFS datastores in VMware and update the VSC settings for each host (set the recommended host settings). Once done, check that the Delete status shows as 'supported': esxcli storage core device vaai status get -d naa - http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2014849 (a PowerCLI sketch for this check is at the end of this thread).
    VMWARE - Log in to the ESXi host, go to /vmfs/volumes and the datastore where you want to run disk reclaim, and run vmkfstools -y percentage_of_deleted_blocks_to_reclaim
    NETAPP - Run sis start -s -d -o /vol/lun - this reruns deduplication, deletes the existing checkpoints and starts afresh.
    While I believe we are seeing savings on the volumes, we are not seeing the savings at the LUN layer in NetApp. The volume usage comes down, and with dedupe on I would expect the volume usage to be lower than the datastore usage, but the LUN usage doesn't go down. Does anyone know why this might be the case? Both our FlexVols and LUNs are thin provisioned, and space reservation is unchecked on the LUN.

    Hi,
    Simple answer is yes. It's just the matter of visibility of the disks on the virtual servers. You need to configure the disks appropriately so that some of them are accessible from both nodes e.g. OCR or Voting disks and some are local, but many of the answers depend on the setup that you are going to choose.
    Regards,
    Jarek
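    As mentioned in the question above, a small PowerCLI sketch for confirming that the Delete (UNMAP) primitive reports as Supported on every host (the cluster name is a placeholder, a recent PowerCLI is assumed for the -V2 interface, and the output property names may differ slightly between versions):
    foreach ($vmhost in Get-Cluster "ProdCluster" | Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $vmhost -V2
        $esxcli.storage.core.device.vaai.status.get.Invoke() |
            Select-Object @{N='Host';E={$vmhost.Name}}, Device, DeleteStatus
    }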

  • Keeping two very large datastores in sync

    I'm looking at options for keeping a very large (potentially 400GB) TimesTen (11.2.2.5) datastore in sync between a Production server and a [warm] Standby.
    Replication has been discounted because it doesn't support compressed tables, nor the types of table our closed-code application is creating (without non-null PKs)
    I've done some testing with smaller datastores to get indicative numbers, and a 7.4GB datastore (according to dssize) resulted in a 35GB backup set (using ttBackup -type fileIncrOrFull). Is that large increase in volume expected, and would it extrapolate up for a 400GB data store (2TB backup set??)?
    I've seen that there are Incremental backups, but to maintain our standby as warm, we'll be restoring these backups and from what I'd read & tested only a ttDestroy/ttRestore is possible, i.e. complete restore of the complete DSN each time, which is time consuming. Am I missing a smarter way of doing this?
    Other than building our application to keep the two datastores in sync, are there any other tricks we can use to efficiently keep the two datastores in sync?
    Random last question - I see "datastore" and "database" (and to an extent, "DSN") used apparently interchangeably - are they the same thing in TimesTen?
    Update: the 35GB compresses down with 7za to just over 2.2GB, but takes 5.5 hours to do so. If I take a standalone fileFull backup it is just 7.4GB on disk, and completes faster too.
    thanks,
    rmoff.
    Message was edited by: rmoff - add additional detail

    This must be an Exalytics system, right? I ask this because compressed tables are not licensed for use outside of an Exalytics system...
    As you note, currently replication is not possible in an Exalytics environment, but that is likely to change in the future and then it will definitely be the preferred mechanism for this. There is not really any other viable way to do this other than through the application.
    With regard to your specific questions:
    1.   A backup consists primarily of the most recent checkpoint file plus all log files/records that are newer than that file. So, to minimise the size of a full backup ensure
         that a checkpoint occurs (for example 'call ttCkpt' from a ttIsql session) immediately prior to starting the backup.
    2.   No, only complete restore is possible from an incremental backup set. Also note that due to the large amount of rollforward needed, restoring a large incremental backup set may take quite a long time. Backup and restore are not really intended for this purpose.
    3.   If you cannot use replication then some kind of application level sync is your only option.
    4.   Datastore and database mean the same thing - a physical TimesTen database. We prefer the term database nowadays; datastore is a legacy term. A DSN is a different thing (Data Source Name) and should not be used interchangeably with datastore/database. A DSN is a logical entity that defines the attributes for a database and how to connect to it. It is not the same as a database.
    Chris
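    A minimal sketch of point 1, run from PowerShell (the DSN and backup directory are placeholders; check the ttIsql/ttBackup options against your TimesTen release):
    & ttIsql -connStr "DSN=prod_ds" -e "call ttCkpt; quit;"     # force a checkpoint first
    & ttBackup -type fileFull -dir E:\tt_backup prod_ds         # then take the full file backup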

  • ODI Datastore length differs from the DB length - IKM throws 'value too large'

    When reverse engineered, the ODI datastore shows a different length from the data length in the actual DB.
    ODI datastore column details: char(44)
    Target DB column: varchar2(11 char)
    The I$ table inserts char(44) values into varchar2(11 char) in the target. As the source column value is empty, ODI throws
    "ORA-12899: value too large for column (actual: 44, maximum: 11)".

    Yes, I have reverse engineered the target also.
    Source datatype: varchar2(11 char)
    After reverse engineering
    ODI datastore datatype (source): char(44)
    Target datatype: varchar2(11 char)
    After reverse engineering
    ODI datastore datatype (target): char(44)
    Since the target datastore is char(44) in the ODI datastore and the values in the source column are null/spaces, the IKM inserts them into the target column, which is 11 char, and the above mentioned "value too large" error occurs.
    There are no junk values in the column, and I tried substr(column,1,7) and
    trim functions too, but it does not help.

  • Loading time into memory for a large datastore?

    Are there any analyses/statistics on what the loading time for a TimesTen data store should be according to the size of the data store?
    We have a problem with one of our clients where loading the datastore into memory takes a long time, but only in certain instances does it take this long. The maximum size for the data store is set to 8GB (64-bit AIX with 45GB physical memory). Is it something to do with transactions which are not committed?
    Also, is it advisable to have multiple smaller datastores or one single large datastore?

    When a TimesTen datastore is loaded into memory it has to go through the following steps. If the datastore was shut down (unloaded from memory) cleanly, then the recovery steps essentially are no-ops; if not then they may take a considerable time:
    1. Allocate appropriately sized shared memory segment from the O/S (on some O/S this can take a significant time if the segment is large)
    2. Read the most recent checkpoint file into the shared memory segment from disk. The time for this step depends on the size of the checkpoint file and the sustained read performance of the storage subsystem; a large datastore, slow disks or a lot of I/O contention on the disks can all slow down this step.
    3. Replay all outstanding transaction log files from the point corresponding to the checkpoint until the end of the log stream is reached. Then roll back any still open transactions. If there is a very large amount of log data to replay then this can take quite some time. This step is skipped if the datastore was shut down cleanly.
    4. Any indices that would have been modified during the log replay are dropped and rebuilt. If there are many indices, on large tables, that need to be rebuilt then this step can also take some time. This phase can be done in parallel (see the RecoveryThreads DSN attribute).
    Once these 4 steps have been done the datastore is usable, but if recovery had to be done then we will immediately take a checkpoint which will happen in the background.
    As you can see from the above there are several variables and so it is hard to give general metrics. For a clean restart (no recovery) then the time should be very close to size of datastore divided by disk sustained read rate.
    The best ways to minimise restart times are to (a) ensure that checkpoints are occurring frequently enough and (b) ensure that the datastore(s) are always shutdown cleanly before e.g. stopping the TimesTen main daemon or rebooting the machine.
    As to whether it is better to have multiple smaller stores or one large one - that depends on several factors.
    - A single large datastore may be more convenient for the application (since all the data is in one place). If the data is split across multiple datastores then transactions cannot span the datastores, and if cross-datastore queries/joins are needed they must be coded in the application.
    - Smaller datastores can be loaded/unloaded/recovered faster than larger datastores but the increased number of datastores could make system management more complex and/or error prone.
    - For very intensive workloads (especially write workloads) on large SMP machines overall better throughput and scalability will be seen from multiple small datastores compared to a single large datastore.
    I hope that helps.
    Chris
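    A small sketch of point (b), run from PowerShell (the DSN is a placeholder; verify the utility options for your release):
    & ttAdmin -ramUnload prod_ds    # unload the datastore from memory cleanly so a final checkpoint is written
    & ttDaemonAdmin -stop           # only then stop the main daemon or reboot the machine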

  • ODI File Datastore

    Hi All,
    I am confused about the physical and logical lengths in an ODI file datastore.
    I have a fixed-width file where a column c2 has datatype string(30).
    I defined that column in the datastore as string > physical length 30 > logical length 30.
    My interface failed with the error:
    "ORA-12899: value too large for column "S0_IDM"."C$_0S0_EAGLE_DWHCDC_CHRG_MST"."C2_CHARGE_DESC" (actual: 31, maximum: 30)"
    When I increased the logical length to 255, the interface worked fine,
    the physical length still being the same 30.
    How are these different?
    Any help on this will be appreciated.
    Thanks and Regards
    Reshma

    This is not from any official documentation, but here is my take after a few moments' thought.
    Everything you do in the ODI designer is based on the logical architecture. Only at runtime is this manifested into a physical implementation, i.e. connection strings are materialized etc. When you perform an integration, ODI generally creates a few temporary tables prefixed with C$, I$ etc. to be able to perform the data movement and transformations required to, for example, populate a target datastore (table). In your example, your flat file will be materialized into such a temporary table before its contents are manipulated (or not) and loaded to the target datastore. When ODI generates this code it uses the logical length in the DDL that creates the temporary table columns; the physical length is ignored.
    Now in your scenario this is not a problem, as constraints such as these do not matter to the physical version of the file, i.e. if you were to write back to the file it would not matter whether you wrote back 255 characters or 31. This could be a problem if you were using database tables and varying the logical vs. physical lengths, but usually you reverse engineer database tables using ODI rather than doing it manually, so this mitigates that.
    Anyway, in short, I think the logical lengths should be taken as representing what will be manifested in the materialization of the temporary objects used to manage/transform data from the source models (C$ tables) and target models (I$ tables), whereas the physical lengths indicate what the underlying physical representation of those models actually is.
    EDIT: After reading a bit of documentation, logical actually represents the character length whereas physical is related to the number of bytes required to store the data. Therefore you could have a situation with multi-byte characters where the physical length is greater than the logical length, but not really the other way around.

  • Target File Size Too Large?

    Hello,
    We have an interface, that takes a Source File-> Transforms it in the Staging Area ->and outputs a Target File.
    The problem is that the target file is much larger than it is supposed to be.
    We do have the 'Truncate Option' turned ON, so it's not duplicate records.
    We think it's the physical and logical lengths that are defined for the target file.
    We think the logical length is far too large, causing substantial spaces between the data columns and thereby increasing the file size.
    We initially had the logical length for the data columns as 12 and we got the following error:
    Arithmetic Overflow error converting numeric to data type numeric.
    When we increased the logical length from 12 to 20, the interface executed fine without errors. But now the target file is just far too large (roughly 1:5).
    Any suggestions to prevent these additional spaces in the target columns??
    Appreciate your inputs!
    Thanks

    Since a file system does not have a 'column length' property, ODI will automatically set a standard column length according to the datatype of the column. In your case, as both your source and target are files, check the max length of each column in your source (e.g. if your file is huge, open the file in Excel and verify the lengths) and set the same logical length for your target file datastore; a small PowerShell sketch for this check follows below.
    Drop the temporary tables (set the 'delete temp objects' option to 'true' in the KMs) and re-run the interface. Hope this helps.
    Thanks,
    Parasuram.
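    The sketch mentioned above, for a delimited source file (the path and delimiter are assumptions; adapt it for fixed-width files):
    # Report the longest value seen in each column of the source file
    Import-Csv -Path C:\data\source.csv -Delimiter ',' |
        ForEach-Object { $_.PSObject.Properties } |
        Group-Object Name |
        ForEach-Object {
            [pscustomobject]@{
                Column    = $_.Name
                MaxLength = ($_.Group | ForEach-Object { "$($_.Value)".Length } | Measure-Object -Maximum).Maximum
            }
        }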

  • With large attachments, TB seems to be "loading message" over and over.

    I'm using TB as POP with a Gmail account. I used to use Outlook. When I receive large (20MB) emails with attachments such as .jpg files or videos, it APPEARS that TB keeps going out to fetch the email from the Internet instead of having it locally on the PC as Outlook used to do. If I have several emails of this type in my inbox, clicking on one or the other shows "loading" at the bottom of the screen. Hasn't it already been "loaded" once and stored locally? Also, when "saving attachment" or "save all" it appears that it is "downloading" the file. Again, isn't this already "downloaded" in its entirety?
    My laptop is brand new, with Windows 8.1, low end, not a terribly fast processor. Even deleting the email seems to show a "progress bar" as if it is going out to the Internet.
    The reason for my concern is that I have a 3GB/mo. account through AT&T and it just seems like it's loading this data each time rather than just once, as POP3 is supposed to do.
    Is it all an illusion?
    Thanks.

    Now I see. I'll paste it here:
    Application Basics
    Name: Thunderbird
    Version: 24.5.0
    User Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0
    Profile Folder: Show Folder
    (Local drive)
    Application Build ID: 20140424091057
    Enabled Plugins: about:plugins
    Build Configuration: about:buildconfig
    Crash Reports: about:crashes
    Memory Use: about:memory
    Mail and News Accounts
    account1:
    INCOMING: account1, , (pop3) pop.googlemail.com:995, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    account2:
    INCOMING: account2, , (none) Local Folders, plain, passwordCleartext
    account3:
    INCOMING: account3, , (pop3) pop.googlemail.com:995, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    Extensions
    Lightning, 2.6.5, true, {e2fda1a4-762b-4020-b5ad-a41df1933103}
    Important Modified Preferences
    Name: Value
    browser.cache.disk.capacity: 358400
    browser.cache.disk.smart_size.first_run: false
    browser.cache.disk.smart_size.use_old_max: false
    browser.cache.disk.smart_size_cached_value: 358400
    extensions.lastAppVersion: 24.5.0
    font.internaluseonly.changed: true
    font.name.monospace.el: Consolas
    font.name.monospace.tr: Consolas
    font.name.monospace.x-baltic: Consolas
    font.name.monospace.x-central-euro: Consolas
    font.name.monospace.x-cyrillic: Consolas
    font.name.monospace.x-unicode: Consolas
    font.name.monospace.x-western: Consolas
    font.name.sans-serif.el: Calibri
    font.name.sans-serif.tr: Calibri
    font.name.sans-serif.x-baltic: Calibri
    font.name.sans-serif.x-central-euro: Calibri
    font.name.sans-serif.x-cyrillic: Calibri
    font.name.sans-serif.x-unicode: Calibri
    font.name.sans-serif.x-western: Calibri
    font.name.serif.el: Cambria
    font.name.serif.tr: Cambria
    font.name.serif.x-baltic: Cambria
    font.name.serif.x-central-euro: Cambria
    font.name.serif.x-cyrillic: Cambria
    font.name.serif.x-unicode: Cambria
    font.name.serif.x-western: Cambria
    font.size.fixed.el: 14
    font.size.fixed.tr: 14
    font.size.fixed.x-baltic: 14
    font.size.fixed.x-central-euro: 14
    font.size.fixed.x-cyrillic: 14
    font.size.fixed.x-unicode: 14
    font.size.fixed.x-western: 14
    font.size.variable.el: 17
    font.size.variable.tr: 17
    font.size.variable.x-baltic: 17
    font.size.variable.x-central-euro: 17
    font.size.variable.x-cyrillic: 17
    font.size.variable.x-unicode: 17
    font.size.variable.x-western: 17
    mail.openMessageBehavior.version: 1
    mail.winsearch.firstRunDone: true
    mailnews.database.global.datastore.id: 61b3273d-7b24-4ff8-bc33-2f1299efbf8
    mailnews.database.global.indexer.enabled: false
    network.cookie.cookieBehavior: 2
    network.cookie.prefsMigrated: true
    places.database.lastMaintenance: 1411654561
    places.history.expiration.transient_current_max_pages: 92721
    plugin.importedState: true
    privacy.donottrackheader.enabled: true
    Graphics
    Adapter Description: AMD Radeon HD 8210
    Vendor ID: 0x1002
    Device ID: 0x9834
    Adapter RAM: 512
    Adapter Drivers: aticfx64 aticfx64 aticfx64 aticfx32 aticfx32 aticfx32 atiumd64 atidxx64 atidxx64 atiumdag atidxx32 atidxx32 atiumdva atiumd6a atitmm64
    Driver Version: 13.152.1.3000
    Driver Date: 9-25-2013
    Direct2D Enabled: false
    DirectWrite Enabled: false (6.3.9600.17111)
    ClearType Parameters: ClearType parameters not found
    WebGL Renderer: false
    GPU Accelerated Windows: 0
    AzureCanvasBackend: skia
    AzureFallbackCanvasBackend: cairo
    AzureContentBackend: none
    JavaScript
    Incremental GC: 1
    Accessibility
    Activated: 0
    Prevent Accessibility: 0
    Library Versions (expected minimum version / version in use)
    NSPR: 4.10.2 / 4.10.2
    NSS: 3.15.4 Basic ECC / 3.15.4 Basic ECC
    NSS Util: 3.15.4 / 3.15.4
    NSS SSL: 3.15.4 Basic ECC / 3.15.4 Basic ECC
    NSS S/MIME: 3.15.4 Basic ECC / 3.15.4 Basic ECC

  • Java.sql.BatchUpdateException: ORA-12899: value too large for column...

    Hi All,
    I am using SOA 11g (11.1.1.3). I am trying to insert data into a table from a file. I have encountered the following error.
    Exception occured when binding was invoked.
    Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'insert' failed due to: DBWriteInteractionSpec Execute Failed Exception.
    *insert failed. Descriptor name: [UploadStgTbl.XXXXStgTbl].*
    Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "XXXX"."XXXX_STG_TBL"."XXXXXX_XXXXX_TYPE" (actual: 20, maximum: 15)
    *The invoked JCA adapter raised a resource exception.*
    *Please examine the above error message carefully to determine a resolution.*
    The data type of the column that errored out is VARCHAR2(25). I found a related issue in Metalink: java.sql.BatchUpdateException (ORA-12899) Reported When DB Adapter Reads a Row From a Table it is Polling For Added Rows [ID 1113215.1].
    But the solution does not seem applicable in my case...
    Has anyone encountered the same issue? Is this a bug? If it is a bug, do we have a patch for it?
    Please help me out...
    Thank you all...
    Edited by: 806364 on Dec 18, 2010 12:01 PM

    It didn't work.
    After I changed the length of that column in the source datastore (from 15 to 16), ODI created temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
    I'm wondering why I have to extend the length of the source datastore in the source model if there are no values in the source table with a length greater than 15...
    Any other idea? Thanks!

  • Error: value too large for column ?!?

    I use LKM SQL to Oracle and IKM Oracle Incremental Update. When I execute the interface I get an error message:
    12899 : 72000 : java.sql.BatchUpdateException: ORA-12899: value too large for column "SAMPLE"."C$_0EMP"."C4_EMP_NM" (actual: 17, maximum: 15)
    I checked the source and the maximum length is 15, and I don't understand where ODI can find 17 characters. Both columns (source and target) are varchar2(15).
    Then I replaced the mapping with Substr(EMP_NM,1,15) to make sure ODI would get only 15 characters in the staging area but... it didn't work. I even tried Substr(EMP_NM,1,10) but no luck.
    Why does this ODI error occur? It doesn't make any sense...

    It didn't work.
    After I changed the length of that column in the source datastore (from 15 to 16), ODI created temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
    I'm wondering why I have to extend the length of the source datastore in the source model if there are no values in the source table with a length greater than 15...
    Any other idea? Thanks!

  • Datastore for CLOB

    What datastore should be used for a context index on a CLOB column? I used Java to load a large text file into Oracle 8i in a CLOB column, then created a context index on it. But the searching does not work. Please help!

    The default datastore should be fine if you store the text in the database.
    Something like this:
    1. Create the table:
    create table quick_clob (
      quick_id number primary key,
      filename varchar2(2000),
      text clob
    );
    2. Load the text with SQL*Loader:
    sqlldr userid=test/test control=load.ctl log=load.log
    (or load the text using any other mechanism).
    3. Create the context index:
    create index text_index on quick_clob(text)
      indextype is ctxsys.context;
    You should then be able to search, e.g. select quick_id from quick_clob where contains(text, 'oracle') > 0;

  • File_To_RT data truncation ODI error ORA-12899: value too large for column

    Hi,
    Could you please provide me some idea so that I can truncate the source data grater than max length before inserting into target table.
    Prtoblem details:-
    For my scenario read data from source .txt file and insert the data into target table.suppose source file data length exceeds max col length of the target table.Then How will I truncate the data so that data migration will be successful and also can avoid the ODI error " ORA-12899: value too large for column".
    Thanks
    Anindya

    Bhabani wrote:
    In which step are you getting this error? If it's the loading step then try increasing the length for that column in the datastore and use substr in the mapping expression.
    Hi Bhabani,
    You are right, it is the "Loading SrcSet0 Load data" step. I have increased the column length in the target table datastore
    and then applied the substring function, but the result is the same.
    If you meant that I should increase the length in the source file datastore, then please tell me which length: physical length or logical length?
    Thanks
    Anindya

  • Move VM from one datastore to another

    I have connected an iSCSI target to an ESXi server and now I want to move some VMs from the local datastore to the iSCSI one. What is the correct way of doing this?

    Try this:
    $vmne = "vm name (41330147-837a-455e-a607-453e5cbdeb23)"   # VM to exclude, e.g. a very large or heavy-I/O machine
    $lt_size = "200"                 # only move VMs smaller than this size in GB
    $cds = "vCloud_XIV_Store5_L3"    # Storage vMotion source datastore
    $tds = "ds_vcloud1_pvdc1_lun1"   # Storage vMotion destination datastore
    $vms = Get-Vm -Datastore $cds | Where {$_.Name -ne $vmne } | Where { $_.UsedSpaceGB -lt $lt_size }
    ($vms).count
    $res=Get-ResourcePool -VM $vms
    $res | select name,MemExpandableReservation,CpuExpandableReservation | ft -AutoSize
    $res | Set-ResourcePool -CpuExpandableReservation:$true -MemExpandableReservation:$true
    $vms | Move-VM -Datastore $tds -RunAsync
    Start-Sleep 10
    $res | Set-ResourcePool -CpuExpandableReservation:$false -MemExpandableReservation:$false
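    For just a handful of VMs you can also move them one at a time; a minimal sketch (the VM and datastore names are placeholders):
    # Storage vMotion a single VM to the iSCSI datastore (power it off first if Storage vMotion is not licensed)
    Get-VM -Name "MyVM" | Move-VM -Datastore (Get-Datastore "iSCSI-DS01")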
