Unable to delete VHD due to lease on the blob.

I'm receiving this message when attempting to delete a VHD in the remaining container of a storage account. I want to delete the storage account, as I'm no longer using the storage.
"There is currently a lease on the blob and no lease ID was specified in the request. RequestId:115de659-0001-00a1-22aa-7f0856000000 Time:2015-04-25T22:55:20.4852095Z"
I'm unable to find a way to complete the delete, even though all VMs have been deleted. The lease status is Locked, but by what?

Answered my own question!
I had 2 storage accounts, each with a lease-locked VHD. When I went to add a new VM using the Images option, I found the 2 VHDs associated there with images. I was able to delete the VHDs there and then delete the storage accounts!

Similar Messages

  • Error deleting VHD: There is currently a lease on the blob and no lease ID was specified in the request

    When attempting to delete a VHD's blob you may receive the following error:
    There is currently a lease on the blob and no lease ID was specified in the request
    While these errors are expected if a VHD is still registered as a disk or image in the portal, we have identified an issue where a lease remains even if the blob is not registered as a disk or image in the portal.
    If you receive one of these errors, first make sure the VHD is not in use:
    In the Windows Azure management portal, if the disk shows up under Virtual Machines, Disks, and the Attached To column is not blank, you should first remove the VM shown in the Attached To column by going to VM Instances, selecting the VM, then clicking Delete.
    If Attached To is blank, or the VM in the Attached To column was already removed, try removing the disk by highlighting it under Disks and clicking Delete Disk (this will not physically delete the VHD from blob storage; it only removes the disk object in the portal). If you have multiple pages of disks, it can be easier to search for a specific disk by clicking the magnifying glass icon at the top right.
    If Delete Disk is grayed out, or the disk is not listed under Disks but you still cannot reuse it or delete it, review the options below.
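    The same cleanup can also be done from Azure PowerShell instead of the portal. A rough sketch follows; the cloud service, VM, and disk names are placeholders, not values from this thread:
    # Remove the VM that still has the disk attached (service and VM names are placeholders).
    Remove-AzureVM -ServiceName "mycloudservice" -Name "testvm1"
    # Then remove the disk object; without -DeleteVHD the underlying VHD blob is kept in storage.
    Remove-AzureDisk -DiskName "testvm1-disk"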
    Breaking the lease
    You can use the Lease Blob API to break the lease in this scenario, which is also available in the Windows Azure PowerShell assembly Microsoft.WindowsAzure.StorageClient.dll via the LeaseAction enumeration.
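    If you prefer not to run the full BreakLease.ps1 script below, a shorter sketch using the newer Azure storage cmdlets can break the lease as well. The container and blob names here are examples only ('clstorage' is taken from the sample output further down), and it assumes the storage cmdlets are installed and you have the account key at hand:
    $ctx  = New-AzureStorageContext -StorageAccountName "clstorage" -StorageAccountKey "<primary key>"
    $blob = Get-AzureStorageBlob -Container "vhds" -Blob "testvm1-testvm1-2012-06-26.vhd" -Context $ctx
    # BreakLease lives on the underlying storage client object; the $null arguments accept the defaults.
    $blob.ICloudBlob.BreakLease($null, $null, $null, $null)
    Only do this after confirming the blob is no longer registered as a disk or image, which is exactly what the script below checks for you.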
    To use the BreakLease.ps1 script to break the lease:
    1. Download Azure PowerShell by clicking Install under Windows here: http://www.windowsazure.com/en-us/manage/downloads/
    2. Click Start, search for Windows Azure PowerShell, and open that console.
    3. Run Get-AzurePublishSettingsFile to launch a browser window to https://windows.azure.com/download/publishprofile.aspx and download the management certificate in a .publishsettings file, so you can manage your subscription with PowerShell.
    Get-AzurePublishSettingsFile
    4. Run Import-AzurePublishSettingsFile to import the certificate and subscription information. If you saved the file to your Downloads folder you can run the command as-is; otherwise replace the path with the full path to the .publishsettings file.
    Import-AzurePublishSettingsFile $env:userprofile\downloads\*.publishsettings
    5. Copy the script below into a text editor such as Notepad and save it as BreakLease.ps1.
    6. Run Set-ExecutionPolicy to allow script execution:
    Set-ExecutionPolicy Unrestricted
    7. Run BreakLease.ps1 with the URL to the VHD in order to break the lease. The script obtains the necessary storage account information, checks that the blob is not currently registered as a disk or as an image, then proceeds to break the current lease (if any).
    Sample output:
    BreakLease.ps1 -Uri 'http://clstorage.blob.core.windows.net/vhds/testvm1-testvm1-2012-06-26.vhd'
    Processing http://clstorage.blob.core.windows.net/vhds/testvm1-testvm1-2012-06-26.vhd
    Reading storage account information...
    Confirmed - storage account 'clstorage'.
    Checking whether the blob is currently registered as a disk or image...
    Confirmed - the blob is not in use by the Windows Azure platform.
    Inspecting the blob's lease status...
    Current lease status: Locked
    Unlocking the blob...
    Current lease status: Unlocked
    Success - the blob is unlocked.
    BreakLease.ps1
    Param([string]$Uri = $(Read-Host -prompt "Please specify a blob URL"))
    $ProgressPreference = 'SilentlyContinue'
    echo "Processing $Uri"
    echo "Reading storage account information..."
    # Find the storage account in the current subscription that owns the supplied blob URL.
    $acct = Get-AzureStorageAccount | ? { (new-object System.Uri($_.Endpoints[0])).Host -eq (new-object System.Uri($Uri)).Host }
    if(-not $acct) {
        write-host "The supplied URL does not appear to correspond to a storage account associated with the current subscription." -foregroundcolor "red"
        break
    }
    $acctKey = Get-AzureStorageKey ($acct.StorageAccountName)
    $creds = "DefaultEndpointsProtocol=http;AccountName=$($acctKey.StorageAccountName);AccountKey=$($acctKey.Primary)"
    $acctobj = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($creds)
    $uri = $acctobj.Credentials.TransformUri($uri)
    echo "Confirmed - storage account '$($acct.StorageAccountName)'."
    $blobclient = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient($acctobj.BlobEndpoint, $acctobj.Credentials)
    $blobclient.Timeout = (New-TimeSpan -Minutes 1)
    $blob = New-Object Microsoft.WindowsAzure.StorageClient.CloudPageBlob($uri, $blobclient)
    echo "Checking whether the blob is currently registered as a disk or image..."
    $disk = Get-AzureDisk | ? { (new-object System.Uri($_.MediaLink)) -eq $blob.Uri }
    if($disk) {
        write-host "The blob is still registered as a disk with name '$($disk.DiskName)'. Please delete the disk first." -foregroundcolor "red"
        break
    }
    $image = Get-AzureVMImage | ? { $_.MediaLink -eq $blob.Uri.AbsoluteUri }
    if($image) {
        write-host "The blob is still registered as an OS image with name '$($image.ImageName)'. Please delete the OS image first." -foregroundcolor "red"
        break
    }
    echo "Confirmed - the blob is not in use by the Windows Azure platform."
    echo "Inspecting the blob's lease status..."
    try {
        $blob.FetchAttributes()
    } catch [System.Management.Automation.MethodInvocationException] {
        write-host $_.Exception.InnerException.Message -foregroundcolor "red"
        break
    }
    echo "Current lease status: $($blob.Properties.LeaseStatus)"
    if($blob.Properties.LeaseStatus -ne [Microsoft.WindowsAzure.StorageClient.LeaseStatus]::Locked) {
        write-host "Success - the blob is unlocked." -foregroundcolor "green"
        break
    }
    echo "Unlocking the blob..."
    # Issue a raw Lease request with the Break action against the blob.
    $request = [Microsoft.WindowsAzure.StorageClient.Protocol.BlobRequest]::Lease($uri, 0, [Microsoft.WindowsAzure.StorageClient.Protocol.LeaseAction]::Break, $null)
    $request.Timeout = $blobclient.Timeout.TotalMilliseconds
    $acctobj.Credentials.SignRequest($request)
    try {
        $response = $request.GetResponse()
        $response.Close()
    } catch {
        write-host "The blob could not be unlocked:" -foregroundcolor "red"
        write-host $_.Exception.InnerException.Message -foregroundcolor "red"
        break
    }
    $blob.FetchAttributes()
    echo "Current lease status: $($blob.Properties.LeaseStatus)"
    write-host "Success - the blob is unlocked." -foregroundcolor "green"
    Alternate method: make a copy of the VHD in order to reuse a VHD with a stuck lease
    If you have removed the VM and the disk object but the lease remains and you need to reuse that VHD, you can make a copy of the VHD and use the copy for a new VM:
    1. Download CloudXplorer. This will work with other Windows Azure storage explorers, but for the sake of brevity these steps reference CloudXplorer.
    2. In the Windows Azure management portal, select Storage on the left, select the storage account where the VHD you want to reuse resides, select Manage Keys at the bottom, and copy the Primary Access Key.
    3. In CloudXplorer, go to File, Accounts, New, Windows Azure Account and enter the storage account name in the Name field and the primary access key in the Secret Key field. Leave the rest on the default settings.
    4. Expand the storage account in the left pane in CloudXplorer and select the vhds container (or, if the VHD in question was uploaded to a different location, browse to that location instead).
    5. Right-click the VHD you want to reuse (which currently has a stuck lease), select Rename, and give it a different name. This will throw the error could not rename…there is currently a lease on the blob…, but click Yes to continue, then View, Refresh (F5) to refresh and you will see it made a copy of the VHD since it could not rename the original.
    6. In the Azure management portal, select Virtual Machines, Disks, then Create Disk at the bottom.
    7. Specify a name for the disk, click the folder icon under VHD URL to browse to the copy of the VHD you just created, check the box for This VHD contains an operating system, use the drop-down to specify whether it is Windows or Linux, then click the arrow at the bottom right to create the disk.
    8. After the portal shows Successfully created disk <diskname>, select New at the bottom left of the portal, then Virtual Machine, From Gallery, My Disks, select the disk you just created, and proceed through the rest of the wizard to create the VM.
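    The copy can also be made from PowerShell rather than a storage explorer, since copying a blob does not require its lease. A rough sketch with placeholder account, container, and blob names:
    # Assumes the Azure storage cmdlets; source and destination names below are placeholders.
    $ctx = New-AzureStorageContext -StorageAccountName "clstorage" -StorageAccountKey "<primary key>"
    Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "stuck.vhd" -DestContainer "vhds" -DestBlob "stuck-copy.vhd" -Context $ctx
    # Wait for the copy to complete before creating a disk from the new blob.
    Get-AzureStorageBlobCopyState -Container "vhds" -Blob "stuck-copy.vhd" -Context $ctx -WaitForComplete
    You would then register the copy as a disk in the portal exactly as described in the steps above.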
    Thanks,
    Craig

    Just to add an update to this, it looks like the namespaces have changed with the latest version of the SDK. I have updated the script to use the new namespaces, namely: Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob and Microsoft.WindowsAzure.Storage.CloudStorageAccount.
    Param([string]$Uri = $(Read-Host -prompt "Please specify a blob URL"))
    $ProgressPreference = 'SilentlyContinue'
    echo "Processing $Uri"
    echo "Reading storage account information..."
    $acct = Get-AzureStorageAccount | ? { (new-object System.Uri($_.Endpoints[0])).Host -eq (new-object System.Uri($Uri)).Host }
    if(-not $acct) {
        write-host "The supplied URL does not appear to correspond to a storage account associated with the current subscription." -foregroundcolor "red"
        break
    }
    $acctKey = Get-AzureStorageKey ($acct.StorageAccountName)
    $creds = "DefaultEndpointsProtocol=http;AccountName=$($acctKey.StorageAccountName);AccountKey=$($acctKey.Primary)"
    $acctobj = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse($creds)
    $uri = $acctobj.Credentials.TransformUri($uri)
    echo "Confirmed - storage account '$($acct.StorageAccountName)'."
    # The updated client library expects a StorageCredentials object rather than the raw connection string.
    $blob = New-Object Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob($uri, $acctobj.Credentials)
    echo "Checking whether the blob is currently registered as a disk or image..."
    $disk = Get-AzureDisk | ? { (new-object System.Uri($_.MediaLink)) -eq $blob.Uri }
    if($disk) {
        write-host "The blob is still registered as a disk with name '$($disk.DiskName)'. Please delete the disk first." -foregroundcolor "red"
        break
    }
    $image = Get-AzureVMImage | ? { $_.MediaLink -eq $blob.Uri.AbsoluteUri }
    if($image) {
        write-host "The blob is still registered as an OS image with name '$($image.ImageName)'. Please delete the OS image first." -foregroundcolor "red"
        break
    }
    echo "Confirmed - the blob is not in use by the Windows Azure platform."
    echo "Inspecting the blob's lease status..."
    try {
        $blob.FetchAttributes()
    } catch [System.Management.Automation.MethodInvocationException] {
        write-host $_.Exception.InnerException.Message -foregroundcolor "red"
        break
    }
    echo "Current lease status: $($blob.Properties.LeaseStatus)"
    if($blob.Properties.LeaseStatus -ne [Microsoft.WindowsAzure.Storage.Blob.LeaseStatus]::Locked) {
        write-host "Success - the blob is unlocked." -foregroundcolor "green"
        break
    }
    echo "Unlocking the blob..."
    # The updated library exposes lease operations directly on the blob object, so the raw
    # protocol-level Lease request from the original script is no longer needed here.
    try {
        $blob.BreakLease($null, $null, $null, $null) | Out-Null
    } catch {
        write-host "The blob could not be unlocked:" -foregroundcolor "red"
        write-host $_.Exception.InnerException.Message -foregroundcolor "red"
        break
    }
    $blob.FetchAttributes()
    echo "Current lease status: $($blob.Properties.LeaseStatus)"
    write-host "Success - the blob is unlocked." -foregroundcolor "green"

  • Error when trying to remove blob: There is currently a lease on the blob and no lease ID was specified in the request.

    I'm getting this error when I try to remove a blob that was used for a VM in the past. The VM was already removed and there's nothing using that blob, nor even the storage account where the blob is stored.
    There is currently a lease on the blob and no lease ID was specified in the request. RequestId:0a441667-0001-0044-7861-bc17ef000000 Time:2014-12-18T15:53:04.5315752Z
    I'm trying to delete the blob from the current web interface, and I haven't found a way to specify a lease ID or check what lease it could have.
    Thanks in advance

    Hi  jruiz,
    When you delete a Virtual Machine in the Management Portal, the "Disk" resource used to mount the Virtual Machine's VHD is kept. The "Disk" resource is responsible for mounting the blob for the VHD file so it can be attached to virtual machines as an OS Disk or Data Disk, and it will continue to hold a lease on the blob for as long as it exists.
    So you need to delete the "Disk" resource to break the lease on the blob.
    Please refer to the following link for how to delete the disk resource to break the lease on the blob:
    http://blogs.msdn.com/b/mast/archive/2013/02/05/iaas-unable-to-delete-vhd-there-is-currently-a-lease-on-the-blob.aspx
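    If you prefer PowerShell over the portal for this, a small sketch (the disk name is a placeholder):
    # List disk objects that are no longer attached to a VM, to find the one holding the lease.
    Get-AzureDisk | Where-Object { $_.AttachedTo -eq $null } | Select-Object DiskName, MediaLink
    # Remove the disk object; this releases the lease but keeps the VHD blob (add -DeleteVHD to delete it too).
    Remove-AzureDisk -DiskName "mydisk"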
    Best Regards,
    Kevin Shen.

  • How To delete VHD in Windows 8

    Hi.
    I created the VHD on my C drive. Now I want to recover my 400 GB; how can I delete the VHD? When I try to delete it, it displays a message that it is in use. Please guide me.

    Have you detached the VHD?
    Right click on the start button, "Disk Management", then right click on the VHD drive and "Detach VHD"
    If that isn't it, it could be your antivirus or backup software locking it, so you would have to temporarily disable them to delete the VHD.
    Bob Comer

  • The Managed Servers are going down frequently due to lease renewal.

    We have 3 SOA Managed Servers in our production environment, each running on a different machine.
    The servers are going down frequently due to a lease renewal issue.
    Please see the errors below. SOA_MS3 was the Cluster Leader.
    From SOA_MS1 logs we see -
    <Jun 15, 2011 12:11:24 AM MDT> <Warning> <Cluster> <WL-000147> <Server "SOA_MS1" failed to renew lease in the leasing basis hosted by SOA_MS3.>
    <Jun 15, 2011 12:11:24 AM MDT> <Error> <Cluster> <WL-000150> <Server failed to get a connection to the leasing basis hosted by SOA_MS3 in the past 30 seconds for lease renewal. Server will shut itself down.>
    <Jun 15, 2011 12:11:24 AM MDT> <Critical> <Health> <WL-310006> <Critical Subsystem ServerMigration has failed. Setting server state to FAILED.
    Reason: ServerSOA_MS1 failed to renew lease in the leasing basis hosted by SOA_MS3>
    <Jun 15, 2011 12:11:24 AM MDT> <Critical> <WebLogicServer> <WL-000385> <Server health failed. Reason: health of critical service 'ServerMigration' failed>
    <Jun 15, 2011 12:11:24 AM MDT> <Notice> <WebLogicServer> <WL-000365> <Server state changed to FAILED>
    <Jun 15, 2011 12:11:24 AM MDT> <Error> <com.bea.weblogic.kernel> <BEA-000000> <cannot load libary 'stackdump': java.lang.UnsatisfiedLinkError: no stackdump in java.library.path
    >
    <Jun 15, 2011 12:11:24 AM MDT> <Error> <WebLogicServer> <WL-000383> <A critical service failed. The server will shut itself down>
    <Jun 15, 2011 12:11:24 AM MDT> <Notice> <WebLogicServer> <WL-000365> <Server state changed to FORCE_SHUTTING_DOWN>
    <Jun 15, 2011 12:11:24 AM MDT> <Notice> <Cluster> <WL-000163> <Stopping "async" replication service>
    From SOA_MS2 logs we see -
    15, 2011 12:11:16 AM MDT> <Warning> <Cluster> <WL-000147> <Server "SOA_MS2" failed to renew lease in the leasing basis hosted by SOA_MS3.>
    <Jun 15, 2011 12:11:16 AM MDT> <Error> <Cluster> <WL-000150> <Server failed to get a connection to the leasing basis hosted by SOA_MS3 in the past 30 seconds for lease renewal. Server will shut itself down.>
    <Jun 15, 2011 12:11:16 AM MDT> <Critical> <Health> <WL-310006> <Critical Subsystem ServerMigration has failed. Setting server state to FAILED.
    Reason: ServerSOA_MS2 failed to renew lease in the leasing basis hosted by SOA_MS3>
    <Jun 15, 2011 12:11:16 AM MDT> <Critical> <WebLogicServer> <WL-000385> <Server health failed. Reason: health of critical service 'ServerMigration' failed>
    <Jun 15, 2011 12:11:16 AM MDT> <Notice> <WebLogicServer> <WL-000365> <Server state changed to FAILED>
    <Jun 15, 2011 12:11:16 AM MDT> <Error> <com.bea.weblogic.kernel> <BEA-000000> <cannot load libary 'stackdump': java.lang.UnsatisfiedLinkError: no stackdump in java.library.path
    >
    <Jun 15, 2011 12:11:16 AM MDT> <Error> <WebLogicServer> <WL-000383> <A critical service failed. The server will shut itself down>
    <Jun 15, 2011 12:11:16 AM MDT> <Notice> <WebLogicServer> <WL-000365> <Server state changed to FORCE_SHUTTING_DOWN>
    Also, in the SOA_MS3 logs we see the following error -
    ####<Jun 15, 2011 11:43:18 PM MDT> <Error> <Cluster> <soaprdi3> <SOA_MS3> <[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1308202998662> <WL-000168> <Failed to restart/migrate server "SOA_MS3" because of Failed to start the migratable server on one of the candidate machines
    Failed to start the migratable server on one of the candidate machines
    at weblogic.cluster.singleton.MigratableServerState.serverUnresponsive(MigratableServerState.java:95)
    at weblogic.cluster.singleton.MigratableServersMonitorImpl.timerExpired(MigratableServersMonitorImpl.java:164)
    at weblogic.timers.internal.TimerImpl.run(TimerImpl.java:273)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Thanks,

    Support has been working on this for many days and they do not have any clue. We escalated, but they still don't have much of a clue either.
    So I wanted to put it to industry experts on SOA.
    Any idea what causes this lease renewal issue, and what the potential resolution is?
    Thanks

  • I had purchased and downloaded a TV series to my iPad. Then I deleted it due to space constraints. Now I want to download it permanently to my MacBook but iTunes does not show this purchase. It worked, however, for another series. What can I do?

    I had purchased and downloaded a TV series to my iPad. Then I deleted it due to space constraints. Now I want to download it permanently to my MacBook but iTunes does not show this purchase. It worked, however, for another series. What can I do?
    Thanks for any help!

    You may need to put the iPad into recovery mode : http://support.apple.com/kb/ht1808 - you should then be able to reset the iPad.
    If you haven't synced to a computer, then (from here):
    If you restore on a different computer that was never synced with the device, you will be able to unlock the device for use and remove the passcode, but your data will not be present.

  • Viewing Dynamic Leases Under The DNS/DHCP Management Console

    My DHCP server is on OES2 SP3 / SLES 10 SP3 and I am trying to set it up for viewing dynamic leases in the DNS/DHCP Management Console.
    I am following TID 7006450 and right away i get stuck.
    From the TID
    1. Launch the DNS/DHCP management console and login to the server
    2. Click on the 'DHCP (OES Linux) tab toward the top of the console
    3. Find the DHCP Server object at the bottom of the console and select it
    4. Toward the top-right hand side of the console select the GENERAL tab and find the section called DHCP SERVER
    ---- There is no section called DHCP SERVER only Services
    5. Click the ADD button and enter the IP address of the DHCP server
    NOTE: If doing this step on OES 2 SP 2 the buttons may be greyed out. This is due to the schema not being extended. This option is officially supported on OES 2 SP 3
    ---- There is no ADD button
    6. Click on the SAVE button toward the top, left-hand side of the console (looks like a floppy disk)
    ---- There is no SAVE button
    I have downloaded the latest DNS/DHCP Management Console version (OES2 SP2, October 2009).
    Is there a newer version to download?

    Any error on the failed start that could help troubleshoot it? A coworker
    confirmed the TID worked for him so that's a little comforting, though of
    course no idea how your two systems differ. A Service Request (SR) with
    Novell is always an option to work on this but it'd be useful to know what
    your system showed when it wouldn't start dhcpd.
    Good luck.
    On 01/31/2011 01:06 PM, RPummel wrote:
    >
    > That is great advice about making sure to get the software from your
    > OES2SP3 box.
    >
    > Being able to view dynamic leases has been a MAJOR feature I have been
    > looking for. We have a bunch of NetWare servers still and are just
    > starting to install OES2 Linux boxes, and I have had no visibility via
    > the console into the leases hosted by our new Linux boxes.
    >
    > So, I just installed the console and followed the instructions in TID
    > 7006450. However, something must be wrong. Not only does it still refuse
    > to show me the dynamically assigned addresses, but if I stop the dhcpd
    > service on the server and attempt to start it again, it FAILS! After a
    > short panic, I deleted the TSIG Key object which the TID directed me to
    > create. This allowed me to successfully start the dhcpd service. (Whew!)
    > But, of course, I still can't see the dynamic addresses. I went through
    > the directions again and got the same results.
    >
    > In case you are wondering, I made sure to set the secret for the TSIG
    > Key to something divisible by 4, as instructed. I am at a loss. Ideas?
    >
    > Rick P
    >
    >

  • I have the latest MacBook Air 13" and a Seagate 500MB external hard disk. I can't copy or cut any files on the hard disk, and I can't even delete any files. I have the settings as Read and Write in the Get Info tab. Please help

    I have the latest MacBook Air 13" and a Seagate 500MB external hard disk. I can't copy or cut any files on the hard disk, and I can't even delete any files. I have the settings as Read and Write in the Get Info tab. Please help. Also note that my hard drive was formatted on a Windows 7 laptop.

    That's the problem: it's in MS-DOS (FAT) or NTFS format for Windows.
    Options:
    1. Offload all the data on the HD onto your PC, THEN format the HD as ExFAT for use on BOTH PC and Mac for read/write, then reload all (or as much as you need of) that data back onto the HD.
    2. Get another HD and format it as Mac OS X Extended (Journaled).
    FAT32 (File Allocation Table)
    Read/Write FAT32 from both native Windows and native Mac OS X.
    Maximum file size: 4GB.
    Maximum volume size: 2TB
    You can use this format if you share the drive between Mac OS X and Windows computers and have no files larger than 4GB.
    NTFS (Windows NT File System)
    Read/Write NTFS from native Windows.
    Read only NTFS from native Mac OS X
    To Read/Write/Format NTFS from Mac OS X, here are some alternatives:
    For Mac OS X 10.4 or later (32 or 64-bit), install Paragon (approx $20) (Best Choice for Lion)
    Native NTFS support can be enabled in Snow Leopard and Lion, but is not advisable, due to instability.
    AirPort Extreme (802.11n) and Time Capsule do not support NTFS
    Maximum file size: 16 TB
    Maximum volume size: 256TB
    You can use this format if you routinely share a drive with multiple Windows systems.
    HFS+ (MAC FORMAT) (Hierarchical File System, a.k.a. Mac OS Extended (Journaled); don't use case-sensitive)
    Read/Write HFS+ from native Mac OS X
    Required for Time Machine or Carbon Copy Cloner or SuperDuper! backups of Mac internal hard drive.
    To Read HFS+ (but not Write) from Windows, Install HFSExplorer
    Maximum file size: 8EiB
    Maximum volume size: 8EiB
    You can use this format if you only use the drive with Mac OS X, or use it for backups of your Mac OS X internal drive, or if you only share it with one Windows PC (with MacDrive installed on the PC)
    ExFAT (FAT64) - can read/write from both PC and Mac
    Supported in Mac OS X only in 10.6.5 or later.
    Not all Windows versions support exFAT. 
    exFAT (Extended File Allocation Table)
    AirPort Extreme (802.11n) and Time Capsule do not support exFAT
    Maximum file size: 16 EiB
    Maximum volume size: 64 ZiB
    You can use this format if it is supported by all computers with which you intend to share the drive.  See "disadvantages" for details.

  • Deleted WLC from its folder under the Device work center of Cisco prime 1.2

    I kindly need  your help as regarding cisco prime infrastructure.
    I added the wireless LAN controller to the prime. I later had to troubleshoot the WLC because the reachability status showed UNREACHABLE.
    During my troubleshooting, I synced the WLC a couple of times and the collection status has been showing SYNCHING since then.
    I also tried deleting the WLC from its folder under Device Work Center; it was deleted, but it still shows up under the ALL folder.
    Would I have to wait for the synchronization of the WLC to stop before I can completely delete it and re-add it?
    Also, I noticed that after deploying ''Interface Health'' under Monitoring Configurations, the CPU and memory utilization did not show up for the devices Cisco Prime is managing.
    What have I missed?
    Kindly help.

    Prime Infrastructure won't support those legacy models.

  • My friend wanted to create a new iCloud account that was different from her iTunes and app one; when she deleted her iCloud account that was the same as her iTunes, it won't let her log into her iTunes one now

    My friend wanted to create a new iCloud account that was different from her iTunes and App Store one. When she deleted her iCloud account that was the same as her iTunes one, it wouldn't let her log into her iTunes account anymore. What can I do to fix it? She has paid for apps and songs, and if she makes another account everything will be lost.
    Need help please :) thanks

    Ah thanks Razmee however there is NO option to delete the iCloud account in settings!

  • Note: Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time

    Dear all,
    sorry for opening another thread on this.
    I think I have a performance issue with workflows attached to document sets in SharePoint. And I say “I think” because people keep telling me that this is the way it just is.
    The user creates a new document set, which triggers a workflow in which the user has to confirm/review/approve a series of tasks. The time it takes from clicking the OK button on those task form to the workflow status moving to the next step is about 4 seconds.
    And visiting that status page within those 4 seconds brings up the infamous “Note: Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time.” message.
    Hitting Refresh in the browser after those 4 seconds will make the new workflow status appear and the red text go away.
    Is that normal? Is that the performance that everyone else is seeing as well?
    I struggle to see why simply moving a workflow from one task to another should take that long on a machine that isn't doing anything else at the time.
    (1) I have a standalone (non-clustered) SharePoint box, 4 CPUs, 8 GB of memory, more than half of that available, acting as application server and WFE; only the database is on a different box.
    (2) The CPU only goes up to 18 or 19%, so CPU does not seem to be the bottleneck. Half the RAM is also still free.
    (3) The workflow is designed with Nintex, and has about 9 flexi and review tasks – the last 2 of them in a loop iterating over typically 3 or 4 items.
    (4) Looking at the logs, it looks like the processing in Nintex only takes about 1 second – I don't know where the other 3 seconds are going.
    (5) There is nothing obvious in the logs.
    (6) We've looked at all the "theoretical" improvements around throttling and batch sizes etc. – none of them appeared to make any difference. And the workflow is so small that it looks like my task gets executed straight away. The problem appears to be that the execution takes too long(?) and therefore has not finished by the time the page gets redrawn.
    (7) I am running perfmon and I can e.g. see one(!) workflow being loaded into memory – as expected, as I am the only user.
    (8) I am seeing a total of 3(?) SQL queries being executed(?). I get Bytes Sent/sec spiking at 25K, and Bytes Received at 18K. But is this good or bad or a bottleneck?
    Where do I take it from here?
    I have been told that “[…] most customers have no issue with this as they are used to the way SP operates and it can be slow at times.” Is it really that bad?
    If it is worth watching more performance counters then I’d need to know what to compare them to.
    Is there something else I am missing?
    Thanks
    Martin

    Hi,
    Before considering an additional hardware try to change following configurations for workflow:
    Increase Throttle Size
    Increase Batch Size
    Time Out
    Workflow Timer Interval
    AutoCleanUpDays
    Increase Throttle Size
    The Workflow throttle setting controls how many workflows can be processing at any one time on the entire server farm. Increasing the throttle raises the number of workflows that can be executing or be initiated at a time.
    Use below PowerShell command to get the current Throttle Size:
    Get-SPFarmConfig | Select WorkflowPostponeThreshold
    Use below PowerShell command to set new Throttle Size:
    Set-SPFarmConfig -WorkflowPostponeThreshold 100
    Increase Batch Size
    This is the size that determines the number of events processed for a single workflow instance. The default value is 100, but it can range from 1 to any number.
    Use below PowerShell command to get the current Batch Size:
    Get-SPFarmConfig | Select WorkflowBatchSize
    Use below PowerShell command to set new Batch Size:
    Set-SPFarmConfig -WorkflowBatchSize 200
    Time Out
    This decides the timeout of the workflow event. The default value is 5 and it can be any integer. The time is in minutes.
    Use below STSADM command to get the current Time Out value:
    stsadm -o getproperty -pn workflow-eventdelivery-timeout
    Use below STSADM command to set a new Time Out value:
    stsadm -o setproperty -pn workflow-eventdelivery-timeout -pv "15"
    Workflow Timer Interval
    This setting is applicable at Web Application level and not the farm level. The workflow timer interval specifies how often the workflow SPTimer job fires to process pending workflow tasks. This interval also represents the granularity of delay timers within
    your workflow. If a timer is set to delay for one minute, but the interval timer fires only every five minutes, the workflow delays for five minutes, not one minute.
    Use below STSADM command to get the current Workflow Timer Interval value:
    stsadm -o getproperty -pn job-workflow -url <Web Application Url>
    Use below STSADM command to set a new Workflow Timer Interval value:
    stsadm -o setproperty -pn job-workflow -pv "Every 10 minutes between 0 and 30" -url <Web Application Url>
    Here is the URL for reference:
    http://praveenkasireddy.wordpress.com/2013/06/14/workflow-due-to-heavy-load-the-latest-workflow-operation-has-been-queued-it-will-attempt-to-resume-at-a-later-time/
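    For completeness, the same settings can be read in one pass from the SharePoint Management Shell, and the workflow timer job itself can be inspected with the timer-job cmdlets. This is only a sketch; the job name "job-workflow" and the web application URL are assumptions to adjust for your farm:
    # Read the farm-level workflow settings together.
    Get-SPFarmConfig | Select-Object WorkflowPostponeThreshold, WorkflowBatchSize
    # Inspect and reschedule the workflow timer job for a web application (job name and URL assumed).
    $job = Get-SPTimerJob -Identity "job-workflow" -WebApplication "http://sharepoint"
    Set-SPTimerJob -Identity $job -Schedule "Every 5 minutes between 0 and 59"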

  • Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time.

    I have SharePoint 2010 Enterprise running SP1. Configuration is one SharePoint server in the farm and a SQL 2008 R2 database for the backend. Our user environment is 80 users with very little load on the SharePoint server. I have the workflow timer
    set to 1 minute.
    I have a SPD workflow that starts manually on a form library. Whenever I publish a new version of the workflow, the next time I start the workflow it takes the full minute to finish. If I click on the workflow status before it finishes, I see the message
    "Due to heavy load, the latest workflow operation has been queued. It will
    attempt to resume at a later time.". After the minute completes the workflow finishes.
    Here's the weird thing: the next time I start the workflow, it runs in a couple of seconds - almost instantly. I've tried up to 15 times after the initial publishing and everything seems to work fine on initiation.
    Well, that would be fine for me; however, I intermittently get this heavy load message during task processes that are running inside the workflow. It's probably less than 5% of the time. It's really frustrating, though, so I would appreciate some help. I've looked online and haven't found anything that describes my situation.
    Thank you in advance!

    Hi,
    Before considering an additional hardware try to change following configurations for workflow:
    Increase Throttle Size
    Increase Batch Size
    Time Out
    Workflow Timer Interval
    AutoCleanUpDays
    Increase Throttle Size
    The Workflow throttle setting controls how many workflows can be processing at any one time on the entire server farm. Increasing the throttle raises the number of workflows that can be executing or be initiated at a time.
    Use below PowerShell command to get the current Throttle Size:
    Get-SPFarmConfig | Select WorkflowPostponeThreshold
    Use below PowerShell command to set new Throttle Size:
    Set-SPFarmConfig -WorkflowPostponeThreshold 100
    Increase Batch Size
    This is the size that determines the number of events processed for a single workflow instance. The default value is 100, but it can range from 1 to any number.
    Use below PowerShell command to get the current Batch Size:
    Get-SPFarmConfig | Select WorkflowBatchSize
    Use below PowerShell command to set new Batch Size:
    Set-SPFarmConfig -WorkflowBatchSize 200
    Time Out
    This decides the timeout of the workflow event. The default value is 5 and it can be any integer. The time is in minutes.
    Use below STSADM command to get the current Time Out value:
    stsadm -o getproperty -pn workflow-eventdelivery-timeout
    Use below STSADM command to set a new Time Out value:
    stsadm -o setproperty -pn workflow-eventdelivery-timeout -pv "15"
    Workflow Timer Interval
    This setting is applicable at Web Application level and not the farm level. The workflow timer interval specifies how often the workflow SPTimer job fires to process pending workflow tasks. This interval also represents the granularity of delay timers within
    your workflow. If a timer is set to delay for one minute, but the interval timer fires only every five minutes, the workflow delays for five minutes, not one minute.
    Use below STSADM command to get the current Workflow Timer Interval value:
    stsadm -o getproperty -pn job-workflow -url <Web Application Url>
    Use below STSADM command to set a new Workflow Timer Interval value:
    stsadm -o setproperty -pn job-workflow -pv "Every 10 minutes between 0 and 30" -url <Web Application Url>
    Here is the URL for reference:
    http://praveenkasireddy.wordpress.com/2013/06/14/workflow-due-to-heavy-load-the-latest-workflow-operation-has-been-queued-it-will-attempt-to-resume-at-a-later-time/

  • HT1329 If the music that is on the iPod can no longer be accessed through iTunes because it was deleted, is there any way to recover the music on the iPod if it wasn't purchased?

    If the music that is on an iPod can no longer be accessed through iTunes because it was deleted, is there any way to recover the music on the iPod if it wasn't purchased?

    See this support article:
    http://support.apple.com/kb/HT1848
    You can also download at least some of your content (audiobooks being a notable exception) again from the iTunes Store:
    http://support.apple.com/kb/ht2519
    For additional instructions, particularly for content not purchased from the iTunes Store, check out this user tip from TuringTest:
    https://discussions.apple.com/docs/DOC-3991
    and this page on "How-to Geek":
    http://www.howtogeek.com/104298/sync-your-ios-device-with-a-new-computer-without-losing-data/
    Regards.
    Forum Tip: Since you're new here, you've probably not discovered the Search feature available on every Communities page, but next time, it might save you time (and everyone else from having to answer the same question multiple times) if you search a couple of ways for a topic, both in the relevant forums and in the Apple Knowledge Base, before you post a question.

  • I upgraded a MacBook Pro 2009 from 10.6.8 to Yosemite. Beforehand I made a backup copy of my iPhoto library onto an external hard drive. Let's call it BU. I also deleted half of my photos from the default library on my laptop, let's call it DE. I then in

    I upgraded a MacBook Pro 2009 from 10.6.8 to Yosemite 2 weeks ago. My iPhoto is now a nightmare. I went to our local Apple retailer but they could not help either. Perhaps someone can help. Before the OS upgrade I made a backup copy of my iPhoto library onto an external hard drive. Let's call it BU, with 50 GB. I also deleted half of my photos from the default library on my laptop, let's call it DE, now with about 20 GB, to gain space. I then upgraded to Yosemite.
    I have also had iPhoto Library Manager 3.8.6 installed for a long time (this may be the cause of the problem, as it was not upgraded and perhaps is thus not compatible with iPhoto 9.6, which I have now after the upgrade to Yosemite). It all worked fine before the upgrade, switching in iPhoto between the two libraries DE and BU.
    It also worked fine after the upgrade for a few days. At one stage iPhoto asked me to upgrade the iPhoto version, as it otherwise could not read the photos when I tried to load the BU library into iPhoto. I did click yes to upgrade iPhoto. As a result I got a brand new but totally empty library, no photos.
    BU was still on the external disc, however the "Master" file was empty. We fortunately discovered all photos were in another file called Old Master, also under BU. We made another copy of the Old Master file onto the external disc. Lucky we did, because shortly afterwards we could not open the BU file anymore and thus had no access to anything in there. Now I have 17000 photos in a file called Old Master on my external disc and another smaller library DE on my laptop. When I tried to import the photos from Old Master into a new iPhoto library it was very messy. All event dates were mixed up, and many photos were imported twice. I have no idea whether all my 17000 photos have been imported.
    Can anyone suggest the most time-efficient way forward? How do I best import my 17000 photos into a new library, ensuring I don't lose any in the process? I do not wish to sort through 17000 photos for a week or so. Shall I also get a new iPhoto library version now? Thank you for anything that may work.

    Yes, the Old Master file has a folder for each year where I find all photos from that specific year. I am attaching a screenshot of the file.
    In the meantime I have managed to download all photos (it did not download any video files, though, in mpg, avi, 3gp, m4v, mp4 and mov format) to a new iPhoto library. Unfortunately the photos are quite mixed up and often doubled up. I am considering purchasing a tool that checks for all duplicates in iPhoto; this will save me a lot of time. What do you think?

  • How do I get my deleted emails to go to the deleted items in Outlook and not the archived folder?

    How do I get my deleted emails to go to the deleted items in Outlook and not the archived folder?

    Who is the email account provider?
    With an Apple iCloud account, there is a preference setting with the account settings on the iPhone to archive messages - save deleted messages in your Archive folder.
    I believe the same is available with a Gmail account.
