PowerShell cmdlets to access Azure Storage Analytics

Hi,
Are there PowerShell cmdlets to access Azure Storage Analytics data (Capacity Metrics)?
-Vatsalya
Vatsalya - MSFT The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Microsoft.

Hi Vatsalya,
You could refer to this code sample for blob capacity metrics, "Get-CapacityMetrics.ps1": https://gist.github.com/RichardSlater/4753866/raw/91a2bf45fb24dff4f770a1384c3e6578ecbd20d5/Get-CapacityMetrics.ps1. In this sample you don't need to specify a container name.
Please try it; a minimal sketch of the same approach follows below.
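In case the link goes stale: the gist essentially queries the hidden $MetricsCapacityBlob table that Storage Analytics writes daily capacity data into (capacity metrics must already be enabled on the blob service). A minimal sketch of the same idea, assuming the .NET storage client library is loadable from PowerShell; the DLL path and account values are placeholders:

# Minimal sketch (not the linked gist verbatim): read daily blob capacity
# metrics from the $MetricsCapacityBlob table.
Add-Type -Path "C:\Path\To\Microsoft.WindowsAzure.Storage.dll"   # placeholder path

$creds   = New-Object Microsoft.WindowsAzure.Storage.Auth.StorageCredentials("myaccount", "<account key>")
$account = New-Object Microsoft.WindowsAzure.Storage.CloudStorageAccount($creds, $true)
$table   = $account.CreateCloudTableClient().GetTableReference('$MetricsCapacityBlob')

$query = New-Object Microsoft.WindowsAzure.Storage.Table.TableQuery
$table.ExecuteQuery($query) |
    # "data" rows describe your blobs; "analytics" rows describe the log data itself
    Where-Object { $_.RowKey -eq "data" } |
    ForEach-Object {
        [PSCustomObject]@{
            Day           = $_.PartitionKey                        # e.g. 20150205T0000
            CapacityBytes = $_.Properties["Capacity"].Int64Value
            ObjectCount   = $_.Properties["ObjectCount"].Int64Value
        }
    }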
Regards,

Similar Messages

  • Accessing Azure storage from Windows Phone with REST

    I'm struggling to make a simple HttpWebRequest to gather some metadata on a blob.
    My code works perfectly fine on Windows but
    gives error 403 on Windows Phone 7.
    The really strange thing is that, as you can see from the HEAD requests,
    the same auth that is OK from Windows is not OK on Windows Phone...
    I use a standard HttpWebRequest to establish the connection; the code is at the end of the post.
    I've already lost 10 hours trying everything that made sense to me, but with no success at all,
    so I ended up setting up a Fiddler tunnel connection to compare the requests.
    WORKING : Windows 8.1 desktop executable
    HEAD https://{MYHOST}.blob.core.windows.net/add-saves/save{etc...} HTTP/1.1
    ContentLength: 0
    x-ms-date: Thu, 05 Feb 2015 00:50:00 GMT
    x-ms-version: 2009-09-19
    Authorization: SharedKey heartbit:YnZwcEoRunuuoK6PC+JRge{etc...}
    Host: {MYHOST}.blob.core.windows.net
    RETURNS
    HTTP/1.1 200 OK
    Content-Length: 21648
    Content-Type: application/octet-stream
    Last-Modified: Wed, 04 Feb 2015 21:19:07 GMT
    ETag: 0x8D20ED7587824FE
    Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
    x-ms-request-id: 767e3faa-0001-0016-1559-eb76ed000000
    x-ms-version: 2009-09-19
    x-ms-lease-status: unlocked
    x-ms-blob-type: BlockBlob
    Date: Thu, 05 Feb 2015 00:54:41 GMT
    and my Windows Phone 7 app (throws with WebResponse 404 NotFound from webserver)
    HEAD https://{MYHOST}.blob.core.windows.net/add-saves/save{etc...} HTTP/1.1
    Accept: */*
    Accept-Encoding: identity
    ContentLength: 0
    x-ms-date: Thu, 05 Feb 2015 00:50:00 GMT
    x-ms-version: 2009-09-19
    Authorization: SharedKey heartbit:YnZwcEoRunuuoK6PC+JRge{etc...}
    Referer: file:///Applications/Install/5FEE5BAA-57A1-4451-B4A5-28E2E46C5598/Install/
    User-Agent: NativeHost
    Host: {MYHOST}.blob.core.windows.net
    Content-Length: 0
    Connection: Keep-Alive
    Cache-Control: no-cache
    RETURNS
    HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    Transfer-Encoding: chunked
    Server: Microsoft-HTTPAPI/2.0
    x-ms-request-id: b2654a03-0001-0038-1dd9-c4cb58000000
    Date: Thu, 05 Feb 2015 00:55:00 GMT
    byte[] byteArray = null;
    // The device clock (an issue also seen on Android) can be ahead of the time at
    // Microsoft's servers when the request arrives; backdating by a minute is usually enough.
    DateTime now = DateTime.UtcNow - TimeSpan.FromMinutes(1);
    string uri = Endpoint + resource;
    HttpWebRequest request = CreateRequestBody(method, requestBody, headers, ref byteArray, now, uri);
    string messageSign = MessageSignature(method, now, request, ifMatch, md5);
    try
    {
        requestAuthAsync(messageSign, authKey =>
        {
            try
            {
                if (((string)authKey) == "Error")
                    throw new Exception("Auth failure");
                string authorizationHeader = "SharedKey " + StorageAccount + ":" + authKey;
                request.Headers.Add("Authorization", authorizationHeader);
                if (!String.IsNullOrEmpty(requestBody))
                    request.GetRequestStream().Write(byteArray, 0, byteArray.Length);
                request.BeginGetResponse(ar =>
                {
                    try
                    {
                        var r = request.EndGetResponse(ar);
                        onCompleted(r);
                    }
                    catch (WebException e)
                    {
                        onCompleted.Invoke("Error : " + (e.Response as HttpWebResponse).StatusDescription);
                    }
                    catch (Exception e)
                    {
                        onCompleted.Invoke("Error : " + e.Message);
                    }
                }, null);
            }
            catch (Exception e)
            {
                onCompleted.Invoke("Error : " + e.Message);
            }
        });
    }
    catch (Exception ioe)
    {
        onCompleted.Invoke("Error : " + ioe.Message);
    }
    Heart Bit Interactive

    It seems like the Windows Phone version of HttpWebRequest automatically adds a "Content-Length" header that is different from the one I set ("ContentLength"). Why is that?
    The auth signing shouldn't depend on that, right?
    Since I cannot find a way to remove that header, I tried removing the one I was manually adding, but without luck.
    These are the other headers that are automatically added on Windows Phone:
    Accept: */*
    Accept-Encoding: identity
    Referer: file:///Applications/Install/5FEE5BAA-57A1-4451-B4A5-28E2E46C5598/Install/
    User-Agent: NativeHost
    Content-Length: 0
    Connection: Keep-Alive
    Cache-Control: no-cache
    Will they create issues with authorization? The auth string on WP is the same as the one found on Windows, so there must be something in the WP request to Azure that doesn't match the auth string I generated earlier.
    Heart Bit Interactive
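    For what it's worth, the Shared Key signature does depend on Content-Length: the string-to-sign for x-ms-version 2009-09-19 includes it, so a Content-Length value the phone's HTTP stack rewrites after signing no longer matches the signature. A rough PowerShell sketch of the signing scheme, with every value a placeholder:

    # Rough sketch of Shared Key signing for x-ms-version 2009-09-19. Content-Length
    # sits in the string-to-sign: if the HTTP stack sends a different value than the
    # one that was signed, the service answers 403.
    $account = "myhost"
    $key     = [Convert]::FromBase64String("<base64 account key>")
    $date    = [DateTime]::UtcNow.ToString("R")     # RFC1123 date for x-ms-date

    $stringToSign = @(
        "HEAD"    # VERB
        ""        # Content-Encoding
        ""        # Content-Language
        "0"       # Content-Length: must match the header actually sent
        ""        # Content-MD5
        ""        # Content-Type
        ""        # Date (empty because x-ms-date is supplied instead)
        ""        # If-Modified-Since
        ""        # If-Match
        ""        # If-None-Match
        ""        # If-Unmodified-Since
        ""        # Range
        "x-ms-date:$date`nx-ms-version:2009-09-19"  # canonicalized x-ms-* headers
        "/$account/add-saves/save..."               # canonicalized resource (path elided as in the post)
    ) -join "`n"

    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = $key
    $sig = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))
    "SharedKey ${account}:$sig"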

  • Are WebJobs relying on best-effort storage analytics?

    I'm trying to understand how WebJobs works to determine whether it's something we can use. I've only played around with it for an hour so I'm by no means an expert.
    I've created a small simple web job which just processes blobs in a specific container, like this:
    public static void ProcessBlob([BlobTrigger("container/{name}")] TextReader input, string name)
    {
        string s = input.ReadToEnd();
    }
    It runs fine in a console app I use for testing. I use Fiddler to monitor the http traffic it produces.
    In Fiddler I see that it accesses the $logs container in the storage account. If I understand everything correctly, the data in this container is generated by Storage Analytics described here:
    http://msdn.microsoft.com/library/azure/hh343262.aspx
    When reading that page about storage analytics, I see the following statement:
    "Requests are logged on a best-effort basis."
    It sounds to me like there's no guarantee that all requests will be logged, and it's not really clear what may
    cause logging to fail. If I understand WebJobs properly, it relies on all requests being logged. So
    the picture I get is that WebJobs are "best-effort" as well, and sometimes the code might not be executed for a blob.
    Is my understanding correct, or am I missing something?
    Nitramafve

    Thanks Girish!
    Also thanks Frank, but what you write does not seem to be entirely correct. I understand that WebJobs are a piece of code that can be run on demand. But before starting to use it, I wanted to understand exactly how WebJobs actually works. I've been
    working with Azure Storage for 4 years and know that there is no documented, robust way to get notifications when a change is made to blob storage. Based on that, it seemed to me that implementing WebJobs on top of Azure Storage would be hard, unless there
    were some undocumented Azure services it was relying on.
    So I dug into the Azure WebJobs source code and monitored its requests towards Azure Storage, and saw that it was relying on Azure Storage Analytics logs to determine which blobs have been created/updated (so that it can trigger the code execution). Internally,
    within the Azure WebJobs SDK assemblies, there's code which is designed to parse Azure Storage Analytics logs (the class StorageAnalyticsLogParser).
    This is what does not add up. Azure WebJobs appears to be relying on Azure Storage Analytics, but Azure Storage Analytics is documented as best-effort, which would mean that Azure WebJobs would not always work properly.
    But I'm sure I'm missing something here. Maybe Storage Analytics is actually "robust" and not best-effort.
    Nitramafve
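    As an aside, you can inspect the same $logs data that the WebJobs SDK's StorageAnalyticsLogParser reads by using the storage cmdlets. A sketch, assuming the classic Azure PowerShell module and placeholder account values:

    # Sketch: look at the $logs container that blob triggers are scanning.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"

    # Logging must be switched on for the blob service, or $logs stays empty.
    Set-AzureStorageServiceLoggingProperty -ServiceType Blob `
        -LoggingOperations Read,Write,Delete -RetentionDays 7 -Context $ctx

    # Each blob here is a log file listing individual storage requests.
    Get-AzureStorageBlob -Container '$logs' -Context $ctx |
        Select-Object Name, Length, LastModified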

  • Access blob storage files by specific domain. (Prevent hotlinking in Azure Blob Storage)

    Hi,
    My application is deployed on Azure, and I keep all my files in blob storage.
    When I create a container with public permission, it is accessible to all anonymous users. When I hit the URL of a file (blob) from a different browser, I get that file.
    In our application we have some important files and images that we don't want to expose. When we render an HTML page, we set src="{blob file url}" in the <img> tag; the public file is accessible, but if I copy the same URL
    and open it in another browser, it is still visible. My requirement is that only my application's domain should be able to access those public files in blob storage.
    Amazon S3 provides bucket policies where you can define that files are accessible only from a specific domain; see http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html,
    "Restricting Access to a Specific HTTP Referrer".

    hi Prasad,
    Thanks for your post back.
    Both SAS and CORS could work, but neither is comprehensive on its own.
    For your requirement, "only my application's domain should be able to access that public file in blob storage": if you want to stop sites on other domains from reading your blobs from the browser, you can set CORS rules for the blob service. The origin domain of the request is checked against the domains listed in the
    AllowedOrigins element. If the origin domain is included in the list, or all domains are allowed with the wildcard character '*', rules evaluation proceeds; if the origin domain is not included, the request fails, so scripts on other domains cannot read
    your resource. You can also try Gaurav's blog:
    http://gauravmantri.com/2013/12/01/windows-azure-storage-and-cors-lets-have-some-fun/
    If you access a CORS resource, you also need to authenticate with SAS.
    However, SAS means that you can grant a client limited permissions to your blobs, queues, or tables for a specified period of time and with a specified set of permissions, without having to share your account access keys. The SAS is a URI that encompasses
    in its query parameters all of the information necessary for authenticated access to a storage resource (http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/).
    So if your SAS URI is available and not expired, that URI can be used from another domain's site as well. I think you can try to test it.
    If I misunderstood, please let me know.
    Regards,
    Will
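    For completeness, a sketch of what setting such a CORS rule looks like with the storage cmdlets (the origin is a placeholder). Note that CORS only gates in-browser script access; it does not stop someone pasting the blob URL directly into a browser:

    # Sketch: allow only your own domain's scripts to read blobs cross-origin.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"

    $rule = @{
        AllowedOrigins  = @("https://www.mydomain.com")   # placeholder domain
        AllowedMethods  = @("Get","Head")
        AllowedHeaders  = @("*")
        ExposedHeaders  = @("*")
        MaxAgeInSeconds = 3600
    }
    Set-AzureStorageCORSRule -ServiceType Blob -CorsRules $rule -Context $ctx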

  • Access SAS URI through tool like Azure Storage Explorer

    Is there any tool available that can access a Storage Account container using a SAS URI instead of the Access Keys? I have verified that Azure Storage Explorer and CloudBerry Explorer don't support SAS URIs. The client wants to download files using
    a SAS URI, and we don't want to share the Storage Account keys.

    Hi,
    As my reply in this thread:
    https://social.msdn.microsoft.com/Forums/en-US/6219ae6c-bb2c-4e8f-a49c-4a590a450faa/storage-account-access-using-roles?forum=windowsazuredata, please give some feedback to the tool author or submit feedback at the Azure feedback forum.
    Best Regards,
    Jambor
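    In the meantime, the storage cmdlets themselves can stand in for a GUI tool: you can hand the client a container-level SAS and let them download with nothing but the token. A sketch (account, container, and destination are placeholders):

    # Sketch: issue a read+list SAS, then consume it without the account key.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
    $sas = New-AzureStorageContainerSASToken -Name "mycontainer" -Permission rl `
        -ExpiryTime (Get-Date).AddDays(7) -Context $ctx

    # The client side needs only the SAS token, never the key.
    $clientCtx = New-AzureStorageContext -StorageAccountName "myaccount" -SasToken $sas
    Get-AzureStorageBlob -Container "mycontainer" -Context $clientCtx |
        Get-AzureStorageBlobContent -Destination "C:\downloads\" -Context $clientCtx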

  • FSRM PowerShell Cmdlets not working in Azure on Server 2012

    I am attempting to use the FSRM PowerShell cmdlets on a 2012 server to configure auto quotas. However, I guess I am doing something wrong.
    Even though there are built-in templates, and I have added the one I want manually and created an auto quota from it, they do not appear to the cmdlets, and the cmdlets keep throwing CimException when I try to use them to create templates and auto quotas.
    For example, get-fsrmquotatemplate returns nothing, and new-fsrmquotatemplate -name mydemo -size 10MB results in:
    new-fsrmquotatemplate : Not found
    At line:1 char:6
    + $res=new-fsrmquotatemplate mydemo -size 10MB
    +      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : ObjectNotFound: (MSFT_FSRMQuotaTemplate:Root/Microsoft/...RMQuotaTemplate) [New-FsrmQuotaTemplate], CimException
        + FullyQualifiedErrorId : HRESULT 0x80041002,New-FsrmQuotaTemplate
    Is there something I am supposed to do first to set the system up?

    Hi,
    From the result, it seems that the FSRM templates cannot be found. Please check whether both FSRM and Windows PowerShell 3.0 are installed under Features.
    Specifically, you can test removing and reinstalling FSRM to see if the issue is fixed.
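    A few quick checks along those lines (a sketch):

    Get-WindowsFeature FS-Resource-Manager   # the FSRM role service must show Installed
    Get-Service SrmSvc                       # the File Server Resource Manager service must be Running
    $PSVersionTable.PSVersion                # the FSRM cmdlets need PowerShell 3.0 or later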
    If you have any feedback on our support, please send to [email protected]

  • How to supply an end point to PowerShell cmdlet Rename-Blob

    [cross posted from http://stackoverflow.com/questions/21352295/azure-storage-cmdlet-rename-blob-wants-an-endpoint]
    I'm attempting to rename a blob to all lower case:
    Rename-Blob -BlobUrl "https://ttseast.blob.core.windows.net/images/Add.png" -NewName "https://ttseast.blob.core.windows.net/images/add.png"
    I've verified the blob URI by plugging it into a browser. However, attempting to execute the command tosses:
    Rename-Blob : Blob URI does not correspond to storage account end point. A Blob URI must contain blob storage end point.
    The arguments for Rename-Blob don't reference anything 'endpoint'. I've loaded the subscription, so I should be authenticated and not forced to include AccountName/Key.
    How do I determine (or set) the required end point?
    thx

    If I'm not mistaken, I believe you're using the Cerebrata Azure Management Cmdlets, as the Windows Azure PowerShell Cmdlets do not have a Rename-Blob cmdlet (Cerebrata's do).
    To use the Rename-Blob cmdlet, please try the following:
    Rename-Blob -BlobUrl "https://ttseast.blob.core.windows.net/images/Add.png" -NewName "https://ttseast.blob.core.windows.net/images/add.png" -AccountName "ttseast" -AccountKey "<your accountkey>"
    Thanks to Jaydonli for the alternate approach. It turns out that while I thought I was using the native MS cmdlets, I was in fact using the third-party cmdlets from Red Gate (Cerebrata), as per Gaurav Mantri at http://stackoverflow.com/questions/21352295/azure-storage-cmdlet-rename-blob-wants-an-endpoint
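    For reference, with the native cmdlets a "rename" is a server-side copy followed by a delete. A sketch (the account key is a placeholder):

    # Sketch: copy Add.png to add.png, wait for the copy, drop the original.
    $ctx = New-AzureStorageContext -StorageAccountName "ttseast" -StorageAccountKey "<key>"
    Start-AzureStorageBlobCopy -SrcContainer "images" -SrcBlob "Add.png" `
        -DestContainer "images" -DestBlob "add.png" -Context $ctx
    Get-AzureStorageBlobCopyState -Container "images" -Blob "add.png" -Context $ctx -WaitForComplete
    Remove-AzureStorageBlob -Container "images" -Blob "Add.png" -Context $ctx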

  • Can't Access Azure VM after IP Address Change

    I recently logged into my Azure VM (Windows 2012 R2) and did the following:
    went to "Network Connections", selected properties
    Selected IPV4 and changed the "obtain IP Address automatically" option to "Use the following IP Address".
    I set the IP Address to the address that Azure gave me in the Azure Portal.
    However, since then - the machine is running but I can't access it.  Through some investigating, I realised that I shouldn't have done that.  Lesson Learnt.
    However, my question is - is there a way to change this in Powershell so that I can gain access again?
    or do I need to delete and recreate the server?
    Please help.
    Thanks in Advance.

    Hi,
    Based on my experience, if you change the TCP/IP settings on the Azure VM, then the next time you reboot the VM from the Azure management portal, the settings you saved on the VM will be reset to the default (obtain an IP address automatically).
    I recommend you restart the VM from the Azure management portal and then check the VM's quick glance to make sure all parameters are displayed. After that, you can connect to it (using xxx.cloudapp.net with the RDP port) without any problems.
    If the above suggestion is not helpful, please share the detailed error message with us.
    By the way, did you mean the internal IP address in the portal? Do you want to assign a static internal IP address to that VM? If yes, the VM should be in a virtual network, and you can use the Azure PowerShell cmdlets to do that:
    Configure a Static Internal IP Address for a VM
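    A sketch of what that looks like (classic service management cmdlets; the service, VM name, and address are placeholders):

    # Sketch: pin a static internal IP on an existing VM in a virtual network.
    Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
        Set-AzureStaticVNetIP -IPAddress "10.0.0.4" |
        Update-AzureVM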
    Best regards,
    Susie
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Azure Storage - How to tell what vhd is attached to what?

    In the vhd container of our storage account we have VHD blobs. The issue is that it looks like some were renamed from what I manually named them to some incomprehensible name. I'm not sure when this took place, but it was sometime in the last month. So I now
    have no idea which base OS VHD goes where.
    How can I tell which VHD is attached to which VM?
    How do I rename these VHDs without taking down the server?
    How can I find out which ones are not being used? It looks like 2 VHDs (30 GB) haven't been modified for a month, but how do I actually know they are not attached to anything and being used? (Note: we only have VHDs, no other blobs.)
    How do I get Azure to stop renaming the blobs to some meaningless name?
    Thanks,
    Chris

    How can I tell which VHD is attached to which VM?
    Since a VHD is essentially a page blob and is stored as a disk (OS disk or data disk), you can list all disks and the result will include the URL of each backing blob. If you're using the
    Windows Azure PowerShell Cmdlets, you can find this information by executing the Get-AzureDisk cmdlet. Also see this link for more details:
    http://msdn.microsoft.com/en-us/library/windowsazure/jj157176.aspx
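    For example, a sketch along those lines:

    # Sketch: list every disk, the VM it is attached to (blank when unattached),
    # and the URL of the backing VHD blob.
    Get-AzureDisk |
        Select-Object DiskName, AttachedTo, MediaLink |
        Sort-Object DiskName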
    How do I rename these VHD's without taking down the server?
    I don't think you can do that. Again, because these VHDs are page blobs and blob storage does not support a "Rename" operation natively, you would need to copy the VHD first and, once the copy is complete, delete the original one.
    How can I find out which ones are not being used? It looks like 2 vhd's (30GB) haven't been modified for a month but how do I actually know they are not being attached to anything and being used? (Note: we only have vhd's no other blobs.)
    When a VHD is attached to a VM, Windows Azure puts a lock (lease) on the blob holding the VHD. The easiest way for you to check is to list the blobs in that container and find out which blobs have an active lease on them. The other way, obviously, is to get all disks,
    get their URLs, and compare them with all blobs in that container.
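    A sketch of the lease check with the storage cmdlets (account and container are placeholders):

    # Sketch: an attached VHD's blob carries an active (infinite) lease.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
    Get-AzureStorageBlob -Container "vhds" -Context $ctx |
        Select-Object Name, LastModified,
            @{ n = "LeaseStatus"; e = { $_.ICloudBlob.Properties.LeaseStatus } }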
    How do I get Azure to stop renaming the blobs to some meaningless name?
    Not sure what you mean. Could you please explain?
    Hope this helps.

  • How to set same domain name for Azure Storage and Hosted Service

    I have a web application running on Azure and using Azure blob storage. My application lets users edit HTML files that are in Azure storage, save them under another name, and publish them later. I am using JavaScript to edit the HTML content that
    I display in an iframe, but as the domain of my web application and of the HTML that I try to edit are not the same, I get this error: "Unsafe JavaScript
    attempt to access frame with URL "URL1" from frame with URL "URL2". Domains, protocols and ports must match".
    I have been doing some research about it, and the only way to make it work is to have the web application and the HTML that I want to access using JavaScript under the same domain.
    So my question is: is it possible to have the same domain name in Azure for the hosted service and the storage?
    Like this:
    hosted service: example.com
    storage: example.com
    By the way, I have already customized the domain names, so they look like this:
    hosted service: <mydomainname>.com
    storage: <blob.mydomainname>.com
    Actually, I have my application running at another host with no problem, since there I have a folder where I store the files that I am editing, so they are in the same domain as the application. I have been thinking of doing the same with Azure,
    at least having a folder where I can store the HTML file while I am editing it, but I am not sure how much space I have in the hosted service for temporary files.
    Let me know if you have a good tip or idea about how to solve this issue.

    Hi Rodrigo,
    Though both Azure blobs and Azure applications support custom domains, one domain can have only one DNS record (in this case a CNAME record) at a time. In Steve's case, he has 3 domains: blog.smarx.com, files.blog.smarx.com and cdn.blog.smarx.com.
    > I would like to find a way to store the HTML page that I need to edit under the same domain.
    For this case, a workaround is to add an HTTP handler to your Azure application to serve file requests. That means we do not use the actual blob URL to access blob content, but send the request to an HTTP handler; the handler then gets the content
    from blob storage and returns it.
    Please check
    Accessing blobs in private container without Shared Access Secret key for a code sample.
    Thanks.
    Wengchao Zeng
    Please mark the replies as answers if they help or unmark if not.
    If you have any feedback about my replies, please contact
    [email protected]
    Microsoft One Code Framework

  • Azure Storage Blob error - AnonymousClientOtherError and AnonymousNetworkError (why cannot see images)

    I have a mobile app, and I put its images in Azure blob storage. When it was tested by several of our own people (on test and beta), all was good.
    But when we released it to beta and had hundreds (maybe above one thousand) of users, lots of them reported that they cannot see images. It happens on their iPhones and also on many different brands of Android phones. Sometimes the same image
    shows fine on one phone but doesn't show on another.
    When I check blob log, I saw a lot of errors, mainly these two:
    AnonymousClientOtherError; 304
    "Anonymous request that failed as expected, most commonly a request that failed a specified precondition. The total number of anonymous requests that failed precondition checks (like If- Modified etc.) for GET requests. Examples: Conditional GET requests
    that fail the check."  (From Microsoft)
    AnonymousNetworkError; 200
    "The most common cause of this error is a client disconnecting before a timeout expires in the storage service. You should investigate the code in your client to understand why and when the client disconnects from the storage service. You can also use
    Wireshark, Microsoft Message Analyzer, or Tcping to investigate network connectivity issues from the client." (From Microsoft) - a question here: this is an error, but why is the status 200?
    I wonder if these are the reasons that caused my problem?
    For the first one, from my understanding, it is not actually an error; it just says that the version the client cached is the same as the server's version. But when my client side sees this response, does it treat it as an error, throw an exception, and thus show no
    image? (I actually outsourced my client side, so I can only guess.) I later tested accessing these images with my browser and found that if I refresh the browser with the same URL of the image, I see error 304 in the log but I still see the image. So I
    guess this is not actually the reason for my problem.
    For the second one, is it because my client side's timeout is shorter than the server side's timeout? But is that a connection timeout or a socket timeout? What are the default values on the client side and on Azure blob storage? Or is it because the network
    is not good? My Azure server is located in East Asia (Hong Kong), but my users are in mainland China. I wonder if that could cause problems? But when a few users tested in China, all was good.
    Many of the images are actually very small, just one to two hundred KB. Some are just 11 KB.
    I cannot figure out the reason...

    Hi,
    Do any users encounter this error when they access small pictures? If this issue is only caused by large pictures, please try increasing the timeout. You can also monitor your storage account from the Azure Management Portal, which may help
    us find more detailed error information. See more at:
    http://azure.microsoft.com/en-gb/documentation/articles/storage-monitor-storage-account/. Hope it helps.
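    If metrics are not switched on yet, a sketch of enabling them from PowerShell (classic storage cmdlets; account values are placeholders):

    # Sketch: hourly, per-API blob metrics make counters such as
    # AnonymousNetworkError visible in the portal's monitoring charts.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
    Set-AzureStorageServiceMetricsProperty -ServiceType Blob -MetricsType Hour `
        -MetricsLevel ServiceAndApi -RetentionDays 7 -Context $ctx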
    Regards

  • PowerBI & Azure Stream Analytics jobs login issue

    Hello team,
    We are working as an early-adopter partner for Azure Stream Analytics along with the Azure IoT suite. We recently got the PowerBI service enabled as an output connector for Stream Analytics jobs on our corporate subscription, and we use the same org ID to
    log in to Azure Stream Analytics and the PowerBI service.
    But, to our great surprise: after creating the SA job, we configured PowerBI as output, it redirected for authorization, and we applied the PowerBI dataset and table name. But after logging into the app.powerbi.com portal, we are not able to see the Stream Analytics
    job's dataset and table.
    Note: We are using the same org ID for logging in, creating SA jobs, and logging into the PowerBI preview portal.
    It would be great if there were specific instructions/a guide for connecting PowerBI with an ASA job apart from this. Any pointer will be appreciated.
    Thanks,
    Anindita Basak
    MAX451, Inc. 
    Anindita

    Hello there,
    Thanks for the reply.
    No, we're not able to see any event with status 'Failed' in the SA operation logs. (The relevant screenshot of the event logs was attached.)
    The jobs are running fine: if we use SQL Azure tables as the output connector, the data is available. Only with the PowerBI output connector are the datasets not visible, though we're using the same org ID (i.e.
    [email protected]) for creating the ASA jobs and logging into the PowerBI subscription.
    Thanks for your help!
    Anindita Basak
    MAX451, Inc
    Anindita

  • Azure Storage Used by non-Azure Website in PHP

    Dear sirs,
    We are trying for the first time to use Azure storage for video and images with a non-Azure PHP website. How can we access, save, and get files from an Azure blob account on a non-Azure PHP website?
    Can somebody please help me with it?
    Thanks & Regards,
    Pedro

    Hi Pedro,
    Thanks for your posting!
    In your scenario, two approaches could meet your requirements.
    1. You could download the Azure SDK for PHP, as Magamalare said.
    Also, you could see this blog post and sample:
    http://blogs.msdn.com/b/brian_swan/archive/2010/07/08/accessing-windows-azure-blob-storage-from-php.aspx
    2. You could use the Azure storage REST API to operate on the storage:
    http://msdn.microsoft.com/en-us/library/azure/dd135733.aspx
    Regards,
    Will

  • 403 Error when access Table Storage using SAS token

    I have an Azure Mobile Service with a custom API that generates a SAS token for accessing Table Storage from a Windows Store app.
    I get the following error in the Windows Store app while accessing table storage using the SAS token:
    Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    Example of sas token generated:
    se=2014-09-12T03%3A10%3A00Z&sp=rw&spk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&epk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&sv=2014-02-14&tn=Folders&sig=91c7S1QM0byNdM80JncwRribXqsWS1iKmOH8cRvHWhQ%3D
    Azure Mobile Services API Code that generates sas token:
    exports.get = function (request, response) {
        var azure = require('azure-storage');
        var accountName = 'myAccountName';
        var accountKey = 'myAccountKey';
        var host = accountName + '.table.core.windows.net';
        var tableService = azure.createTableService(accountName, accountKey, host);
        var sharedAccessPolicy = {
            AccessPolicy: {
                Permissions: 'rw', // Read and Write permissions
                Expiry: dayFromNow(1),
                StartPk: request.user.userId,
                EndPk: request.user.userId
            }
        };
        var sasToken = tableService.generateSharedAccessSignature('myTableName', sharedAccessPolicy);
        response.send(statusCodes.OK, { sasToken: sasToken });

        function dayFromNow(days) {
            var result = new Date();
            result.setDate(result.getDate() + days);
            return result;
        }
    };
    Windows Store app code that uses sas token:
    public async Task TestSasApi()
    {
        try
        {
            var tableEndPoint = "https://myAccount.table.core.windows.net";
            var sasToken = await this.MobileService.InvokeApiAsync<Azure.StorageSas>("getsastoken", System.Net.Http.HttpMethod.Get, null);
            StorageCredentials storageCredentials = new StorageCredentials(sasToken);
            CloudTableClient tableClient = new CloudTableClient(new Uri(tableEndPoint), storageCredentials);
            var tableRef = tableClient.GetTableReference("myTableName");
            TableQuery query = new TableQuery().Where(
                TableQuery.GenerateFilterCondition("PartitionKey",
                    QueryComparisons.Equal,
                    this.MobileService.CurrentUser.UserId));
            TableQuerySegment seg = await tableRef.ExecuteQuerySegmentedAsync(query, null);
            foreach (DynamicTableEntity ent in seg)
            {
                string str = ent.ToString();
            }
        }
        catch (Exception ex)
        {
            string msg = ex.Message;
        }
    }
    Exception:
    Any help is appreciated.
    Thanks in advance!
    Thanks, Vinod Shinde

    Hi Mekh,
    Thanks for the links. I checked them, and mostly they are about date/time skew between the client and the server.
    But this is not the case in this scenario.
    here is the Request and Response from Fiddler.
    Request:
    GET
    https://myaccount.table.core.windows.net/Folders?se=2014-09-13T02%3A33%3A26Z&sp=rw&spk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&epk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&sv=2014-02-14&tn=Folders&sig=YIwVPHb2wRShiyE2cWXV5hHg0p4FwQOGmWBHlN3%2FRO8%3D&api-version=2014-02-14&$filter=PartitionKey%20eq%20%27MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2%27
    HTTP/1.1
    Accept: application/atom+xml, application/xml
    Accept-Charset: UTF-8
    MaxDataServiceVersion: 2.0;NetFx
    x-ms-client-request-id: b5d9ab61-5cff-498f-94e9-437694e9256c
    User-Agent: WA-Storage/4.2.1 (Windows Runtime)
    Host: todoprime.table.core.windows.net
    Response:
    HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    Content-Length: 437
    Content-Type: application/xml
    Server: Microsoft-HTTPAPI/2.0
    x-ms-request-id: 22c0543b-0002-0049-7337-da39f4000000
    Date: Thu, 11 Sep 2014 02:33:28 GMT
    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
      <code>AuthenticationFailed</code>
      <message xml:lang="en-US">Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    RequestId:22c0543b-0002-0049-7337-da39f4000000
    Time:2014-09-11T02:33:29.6520060Z</message>
    </error>
    Do you see anything different in this request/response?
    Thanks, Vinod Shinde
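    One way to narrow this down: generate a table SAS for the same table with the storage cmdlets and diff its query string against the service-generated one (a sketch; account, key, and partition key are placeholders). One detail that may be worth checking: the table service defines SAS permissions as r/a/u/d (query, add, update, delete), so the sp=rw in the failing token is itself suspicious:

    # Sketch: issue a comparable table SAS and compare sv, tn, sp, se, spk/epk, sig.
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
    New-AzureStorageTableSASToken -Name "Folders" -Permission rau `
        -ExpiryTime (Get-Date).ToUniversalTime().AddDays(1) `
        -StartPartitionKey "MicrosoftAccount:005d92ef08ec5d83081afed1e08641d2" `
        -EndPartitionKey "MicrosoftAccount:005d92ef08ec5d83081afed1e08641d2" `
        -Context $ctx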

  • How do I connect to the SCVMM PowerShell cmdlet?

    Hi all,
    I am trying to connect to the SCVMM PowerShell cmdlets from the SCVMM console, but what I am getting is this:
    Get-SCVMMServer : You cannot access VMM management server localhost. (Error ID: 1604)
    Contact the Virtual Machine Manager administrator to verify that your account is a member of a valid user role and
    then try the operation again.
    At line:1 char:409
    + ... $vmmserver_VAR=Get-SCVMMServer localhost -UserRoleName 'Administrator';
    +                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : ReadError: (:) [Get-SCVMMServer], CarmineException
        + FullyQualifiedErrorId : 1604,Microsoft.SystemCenter.VirtualMachineManager.Cmdlets.ConnectServerCmdlet
    Please help.
    Thanks,
    Pranay.

    As was mentioned, SCVMM has its own security, separate from Windows.
    To use the PowerShell cmdlets you must be an SCVMM Administrator. No lower SCVMM user security level can use PowerShell.
    With the information provided, that is the only guess.
    Since you are attempting to access localhost, I assume that you are launching the console on the SCVMM server itself? That means it is running in your logged-on user security context.
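    If the role membership checks out, a sketch of connecting explicitly instead of relying on the logged-on context (the server name is a placeholder):

    # Sketch: name the VMM server and supply credentials that are in the
    # SCVMM Administrator user role.
    $cred = Get-Credential
    Get-SCVMMServer -ComputerName "vmmserver.contoso.com" -Credential $cred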
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
