Caching for Azure Files

Hello,
The new Azure Files service (currently in preview) is going to replace the Cloud Drive feature in 2015.
I would like to know whether there are any plans to provide caching features (read/write) like the ones currently available for Cloud Drives.
The performance of Azure Files is currently limited by network speed restrictions based on VM size. Cloud Drive caching improves performance and allows achieving higher speeds than the network limit.
Deprecating Cloud Drives without providing caching features means that actual performance will drop for users not using X-Large instances, as the network speed limits for smaller sizes are below the 480 Mbps limit of Azure Files.
Jakub

Hi Jakub,
Thanks for your post!
Based on my experience, the new Azure File service gives us network-connected cloud storage that we can use seamlessly from inside a cloud service, so Azure File storage behaves like a VM local disk. Azure Cloud Drive allowed us to use the node's local temporary disk as a read cache, and the space for a drive's cache was allocated from our web role or worker role temporary disk.
Regarding the network limit, it seems that the Azure File service uses Azure's internal network. For the Azure File service features, I recommend you refer to this blog: http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
If I am misunderstanding, please feel free to let me know.
Regards,
Will

Similar Messages

  • Azure Files - not useful (for me) in its present form

    Not sure where to offer feedback for Azure Files so I thought I'd put it here.
    Azure Files looks like an interesting technology but unfortunately doesn't suit my particular scenario. We have files sitting on a VM that we need to get into BLOB Storage so that they can be processed via an HDInsight cluster. I had hoped that Azure Files would simply be an SMB interface over the top of existing BLOB Storage, but it turns out that's not the case; it's another area of storage entirely.
    What I'd like is an SMB interface over BLOB Storage so that I could simply drop files into BLOB Storage using SMB. Either that, or give HDInsight the ability to access files in File Storage as easily as it currently accesses files in BLOB Storage.
    Hope that is useful.

    Hi,
    Thanks for posting here; that is helpful.
    However, you may post your concern at http://feedback.azure.com/forums/217298-storage, which is a dedicated feedback forum.
    Regards,
    Shirisha Paderu.

  • GIF files not being cached for toolbar (WEB Forms)

    I realized that the GIF files we are using in our toolbars are not being cached by the browser. I analyzed the file XLF.LOG and the files always have access code "200" (not cached). The file REGISTRY.DAT has the same problem. Only the JAR file has access code "304" (cached). Although these GIF files are small (~900 bytes), it takes many seconds to open each form when someone executes our application over a low-speed line (many of our forms have toolbars with more than 30 icons). Is there any special configuration I have to do, or will this be corrected in future versions?
    XLF.LOG (first and second execution):
    #Version: 1.0
    #Software: Oracle WRB Log Server
    #Fields: clf
    1.0.5.57 - - [24/Jan/2000:07:45:48 -0300] "GET /htm/sales HTTP/1.0" 200 1440
    1.0.5.57 - - [24/Jan/2000:07:45:51 -0300] "GET /htm/ HTTP/1.0" 200 760
    1.0.5.57 - - [24/Jan/2000:07:45:51 -0300] "GET /forms_code/f60all.jar HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:45:53 -0300] "GET /forms_code/javax/swing/JInternalFrame.class HTTP/1.0" 404 99
    1.0.5.57 - - [24/Jan/2000:07:45:55 -0300] "GET /forms_code/oracle/forms/registry/Registry.dat HTTP/1.0" 200 4122
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/PASSWORD.gif HTTP/1.0" 200 853
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/HELP.gif HTTP/1.0" 200 898
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/KEYS.gif HTTP/1.0" 200 864
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/EXIT.gif HTTP/1.0" 200 888
    1.0.5.57 - - [24/Jan/2000:07:47:28 -0300] "GET /htm/sales HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:30 -0300] "GET /htm/ HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:30 -0300] "GET /forms_code/f60all.jar HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:32 -0300] "GET /forms_code/javax/swing/JInternalFrame.class HTTP/1.0" 404 99
    1.0.5.57 - - [24/Jan/2000:07:47:32 -0300] "GET /forms_code/oracle/forms/registry/Registry.dat HTTP/1.0" 200 4122
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/PASSWORD.gif HTTP/1.0" 200 853
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/HELP.gif HTTP/1.0" 200 898
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/KEYS.gif HTTP/1.0" 200 864
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/EXIT.gif HTTP/1.0" 200 888

    Have you tried turning on the "File Caching" setting under the "Server" branch for your Listener under OAS?

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using the BDB JE DPL package for our application.
    With our current machine configuration, we have:
    1) 64 GB RAM
    2) 40-50 GB of Berkeley DB data
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all .jdb files in the directory through a shell,
         // so that the glob and the redirection are actually interpreted
         Process p = Runtime.getRuntime().exec(
                 new String[] { "sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks whether new data is available every 15 minutes. If new data is available, it clears all old references and loads the new data, again running cat *.jdb > /dev/null.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache.
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then it was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (leaf node = LN) into the cache.
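    As a rough illustration of the cache sizing and eviction settings described above, here is a minimal Berkeley DB JE sketch (base API rather than DPL; the environment path and database name are made up):
         // A sketch only: ~50% DB cache for btree internal nodes, EVICT_LN so leaf
         // values are served from the file system cache instead of the DB cache.
         import com.sleepycat.je.CacheMode;
         import com.sleepycat.je.Database;
         import com.sleepycat.je.DatabaseConfig;
         import com.sleepycat.je.Environment;
         import com.sleepycat.je.EnvironmentConfig;
         import java.io.File;

         public class CacheTuningSketch {
             public static void main(String[] args) {
                 EnvironmentConfig envConfig = new EnvironmentConfig();
                 envConfig.setAllowCreate(true);
                 envConfig.setCachePercent(50);              // cache sized mainly for internal nodes

                 Environment env = new Environment(new File("/data/bdb-env"), envConfig);

                 DatabaseConfig dbConfig = new DatabaseConfig();
                 dbConfig.setAllowCreate(true);
                 dbConfig.setCacheMode(CacheMode.EVICT_LN);  // do not keep leaf nodes (values) in the DB cache

                 Database db = env.openDatabase(null, "exampleDb", dbConfig);
                 // ... reads of record data now lean on the OS file system cache ...
                 db.close();
                 env.close();
             }
         }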
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.

  • Creating cache for multiple property files run time/dynamically.

    Hi,
    I have a requirement wherein I need to create a cache for each property file present in a folder on the server side, or in the lib or resources directory. Please help me with how I can do this.
    Thanks.

    OK, thank you.
    I followed this method implementation:
    static HashMap<String, HashMap<Object, String>> cacheHolder = new HashMap<String, HashMap<Object, String>>();
    static HashMap<Object, String>[] cache = new HashMap[2];
    static Integer fileCount = 0;
    static int incrementSize = 2;

    public void method1(String value) { // value is the file name returned from the external method
        File file = new File("ABC/XYZ/" + value);
        HashMap<Object, String> someVal = cacheHolder.get(value);
        if (someVal == null) {
            synchronized (fileCount) {
                if (fileCount == cache.length) {
                    // Grow the array before storing the new entry
                    int oldSize = cache.length;
                    int newSize = oldSize + incrementSize;
                    HashMap<Object, String>[] oldData = cache;
                    cache = new HashMap[newSize];
                    LOGGER.info("New size added:==>" + cache.length);
                    for (int i = 0; i < oldSize; i++) {
                        cache[i] = oldData[i];
                    }
                    cache[fileCount] = readExternalPropertiesFile(file); // external method returning the file's properties as a HashMap
                    cacheHolder.put(value, cache[fileCount]);
                    keys = cache[fileCount].keySet();
                } else {
                    cache[fileCount] = readPropertiesFile(file);
                    cacheHolder.put(value, cache[fileCount]);
                    keys = cache[fileCount].keySet();
                    someVal = cache[fileCount];
                }
                fileCount = fileCount + 1;
            }
        }
    }
    Please let me know if any improvements are possible.
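    For comparison, a minimal sketch of the same idea without the manual array growth, keyed directly by the property file name (the class and method names here are made up for illustration, assuming standard java.util.Properties files):
         import java.io.File;
         import java.io.FileInputStream;
         import java.io.IOException;
         import java.util.Map;
         import java.util.Properties;
         import java.util.concurrent.ConcurrentHashMap;

         public class PropertyFileCache {
             // One Properties object cached per property file name
             private static final Map<String, Properties> CACHE = new ConcurrentHashMap<String, Properties>();

             public static Properties get(String dirPath, String fileName) throws IOException {
                 Properties cached = CACHE.get(fileName);
                 if (cached != null) {
                     return cached;
                 }
                 Properties props = new Properties();
                 try (FileInputStream in = new FileInputStream(new File(dirPath, fileName))) {
                     props.load(in); // read the property file once and keep it in memory
                 }
                 CACHE.put(fileName, props);
                 return props;
             }
         }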

  • SSD for Media Cache and Preview files ...bad idea?

    Hi guys!
    Today I killed my 3rd SSD. I used all of them to store media cache and preview files with Premiere Pro and After Effects CS5.5.
    The 1st was an OCZ Vertex 3 240GB, which I replaced with a new one. A few weeks later the 2nd OCZ Vertex 3 died.
    And now the Corsair Force 3 is gone. I don't worry because all of them are still in the warranty period, and I have backups of my projects.
    I'm just asking: maybe these SSDs are not stable for this kind of use, I mean frequent writing and reading of data?
    Maybe it's better to switch to a simple HDD (maybe 10000 rpm)?
    Or was I just the lucky guy who caught 3 defective SSDs in the last six months? :-(

    No, I don't think that you killed them; the OCZ Vertex 3 series, and many other SSDs using the same Sandforce controller, are having way too many issues where Adobe is not being used at all.
    Do check your firmware on these drives too; there have been lots of updates trying to make these seemingly seriously flawed drives work more reliably.
    Regards,
    Jim

  • I'm using the latest version of Firefox V29.0.1 Firefox cache settings no longer working for SWF files. can you help on

    SWF files (Adobe Flash files) are not being cached at the browser level in Firefox, but the same SWF files are cached by other browsers like Google Chrome and IE.
    When I look at the about:cache service information, I can see the SWF files are there in the disk cache device. Whenever I hit the same page, the SWF files are downloaded from the server rather than from the browser cache, yet the fetch count for each SWF file increases in the disk cache device. So it takes time to load the SWF files every time I hit the page.
    I'm requesting your help with this SWF file caching.

    Moderator Comment: Duplicate thread closed. Continue at [/questions/1000178]. -m

  • Using Windows Azure File Service for Symbols Server

    Hi,
    I am trying to understand how to use the Windows Azure File Service (http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx) for a Symbols Service.  I have not been able to find any documentation
    to help guide me with this specifically.  Most of the guides I have found discuss creating a VM and File Server in Azure.
    Thanks!

    Hi,
    Azure Files can only be used from an Azure VM within the same region as the storage account.
    It will work when you set up a symbol server for Azure VMs; if you are trying to set up a symbol server for external client machines, then this won't work.
    Regards
    Jambor

  • Hard Drive Setup for Source Files, Media Cache, Project, Render, etc...

    I am working to set up the following drives:
    C: SSD for OS and Programs
    D: RAID 0 (Three SATA II F3's) for Premiere Projects, where I render to, and Encore Projects
    E: RAID 0 (Two SATA III WD Caviar Blacks) for Source Files, media cache, and preview files (if I can get the Marvell controllers/SATA III to RAID, that is)
    External HD USB 3.0 for backing up projects
    Should I be doing this different?
    Does anyone think it would be better to use SSD drive(s) instead of HDDs?

    We're almost there Harm, thanks! I'm about to indicate this as being "Answered"
    More explicitly, are you suggesting it would look like this?
    C: SSD for:
    OS and Programs
    D: RAID 0 (Three SATA II F3's) for:
    Premiere Project
    Where I render projects to
    Encore Project
    Source Files (Per Harm's suggestion)
    E:  RAID 0 (Two SATA III WD Caviar Blacks) for:
    Media cache
    Preview files
    Pagefile
    Are two RAIDs more ideal? That is (provided the above configuration is correct), do the pagefile, media cache, and preview files need to be on a RAID?
    Should I partition a piece of my E and use that for the pagefile or pagefile, media cache, and preview files? If yes, which? and how much of a partition should I create?
    When setting the pagefile, should I set all the other drives to "none" (except the E of course)?
    Lastly, regarding my E: drive for media cache, preview files, and pagefile (assuming all of this is supposed to be on a different drive): would I be better off using an SSD for that? Or should I keep my E: drive as an HDD with RAID 0? I suppose it will depend on what's more important, read or write speed. If read speed is more important, then I can get an SSD for under $200 with really fast reads. If it's about writing, then my WDs in a RAID will be better. So, which is more important for these things, writing speed or reading speed? If the SSD would be a solution for the extra files, then I'll have this setup:
    C: SSD for:
    OS and Programs
    D: RAID 0 (Three SATA II F3's) for:
    Premiere Project
    Where I render projects to
    Encore Project
    Source Files (Per Harm's suggestion)
    E:  SSD for:
    Media cache
    Preview files
    Pagefile
    F:  One SATA III WD Caviar Black for:
    Extra Backups and stuff
    Thanks Harm, I'm about to have an unprecedented editing experience I think! Finally, after more than 7 years!

  • Is Azure File Service available for users who have an MSDN subscription

    Is Azure File Service available for users who have an MSDN subscription?

    Hi Mohit,
    I responded to your other post as well, please see the approach there:
    https://social.msdn.microsoft.com/Forums/azure/en-US/f789cbeb-b01b-4cf8-ac97-513340ae7a5c/azure-file-services-not-available-in-preview?forum=windowsazuredata
    Thanks,

  • Azure Management REST API for Azure Cache ?

    I can't find a RESTful Azure management API to create an Azure Cache. It looks like the only way to create an Azure Cache is via the Azure Portal?
    Max

    Yes, I think you are right.

  • How to create a cache for JPA Entities using an EJB

    Hello everybody! I have recently gotten started with JPA 2.0 (I use EclipseLink) and EJB 3.1, and I have a problem figuring out how best to implement a cache for my JPA entities using an EJB.
    In the following I try to describe my problem. I know it is a bit verbose, but I hope somebody will help me. (I highlighted the core of my problem, in case you want to first decide whether you can/want to help before spending another couple of minutes understanding the domain.)
    I have the following JPA Entities:
    @Entity
    class Genre {
        private String name;
        @OneToMany(mappedBy = "genre", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<Novel> novels;
    }

    @Entity
    class Novel {
        @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Genre genre;
        private String titleUnique;
        @OneToMany(mappedBy = "novel", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> editions;
    }

    @Entity
    class NovelEdition {
        private String publisherNameUnique;
        private String year;
        @ManyToOne(optional = false, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
        private Novel novel;
        @ManyToOne(optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Catalog appearsInCatalog;
    }

    @Entity
    class Catalog {
        private String name;
        @OneToMany(mappedBy = "appearsInCatalog", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> novelsInCatalog;
    }
    The idea is to have several Novels, each belonging to a specific Genre, for which more than one edition can exist (different publisher, year, etc.). For simplicity, a NovelEdition can belong to just one Catalog, and each Catalog is represented by a text file like this:
    FILE 1:
    Catalog: Name Of Catalog 1
    "Title of Novel 1", "Genre1 name","Publisher1 Name", 2009
    "Title of Novel 2", "Genre1 name","Pulisher2 Name", 2010
    FILE 2:
    Catalog: Name Of Catalog 2
    "Title of Novel 1", "Genre1 name","Publisher2 Name", 2011
    "Title of Novel 2", "Genre1 name","Pulisher1 Name", 2011
    Each entity has an associated Stateless EJB that acts as a DAO, using a transaction-scoped EntityManager. For example:
    @Stateless
    public class NovelDAO extends AbstractDAO<Novel> {
        @PersistenceContext(unitName = "XXX")
        private EntityManager em;

        protected EntityManager getEntityManager() {
            return em;
        }

        public NovelDAO() {
            super(Novel.class);
        }

        //NovelDAO specific methods
    }
    I am interested in the moment when the catalog files are parsed and the corresponding entities are built (I usually read a whole batch of Catalogs at a time).
    Since the parsing is a String-driven procedure, I don't want to repeat actions like novelDAO.getByName("Title of Novel 1"), so I would like to use a centralized cache for mappings of type String identifier -> Entity object.
    Currently I use 3 objects:
    1) The file parser, which does something like:
    final CatalogBuilder catalogBuilder = //JNDI Lookup
    //for each file:
    String catalogName = parseCatalogName(file);
    catalogBuilder.setCatalogName(catalogName);
    //For each novel edition
    String title= parseNovelTitle();
    String genre= parseGenre();
    catalogBuilder.addNovelEdition(title, genre, publisher, year);
    //End foreach
    catalogBuilder.build();
    2) The CatalogBuilder is a Stateful EJB which uses the Cache and gets re-initialized every time a new Catalog file is parsed and gets "removed" after a catalog is persisted.
    @Stateful
    public class CatalogBuilder {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private Cache cache;
        private Catalog catalog;

        @PostConstruct
        public void initialize() {
            catalog = new Catalog();
            catalog.setNovelsInCatalog(new ArrayList<NovelEdition>());
        }

        public void addNovelEdition(String title, String genreStr, String publisher, String year) {
            Genre genre = cache.findGenreCreateIfAbsent(genreStr); //##
            Novel novel = cache.findNovelCreateIfAbsent(title, genre); //##
            NovelEdition novEd = new NovelEdition();
            novEd.setNovel(novel);
            //novEd.set publisher, year, catalog
            catalog.getNovelsInCatalog().add(novEd);
        }

        public void setCatalogName(String name) {
            catalog.setName(name);
        }

        @Remove
        public void build() {
            em.merge(catalog);
        }
    }
    3) Finally, the problematic bean: Cache. For CatalogBuilder I used an EXTENDED persistence context (which I need, as the parser executes several successive transactions) together with a Stateful EJB; but in this case I am not really sure what I need. In fact, the cache:
    Should stay in memory until the parser is finished with its job, but not longer (it should not be a singleton), as the parsing is just a very particular activity which happens rarely.
    Should keep all of the entities in context, and should return managed entities from the methods marked with ##, otherwise the attempt to persist the catalog will fail (duplicated INSERTs).
    Should use the same persistence context as the CatalogBuilder.
    What I have now is :
    @Stateful
    public class Cache {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private sessionbean.GenreDAO genreDAO;
        //DAOs for other cached entities

        Map<String, Genre> genreName2Object = new TreeMap<String, Genre>();

        @PostConstruct
        public void initialize() {
            for (Genre g : genreDAO.findAll()) {
                genreName2Object.put(g.getName(), em.merge(g));
            }
        }

        public Genre findGenreCreateIfAbsent(String genreName) {
            if (genreName2Object.containsKey(genreName)) {
                return genreName2Object.get(genreName);
            }
            Genre g = new Genre();
            g.setName(genreName);
            g.setNovels(new ArrayList<Novel>());
            genreDAO.persist(g);
            genreName2Object.put(g.getName(), em.merge(g));
            return g;
        }
    }
    But honestly I couldn't find a solution which satisfies these 3 points at the same time. For example, using another stateful bean with an extended persistence context (PC) would work for the 1st parsed file, but I have no idea what should happen from the 2nd file on. Indeed, for the 1st file the PC will be created and propagated from CatalogBuilder to Cache, which will then use the same PC. But after build() returns, the PC of CatalogBuilder should (I guess) be removed and re-created during the successive parsing, although the PC of Cache should stay "alive": shouldn't an exception be thrown in this case? Another problem is what to do when the Cache bean is passivated. Currently I get the exception:
    "passivateEJB(), Exception caught ->
    java.io.IOException: java.io.IOException
    at com.sun.ejb.base.io.IOUtils.serializeObject(IOUtils.java:101)
    at com.sun.ejb.containers.util.cache.LruSessionCache.saveStateToStore(LruSessionCache.java:501)"
    Hence, I have no idea how to implement my cache. Can you please tell me how you would solve the problem?
    Many thanks!
    Bye

    Hi Chris,
    thanks for your reply!
    I've tried adding the following to persistence.xml (although I've read that EclipseLink uses an L2 cache by default):
    <shared-cache-mode>ALL</shared-cache-mode>
    Then I replaced the Cache bean with a stateless bean which has methods like
    Genre findGenreCreateIfAbsent(String genreName) {
        Genre genre = genreDAO.findByName(genreName);
        if (genre != null) {
            return genre;
        }
        genre = new Genre(); // build new genre object
        genre.setName(genreName);
        genreDAO.persist(genre);
        return genre;
    }
    As far as I understood, the shared cache should automatically store the genre and avoid querying the DB multiple times for the same genre, but unfortunately this is not the case: if I use a FINE logging level, I see really a lot of SELECT queries, which I didn't see with my "home-made" Cache...
    I am really confused.. :(
    Thanks again for helping + bye
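    One likely reason for all of those SELECTs: the EclipseLink shared (L2) cache is looked up by primary key, so a JPQL query by name still goes to the database unless the query results themselves are cached. A small, hedged sketch of enabling the query results cache (the query string and method name are illustrative, not taken from this thread):
         // Hedged sketch: without a query results cache, findByName-style JPQL
         // still issues SQL even when the Genre object is already in the L2 cache.
         public Genre findGenreByNameCached(javax.persistence.EntityManager em, String genreName) {
             javax.persistence.TypedQuery<Genre> q = em.createQuery(
                     "SELECT g FROM Genre g WHERE g.name = :name", Genre.class);
             q.setParameter("name", genreName);
             q.setHint("eclipselink.query-results-cache", "true"); // cache this query's results
             java.util.List<Genre> result = q.getResultList();
             return result.isEmpty() ? null : result.get(0);
         }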

  • Diagnostics not working in web role for Azure SDK 2.5.1

    I am working with Azure SDK 2.5.1, mainly with the newly designed diagnostics features. However, I found I cannot get it to run for my web role.
    So, I created a cloud service project, added a web role. Then, I appended one Trace message at the end of Application_Start in Global.asax.cs:
    Trace.TraceInformation("Application_Start end.");
    After that, I right clicked the WebRole and opened the Properties tab.
    In the diagnostics config window:
    General: I chose 'Custom plan', specified the storage account, and kept the 'Disk Quota in MB' at the default '4096'.
    Application Logs: 'Log level' switched to 'All'; others kept as default.
    The other tabs are left with the default config settings.
    After I deployed the project to the cloud, I found some unexpected things:
    There is no WADLogsTable in Table storage. That's very strange; if I use a Worker Role, it works as expected. So in the web role I just cannot find the Trace logging?
    For the performance counters, since I am using the default config with 8 counters, I can only see 8 entries in the WADPerformanceCountersTable table storage. My assumption was that, over time, more and more values of these 8 counters would be transferred to this table. But that just did not happen; after several hours it still had those 8 counter values.
    I thought the diagnostics for the Web Role had crashed or were not working at all. Meanwhile, I checked the log located at "C:\Logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.4.0.0\DiagnosticsPlugin.log" on the server; at the end of this file there is an exception saying the diagnostics process exited:
    DiagnosticsPlugin.exe Error: 0 : [4/25/2015 5:38:21 AM] System.ArgumentException: An item with the same key has already been added.
    at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
    at Microsoft.Azure.Plugins.Diagnostics.dll.PluginConfigurationSettingsProvider.LoadWadXMLConfig(String fullConfig)
    DiagnosticsPlugin.exe Error: 0 : [4/25/2015 5:38:21 AM] Failed to load configuration file
    DiagnosticsPlugin.exe Information: 0 : [4/25/2015 5:38:21 AM] DiagnosticPlugin.exe exit with code -105
    From MSDN,
    the code -105 means:
    The Diagnostics plugin cannot open the Diagnostics configuration file.
    This is an internal error that should only happen if the Diagnostics plugin is manually invoked, incorrectly, on the VM.
    It doesn't make sense; I did nothing to the VM.
    As I said above, I made only very tiny changes to the scaffold code generated by Visual Studio 2013. Did I do something wrong, or is it a bug in Azure SDK 2.5?
    By the way, it seems everything is OK for the Worker Role.

    Hi,
      This could be an internal issue; I do not have any update on this as of now. There was a similar issue due to a regression last month that has been resolved; however, I don't think this issue is related.
      Please follow the article below on how to enable Diagnostics and let us know if it works.
    http://azure.microsoft.com/en-in/documentation/articles/cloud-services-dotnet-diagnostics/
      I will let you know if I have any update on this issue from my side.
    Regards,
    Nithin Rathnakar.

  • Error ACLContainer: 65315 does NOT EXIST in  the Object Cache for parentID:

    Expert,
    I did the following steps to upgrade the OWB repository from 10g to 11g
    • Created a dummy workspace in the 11.2 repository
    • Created users in the destination environment
    • Run Repository Assistant against the 11.1 source database
    • Then selected “Export entire repository to a file” and selected the MDL (10g) file to import
    After it was 99% complete, I got the below error:
    Error ACLContainer: 65315 does NOT EXIST in  the Object Cache for parentID: 65314
    Please let me know the solution.
    Thanks,
    Balaa...

    Hi, I had the same error and worked on it for almost a week with no success, but there are a few workarounds for this; try them, they might work for you.
    Step 1> Run a health check on the OWB 10.2 environment (make sure you clone your OWB 10.2 environment and then do this health check; these checks are tricky).
    Refer to:
    Note 559542.1 Health Check of the Oracle Warehouse Builder 10.2 Metadata Repository
    This will give you info about your missing ACLs.
    Step 2> Download these two scripts: fixLostACLContainer.sql, fixAllACLContainers.sql
    please refer :
    Note 559542.1 Health Check of the Oracle Warehouse Builder 10.2 Metadata Repository
    OWB 10.2 - Internal ERROR: Can not find the ACL container for object (Doc ID 460411.1)
    Note 754763.1 Repository Cleanup Script for OWB 10.2 and OWB 11.1
    Note 460411.1 Opening Map Returns Cannot find ACL Containter for Object
    Note 1165068.1 Internal Error: Can Not Find The ACL Containter For for object:CMPPhysical Object
    These might resolve the ACL issue, but they did not work for me.
    If none of these work, then:
    Perform an export from the Design Center of OWB 10.2 and import through the Design Center of OWB 11.2 (the only remaining option).
    That worked for me.
    Varun

  • Use Redis Cache for Multiple Applications

    Hi,
    I did some searches and could not come up with a straightforward answer, so I am hoping someone here can clarify this for me.
    We have two different products that we have built in .NET that we host in Azure as websites. For each of these two products we have multiple clients. Each product and client pair has its own SQL database and its own website setup in Azure.
    We would like to start using Redis Cache for session state in these products. Do I need to:
    1) Create 1 Redis Cache and use it for all clients / products?
    2) Create 2 Redis Caches and use one per product (redis caches needed = # of products)?
    3) Create an individual Redis cache for each client / product pair (redis caches needed = # of products x # of clients)?

    You can do any of the above. It really depends on how much load you are expecting on the cache.
    For separation and load balancing, it might be better to have a cache per website.
    For a cache to be useful it should be close to the web tier, so ensure that you provision the cache in the same region as the website.
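    If a single cache is shared across products or clients (option 1), one common approach is to namespace the keys per product/client so that entries cannot collide. A minimal sketch using the Jedis Java client, where the host, access key, and key names are all hypothetical:
         import redis.clients.jedis.Jedis;

         public class SharedRedisCacheSketch {
             public static void main(String[] args) {
                 // Connect over SSL to the cache endpoint (hypothetical host and access key)
                 try (Jedis jedis = new Jedis("mycache.redis.cache.windows.net", 6380, true)) {
                     jedis.auth("your-access-key");

                     // Prefix keys per product and client so one cache can serve several apps
                     String key = "productA:client42:session:abc123";
                     jedis.setex(key, 1800, "serialized-session-state"); // 30-minute TTL
                     System.out.println(jedis.get(key));
                 }
             }
         }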
