Co-Loc versus Dedicated In-Role Cache - both available to all instances?

Trying to confirm that this is true. The only real difference between the co-located and dedicated in-role cache is that the co-located cache shares memory with the role, whereas the dedicated cache uses all of the role's memory. I'm assuming that even a co-located cache is synced across instances rather than kept per instance? Meaning, if I set up a co-located cache on my web role and that role has multiple instances, the cache will be the same for all instances (objects added to the cache from one instance would be available in all instances).
Dan

Hi,
We can see this in the Role Cache FAQ (Windows Azure Cache). Here is a snippet:
What is the difference between co-located and dedicated Caching topologies?
There are two main ways a role can host In-Role Cache: co-located and dedicated. In the co-located topology, the role that hosts In-Role Cache also hosts other web role or worker role functionality. The memory and resources of the role are shared between caching and non-caching application code and services. In the dedicated topology, which is supported for worker roles, the worker role only hosts caching. These Cache topologies differ primarily in the percentage of memory that is dedicated to Cache. For more information, see the topics on co-located Caching roles and dedicated Caching roles.
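In both topologies the In-Role Cache behaves as one distributed cache spread across the instances of the role that hosts caching, so an item put from one instance can be read from another. A minimal sketch of that usage is below; it assumes the In-Role Cache client assemblies are referenced and the default dataCacheClient configuration that the SDK adds to web.config, and the class and key names are only illustrative.
using Microsoft.ApplicationServer.Caching;

public static class CacheDemo
{
    // DataCacheFactory is expensive to create, so reuse a single instance per process.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static void Demo()
    {
        DataCache cache = Factory.GetDefaultCache();

        // Written from instance A...
        cache.Put("greeting", "hello from instance A");

        // ...and readable from instance B running the same code, because the named
        // cache is distributed across all cache-enabled instances of the role.
        string value = (string)cache.Get("greeting");
    }
}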
Best Regards

Similar Messages

  • Azure Dedicated Role Cache does not fire a notification to clients when calling the DataCache.Clear method

    Hi,
    I'm using the Azure dedicated role cache and have enabled the local cache and notifications.
    <localCache isEnabled="true" sync="NotificationBased" objectCount="20000" ttlValue="3600" />
    <clientNotification pollInterval="1" maxQueueLength="10000" />
    but when I call the DataCache.Clear method from one client, the notification is not sent to the other clients, and their locally cached content is still being used.
    Is this a bug in Azure SDK 2.5?

    Assuming you have correctly configured and enabled cache notifications: instead of using DataCache.Clear, have you tried clearing all regions in your cache? I am not sure whether DataCache.Clear triggers a cache notification at all.
    Your code should look something like this:
    foreach (string regionName in cache.GetSystemRegions())
    {
        cache.ClearRegion(regionName);
    }
    You can refer to the following link, which explains how to trigger cache notifications: https://msdn.microsoft.com/en-us/library/ee808091(v=azure.10).aspx
    Bhushan | Blog |
    LinkedIn | Twitter

  • R/3 Security roles versus ESS Security Roles?

    Hello Experts,
    I am not a security person, but we are in the process of testing ESS and are having some conflicts between a user's GUI (R/3) role and the ESS role!  For example, a user will not have access to Bank Information (Infotype 9) in the GUI, but will need access to edit this infotype through ESS on their own record.
    Our problem is how we set the roles up under this scenario. If this cannot be done, how have other companies handled this scenario?
    Any direction will be highly appreciated.
    ECC 6.0
    EP 7.0
    ESS 1.0
    Thanks for your time,
    Mike

    Hi Mike,
    My suggestion would be to make use of P_PERNR in your ESS role only - and not P_ORGIN or P_ORGINCON.
    We added all the info types that the ESS users are supposed to maintain or view in P_PERNR.
    We did eventually need to add display and matchcode search for info types 0000 - 0002 so that the ESS users could make use of the Who's Who functionality in order to search for employees across the organisation.
    Without info type 0009 in a P_ORGIN or P_ORGINCON auth object, the users will not be able to maintain in PA30.
    Hope this helps.
    Regards
    Lucille

  • Azure In-Role Caching doesn't start on the server with "Not running in a hosted service or the Development Fabric."

    I started to use Azure in-role caching and everything works great in the Azure compute emulator on my local machine, but not on the server. I've already dealt with some problems, like the lack of msshrtmi.dll on the server, but now I can't understand why I get this error:
    Not running in a hosted service or the Development Fabric.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: Not running in a hosted service or the Development Fabric.
    Source Error:
    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [InvalidOperationException: Not running in a hosted service or the Development Fabric.]
    Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitor.GetDefaultStartupInfoForCurrentRoleInstance() +535
    Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener..ctor() +34
    [ConfigurationErrorsException: Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.]
    System.Diagnostics.TraceUtils.GetRuntimeObject(String className, Type baseType, String initializeData) +1588
    System.Diagnostics.TypedElement.BaseGetRuntimeObject() +103
    System.Diagnostics.ListenerElement.GetRuntimeObject() +825
    System.Diagnostics.ListenerElementsCollection.GetRuntimeObject() +261
    System.Diagnostics.TraceInternal.get_Listeners() +256
    System.Diagnostics.Trace.get_Listeners() +79
    Microsoft.ApplicationServer.Caching.DataCacheServerLogManager..cctor() +97
    [TypeInitializationException: The type initializer for 'Microsoft.ApplicationServer.Caching.DataCacheServerLogManager' threw an exception.]
    Microsoft.ApplicationServer.Caching.DataCacheServerLogManager.ChangeLogLevel(TraceLevel traceLevel) +0
    Microsoft.ApplicationServer.Caching.ServiceConfigurationManager..cctor() +24
    [TypeInitializationException: The type initializer for 'Microsoft.ApplicationServer.Caching.ServiceConfigurationManager' threw an exception.]
    Microsoft.ApplicationServer.Caching.ServiceConfigurationManager.GetHostDefaults() +0
    Microsoft.ApplicationServer.Caching.OMCacheNodeProperties..ctor(IHostConfiguration props, Int32 maxNC, Boolean perfCounterRequired) +69
    Microsoft.ApplicationServer.Caching.LocalCacheStore..ctor(EvictionParametrs evictionParams) +50
    Microsoft.ApplicationServer.Caching.DataCacheFactory..ctor(DataCacheFactoryConfiguration configuration) +555
    Microsoft.Web.DistributedCache.DataCacheFactoryWrapper.CreateDataCacheFactoryFromConfiguration(DataCacheFactoryConfiguration config) +35
    Microsoft.Web.DistributedCache.CacheHelpers.RunCacheCreationHooks(CacheConnectingEventArgs fetchingEventArgs, IDataCacheFactory dataCacheFactory, Object sender, EventHandler`1 fetchingHandler, EventHandler`1 fetchedHandler) +7
    All I found about this error is that it occurs if you start the local Azure compute emulator with in-role caching and without administrative rights. But in the emulator everything works fine, and the problem occurs only after I publish to the staging environment (if I switch to production, the error remains).
    I use the cache for resolving routes like /username and /countryname, so available usernames and country names are cached and updated when they change in the database. I have a static class with a static DataCache object that is created on the first request to the cache. But even the home page doesn't start, so the error occurs before I try to create the cache object.
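    For reference, a lazily created static DataCache wrapper of the kind described above might look like the sketch below (the class name and the use of the default cache are assumptions for illustration; it relies on the standard dataCacheClient configuration being present):
    using System;
    using Microsoft.ApplicationServer.Caching;

    // Hypothetical helper showing the "static DataCache created on first use" pattern.
    public static class RouteCache
    {
        // Lazy<T> defers creation until the first request that actually needs the cache;
        // the factory and cache are then reused for the lifetime of the process.
        private static readonly Lazy<DataCache> _cache = new Lazy<DataCache>(() =>
        {
            var factory = new DataCacheFactory();   // reads the dataCacheClient config section
            return factory.GetDefaultCache();
        });

        public static DataCache Instance
        {
            get { return _cache.Value; }
        }
    }
    With this pattern the caching assemblies are only touched when RouteCache.Instance is first used, which can help narrow down whether a startup failure happens during cache initialization or earlier - in the trace above it is the DiagnosticMonitorTraceListener that fails first, before any cache code runs.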

    Hi Jambor, I don't have reference errors. I had them before, but I already fixed them by setting Copy Local = true on a few references.
    I receive the error only on the Azure hosted service. Locally everything works great: the app starts, the cache works, and I can see data in the cache. Yes, locally the cloud project is set as the startup project and VS runs as administrator.
    I've added the TraceListener creation code you suggested:
    protected void Application_Start(object sender, EventArgs e)
    {
        System.Diagnostics.Trace.Listeners.Add(new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener());
    }
    and the error changed a little. As far as I can see, the problem is the same, but it is now caused when creating the TraceListener from Application_Start instead of when initializing the cache:
    Not running in a hosted service or the Development Fabric.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. 
    Exception Details: System.InvalidOperationException: Not running in a hosted service or the Development Fabric.
    Source Error: 
    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace: 
    [InvalidOperationException: Not running in a hosted service or the Development Fabric.]
    Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitor.GetDefaultStartupInfoForCurrentRoleInstance() +518
    Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener..ctor() +34
    [ConfigurationErrorsException: Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.]
    System.Diagnostics.TraceUtils.GetRuntimeObject(String className, Type baseType, String initializeData) +1588
    System.Diagnostics.TypedElement.BaseGetRuntimeObject() +103
    System.Diagnostics.ListenerElement.GetRuntimeObject() +825
    System.Diagnostics.ListenerElementsCollection.GetRuntimeObject() +261
    System.Diagnostics.TraceInternal.get_Listeners() +256
    System.Diagnostics.Trace.get_Listeners() +79
    sub2o.MvcApplication.Application_Start() +497
    [HttpException (0x80004005): Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.]
    System.Web.HttpApplicationFactory.EnsureAppStartCalledForIntegratedMode(HttpContext context, HttpApplication app) +558
    System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +179
    System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +322
    System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +384
    System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +397
    [HttpException (0x80004005): Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.]
    System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +630
    System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +159
    System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +763

  • Cache does not show all config items

    SXI_CACHE is not showing all my config items. 
    When looking at SXI_CACHE, I noticed that only the config items pertaining to actual SAP services (Business Systems) seem to show up. Is this normal?  I've refreshed the cache (both delta and full).
    Also, when I double-click on Services, all the services show up, but the ones that are not actual SAP systems have an 'X' in the 'Flag' column ("Event has occurred").  What does that mean?
    Background:
    We are in the prototype stage; I am the only one in the box. I created a simple File-BAPI scenario just to demonstrate that things worked.  For the last couple of months, the comm channel was set to Inactive.  I attempted to reactivate the comm channel to get back into it, and it's not working.  I've double-checked everything I can think of, from the obvious (is there a file to be pulled) to the not-so-obvious (was the scenario changed by someone else in my absence).
    Also, there are no sender agreements listed in SXI_CACHE?
    Thanks in advance,
    Brian

    Actually, I wasn't using Business Service, I was using Business Systems from the SLD as Services without Party.
    However, I just created a Business Service, filled in the Inbound and Outbound Interfaces, created Comm Channels, RD, ID, SA, RA, and activated all - still no luck.  This time, some of the pieces DO show up in the cache.  The ID, RA, and the Receiver CC show up in the cache, the RD, SA, and the Sender CC do not.
    This has got to be a cache problem.  We are using a VERY old and slow machine to run XI on.  Each time I activate something in the DR or CD, I have to go to SXI_CACHE and Start Delta Refresh.  I'm thinking there may be a timeout setting or something.  What's frustrating is that it used to work just fine.
    Any other cache related ideas?
    ***EDIT - just rechecked - getting messages from newly created Business Service, but they are failing in mapping....I guess I need to delete and recreate everything and try again.  Maybe that will clear up the cache issue....
    Message was edited by: Brian Vanderwiel

  • Is there a way to turn off using RAM as an intermediary when transferring files between 2 internal HDDs? (write cache is off on all drives)

    Right forum? First time here; none of the options seemed perfect, so I guess this applies to 'setup.' I've tried to describe verbosely what happens, from the start of a file transfer to the completion of writing the file to the 2nd hard drive.
    Win 7 64-bit Home - i2600k 3.4 GHz - 8 GB of RAM - 2 HDDs, 1 SSD
    I have this problem when transferring files between 2 internal hard drives. One is unhealthy (slow write speeds, but I'm not looking for advice to replace it because it serves its unimportant purpose), so the unhealthy drive drops into PIO mode - that's acceptable.
    However, a little over 1 GB of any file transfer is cached in RAM during the transfer. After the file transfer window closes, indicating the transfer is "complete" (HA!), it still has 1 GB to write from RAM, which takes about 30 minutes. This would not be a problem if it did not also earmark an additional 5 GB of RAM (never in use), leaving 1 GB or less 'free' for programs. This needlessly causes my PC to be sluggish - more so than a typical file transfer between 2 healthy drives. I have Windows write caching turned off on all drives, so this is a different setting I can't figure out, nor find after 2 hours of Google searches.
    Info from Task Manager and Resource Monitor:
    Idle estimates: total 8175, cached 532, available 7218, free 6771, and the graph in Task Manager shows about 1 GB of memory in use.
    At the start of a file transfer: 8175, 2661, 6070, 4511 free, and ~2 GB of RAM used in the graph. No problems yet.
    However, as the transfer goes on, the free RAM value drops to less than 1 GB (1 GB normal + 1 GB temporary transfer cache + 5 GB of unused earmarked space = ~7 GB), the cached value increases, but the amount of used RAM remains relatively unchanged (2 GB during the transfer, slowly dropping to idle values as the remaining bits are written to the 2nd hard drive). The free value is even slower to return to idle norms after the transfer finishes writing data to the 2nd hard drive from RAM. So, it's earmarking an additional 5 GB of RAM that is completely unused during this process.  *****This is my problem*****
    Is there any way to turn this function off or limit its maximum size? In addition to sluggishness, it poses a risk to data integrity from system errors/power loss, and it's difficult to discern the actual time of transfer completion, which makes it difficult to know when it's safe to shut down my PC (any data left in RAM is lost, and the file transfer is ruined - as of now I have to use Resmon and look through what's writing to disk 2 -> sometimes it's easy to forget about it when the transfer window closed 20-30 minutes ago and the file is still in the process of being written to the 2nd disk).
    Any solution would be nice, and a little extra info, like whether it can be applied to only 1 hard drive, would be excellent.

    Thanks for the reply.
    (Although I have an undergrad degree in computers, it's been 15 years and my vocabulary is terrible, so I will try my best. Keep an open mind; it's not my occupation, so I rarely have to communicate ideas regarding PCs.)
    It operates the same way regardless of whether the write-cache option is enabled. It's not using the 5 GB for a read/write buffer - it's merely bloating standby memory during the transfer process at a rate similar to the write speed of the destination (in my situation).
    At this point I don't expect a solution. I've tried to look through lists of known memory leaks, but I don't have the vocabulary to be 100% certain this problem is not documented. As of now it can't affect many people - NASes on low-bandwidth networks, USB-attached storage, etc. Do bugs get forwarded from these forums? Below I can outline the consistent and repeatable nature, not only on my PC but on others' PCs as well (from a 2012 forum post I found).
    I've been testing and paying a little more attention, and I can describe it better:
    Just the Facts
    Resmon memory info:
    "In Use" stays consistent at ~1 GB (the idle amount, and roughly the same when nothing else is running during a file transfer).
    "Modified" contains the file transfer data (metadata?), which remains consistent at a little over 1 GB (minor fluctuations as it works as a buffer). After the file transfer window closes, "Modified" slowly diminishes as it finishes lazy writing (I believe that's the term). I forget the idle PC amount, but after the transfer this is only 58 MB.
    "Standby": as the transfer starts it immediately rises to ~2 GB. I'm sure this initial jump is normal. However, with a large enough transfer it will bloat to well over 5 GB, increasing at a consistent rate during the entire transfer process. This is the crux of the matter.
    "Free" will drop as far as 35-50 megabytes over time.
    As the transfer starts, "Standby" increases by an immediate chunk, then at a slow rate throughout the entire transfer process (~1 MB/s). Once metadata is no longer being written to RAM, the "Modified" RAM slowly diminishes (at ~500 KB/s, matching Resmon disk activity for that file write) as it finishes lazy writing. After the file is 100% written to the destination drive, "Standby" remains a bloated figure long after.
    A 1.4 GB transfer filled 3677 MB of "Standby" by the time writing finished and modified RAM cleared. After 20 minutes, it's still bloated at 3696 MB. After 30-40 minutes it's down to 2300 MB - this is about what it jumps to immediately when a transfer starts - and it now remains at this level until I reboot.
    I notice the "Standby" memory is available to programs, but they do not operate well. E.g. a 480p trailer on IMDB.com will stop and go every 2-3 seconds (the stream buffers fine/fast) - this would be during the worst-case scenario of 35-50 MB of "Free" RAM. My PC isn't and never was the latest and greatest, but choppy video never happens even with 1 or 2 resource hogs running (video processing/encoding, games, etc.).
    Conjecture Below
    I think it's a problem when one device is significantly slower at writing than the source device - this is the factor that I share with others having this problem. When data is written to modified RAM and then sent to the destination, standby memory is expanded until it completely or nearly fills all available RAM - if the transfer size is large enough relative to how slow the write speed of the destination device is. Otherwise it fills up as much as the file size / write speed issue allows. The term "memory leak" is used below and may not technically be one, but it's an apt description in layman's terms.
    I saw a similar post in these forums (link at the end). My problem is repeatable and consistent with others' reports. I wasn't sure if I should revive it with a reply; some of these online message boards (maybe not this one) are extremely picky and sensitive, lol - the world will end if an old thread revives, even for a good reason.
    I can answer some of the ancillary issues. One person (Friday, September 21, 2012 8:33 PM) mentions not being able to shut down - I assume he means stuck on the shutdown screen. This is because lazy writing has not completed; his NAS write speed is significantly slower than reading from the source, and the last bits of data left in RAM still need to be written to the destination. Shutdown will stall for as long as needed until the data finishes writing to the destination, to prevent data loss.
    Another person (Monday, September 24, 2012 6:31 PM) mentions the rate of the leak, but the rate is more likely a function of the read speed from the source relative to the write speed of the destination, which explains why my standby expands at closer to a 1:1 ratio compared to his 1:100 (he said 10 MB per 1000 MB).
    We all have the same exact results/behaviour, but slightly different rates of bloating/leaking. As the file is written from RAM to the destination, standby increases during this time - not a problem if read and write speeds are roughly equal (unless you're transferring terabytes, then I bet the problem could rear its head). When writing lags, it gives standby RAM the opportunity to bloat, with no maximum except the amount of installed RAM. The slower the write speed, the worse the problem.
    The reply on Wednesday, September 26, 2012 3:04 AM has before and after pictures of exactly what I described in "Just the Facts" - specifically the Resmon image showing the Memory tab.
    The KB2647452 hotfix seems to do some weird things relative to this problem. In the posts that mention they've applied it, after the file transfer completes it looks like the "standby" bloat is now "in use" bloat. As per the info from Tuesday, October 09, 2012 10:36 PM - bobmtl applies the patch in an earlier post; compare the images from earlier posts to his post on this date. It seems like a worse problem. Also, his process list indicates it's very unlikely they add up to the ~4 GB shown in the colour-coded bar. Where are the extra GBs coming from? Likely the same culprit that filled up "standby" memory for me and others. Relative to this problem, it looks like this patch merely recategorizes the bloat - it just changes the title it falls under.
    Link:
    https://social.technet.microsoft.com/Forums/windows/en-US/955b600f-976d-4c09-86dc-2ff19e726baf/memory-leak-in-windows-7-64bit-when-writing-to-a-network-shared-disk?forum=w7itpronetworking

  • Since the upgrade to CC 2014 via Creative Cloud, I'm ending up with both versions of all my Adobe CC products, and I lost my plugins in Photoshop CC 2014.

    Hello
    Since the upgrade to CC 2014 via Creative Cloud, I'm ending up with both versions of all my Adobe CC products, and I lost my plugins in Photoshop CC 2014.
    How can I fix this ?

    I did it by going to my old Photoshop CC Plugins folder, copying it, and then pasting it into the Photoshop CC 2014 folder. All mine came back and worked.

  • I have two iPhones. I lost one, went into Find My iPhone, and by mistake blocked both and deleted all the data on both. How can I get them back on? I'm trying to restore them, but it's not working...

    I have two iPhones. I lost one, went into Find My iPhone, and by mistake blocked both and deleted all the data on both. How can I get them back on? I'm trying to restore them, but it's not working...

    Hi Roger
    Thank you for your reply.
    My original feed is: http://casa-egypt.com/feed/
    However, because I modified the feed http://feeds.feedburner.com/imananddinasbroadcast and nothing changed, I redirected it to another feed and then I deleted this feed.
    Is there any way to change the feed in itunes? The only feed I have now is  http://feeds.feedburner.com/CasaEgyptStation
    I tried to restore the feed http://feeds.feedburner.com/imananddinasbroadcast but feedburner refused.
    I know that I messed things up, but I still have hope of working things out.
    Thanks in advance.
    Dina
    Message was edited by: dinadik

  • Do the Cache, CacheStore, and CacheLoader all need to run in the same JVM?

    Do the Cache, CacheStore, and CacheLoader all need to run in the same JVM?
    Any help is appreciated.
    Thanks.

    They can be in different DPs with no problem.
    Now a question:
    1 - Is MOH multicast or unicast? (Multicast MOH ONLY supports G.711, so that could explain why changing the DP fixed the problem.)
    [You can create a DP/region for MOH, if multicast, to talk G.711 to everybody, and the problem will get fixed.]
    Please kudos/rate if this helps!

  • Security Scopes: All instances of the objects that are related to the assigned security roles greyed out

    So the guy who built our SCCM server is no longer with the company and his AD account no longer exists.  I noticed in SCCM, however, that for his account the "All instances of the objects that are related to the assigned security roles" option is selected; it is greyed out for everyone else.
    This option is the one found under Administration/Security/Administrative Users select the user and open properties then select the Security Scopes tab.
    Is there a way we can provide another user this same level of access when we can no longer access it through the original build account?
    We already looked into tombstone resurrection of his account; that's a no-go.
    

    Hi,
    I recommend you rebuild SCCM or open a case with Microsoft.

  • How can I deploy the setting of clear cache on exit for all users?

    How can I deploy the setting of clear cache on exit for all users?

    Note that Firefox disables the disk cache if you use "Clear history when Firefox closes" to clear the cache (see about:cache). You can either disable the disk cache via its related pref or set the prefs related to clearing this data, but then other items that have a check-mark by default are cleared as well:
    *browser.cache.disk.enable
    *privacy.clearOnShutdown.cache
    *privacy.sanitize.sanitizeOnShutdown
    You can use a mozilla.cfg file in the Firefox program folder to lock prefs or specify new (default) values.
    Place a local-settings.js file in the defaults\pref folder, where the channel-prefs.js file is also located, to specify that mozilla.cfg should be used:
    pref("general.config.filename", "mozilla.cfg");
    These functions can be used in the mozilla.cfg file:
    defaultPref(); // set new default value
    pref(); // set pref, but allow changes in current session
    lockPref(); // lock pref, disallow changes
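    As a rough example, a mozilla.cfg that forces the cache to be cleared on exit could look like the sketch below (which prefs you lock is an assumption to adjust to your own policy; note that the first line of mozilla.cfg must be a comment):
    // mozilla.cfg
    lockPref("privacy.sanitize.sanitizeOnShutdown", true);
    lockPref("privacy.clearOnShutdown.cache", true);
    // or disable the disk cache entirely instead:
    // lockPref("browser.cache.disk.enable", false);
    By default mozilla.cfg is expected to be byte-shifted; adding pref("general.config.obscure_value", 0); to local-settings.js lets you keep it as plain text.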
    See also:
    *http://kb.mozillazine.org/Locking_preferences
    *http://mike.kaply.com/2012/03/16/customizing-firefox-autoconfig-files/
    *http://mike.kaply.com/2014/01/08/can-firefox-do-this/

  • Business Role Cache

    Hi
    I have added and removed some business roles for a particular user, but when that person logs into the CRM UI (CRM 7.0), the old business role is still being displayed.
    Please help me in resolving this.
    Regards
    Hitsm

    Hi,
    have you added the business role directly to the user in SM04 with parameter CRM_UI_PROFILE?
    Or have you added the business role to the org-unit where the user is assigned in PPOMA_CRM?
    Please check both points.
    As far as I know there isn't a cache for business role assignments.
    If this does not help check if your new business role has been transported correctly.
    Best regards
    Manfred

  • Shared versus dedicated connections and the fragmentation of shared pool memory

    Hi,
    I have an old Oracle 8.1.7 database server.
    I have a legacy application with no source code. This application doesn't use memory efficiently (no bind variables, etc.), i.e. memory becomes fragmented.
    I know that there are two ways to connect to the database (dedicated and shared).
    Based on this, I want to know which of the two options creates more fragmentation. I know that the recommendation is to use dedicated connections, but I'm not sure if this recommendation is applicable in this particular environment.
    Thanks in advance.

    Whether you use shared or dedicated connections makes no difference for fragmentation in the shared pool. Whether you hard parse or do not hard parse does matter.
    Measures you can take:
    - make sure often-used packages like dbms_standard are pinned in the shared pool using a startup trigger
    - set session_cached_cursors to 50 or 100; this will reduce parsing.
    Sybrand Bakker
    Senior Oracle DBA

  • WPA versus WPA2, must you support both?

    Is it generally a requirement to support both WPA and WPA2 for enterprise?
    Sent from Cisco Technical Support Android App

    I would say no; I might suggest supporting one or the other. Some clients, when seeing both supported on the same WLAN, will have issues with negotiation.
    Sent from Cisco Technical Support iPhone App

  • Roles missing in available role list at import role option

    We have SAP BOXI 3.1 and the SAP Integration Kit, both with SP2.
    We configured SAP authentication in the CMC.
    The entitlement system is configured with a user that has no restrictions in SAP (an SAP_ALL user).
    We have approximately 1400 roles in our BW client.
    In the available roles list on the import roles tab we can see only 220-230 roles.
    The list appears random.
    When we configure another BW client as a different entitlement system,
    we receive approximately 280 roles out of 1380.
    It looks like we are missing some buffer size configuration.
    Any ideas?
    thanks.

    Hi Ingo,
    Thanks a lot.
    Yes, the user in the CMC has the SAP_ALL credential and is allowed to see all BW roles.
    The missing roles are indeed those that are not assigned to any user.
