Distributed cache and Windows AppFabric

I'm having some issues with the Distributed Cache and followed these steps to remove the service instance:
Run Get-SPServiceInstance to find the GUID (in the Id property) of the Distributed Cache service instance that is causing the issue, then:
$s = Get-SPServiceInstance GUID
$s.Delete()
It deletes fine, but when I try to add it back with:
Add-SPDistributedCacheServiceInstance
I get this:
Add-SPDistributedCacheServiceInstance : Could not load file or assembly 'Microsoft.ApplicationServer.Caching.Configuration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
How do I resolve this?

Hi JmATK,
Regarding this issue, we don't recommend deleting a service instance without first stopping the service gracefully, because there may be data or state still shared between one component and another.
The recommendation from Stacy is good. If the issue is a zombie process causing the service to hang or become unresponsive, you may need to reset the process by re-attaching the database/farm.
Best regards.
Victoria
TechNet Community Support

Similar Messages

  • Cache config for distributed cache and TCP*Extend

    Hi,
    I want to use distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme. I want to use a distributed cache along with a cache-store. The configuration I used for my scheme was
    <distributed-scheme>
      <scheme-name>MyScheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <class-scheme>
              <class-name>com.tangosol.util.ObservableHashMap</class-name>
            </class-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>MyCacheStore</class-name>
            </class-scheme>
            <remote-cache-scheme>
              <scheme-ref>default-scheme</scheme-ref>
            </remote-cache-scheme>
          </cachestore-scheme>
          <rollback-cachestore-failures>true</rollback-cachestore-failures>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <remote-cache-scheme>
      <scheme-name>default-scheme</scheme-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>XYZ</address>
              <port>9909</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
    I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly so that my distributed cache becomes part of the same cluster that all the other caches (which use the default scheme) are joined to. Currently, this isn't happening.
    Thanks.
    RG

    Hi,
    Is it that I need to define my distributed scheme with the CacheStore in the server-coherence-cache-config.xml and then on the client side use remote cache scheme to connect to get my distributed cache?
    Thanks,
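    Answering the follow-up above in general terms: yes, the usual Coherence*Extend pattern is to define the distributed scheme (with its cache store) plus a proxy-scheme in the cache server's configuration, and to map the same cache name to a remote-cache-scheme in the client's configuration. A minimal sketch; the cache name, address/port, and class names are placeholders, not taken from the original post:

```xml
<!-- server-side cache configuration (storage-enabled node) -->
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>MyCache</cache-name>
      <scheme-name>MyScheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>MyScheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>MyCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- accept Extend connections from clients -->
    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>XYZ</address>
            <port>9909</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
```

    On the client side, the same cache name ("MyCache") maps to a remote-cache-scheme pointing at that address/port, so the client never runs the cache store; it executes on the storage-enabled cluster members.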

  • Distributed cache and MapListener

    Do the MapListeners receive all events on a distributed cache, when the cache is updated on a cluster that is different of the cluster on which the MapListener is located?
    I ask this because, I've been testing the following configuration:
    A distributed cache started on 2 machines.
    When listening to an event on a cache (using a getCache(String,ClassPath) to get the cache, and addMapListener() to connect my implementation of the MapListener to the cache), nothing is received when the other node is updated.
    Am I misusing MapListeners?

    Sorry. I had been testing using an inappropriate configuration (-Dtangosol.coherence.ttl=0).
    Therefore the two machines could not see each other.
    The MapListener works as expected.
    Pedro

  • How can i configure Distributed cache servers and front-end servers for Streamlined topology in share point 2013??

    My question is regarding SharePoint 2013 farm topology. If I want to go with the streamlined topology and have 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + clustered SQL Server, how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows Server 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in advance!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except for the Foundation Web service, for ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Query from Distributed Cache

    Hi
    I am a newbie to Oracle Coherence, trying to get hands-on experience by running an example (coherence-example-distributedload.zip) (Coherence GE 3.6.1). I am running two instances of the server. After this I ran "load.cmd" to distribute data across the two server nodes - I can see that the data is partitioned across the server instances.
    Now I run another instance (in another JVM) of a program which tries to join the distributed cache and query the data loaded on the server instances. I see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?
         NamedCache nNamedCache = CacheFactory.getCache("example-distributed");
         Filter eEqualsFilter = new GreaterFilter("getLocId", "1000");
         Set keySet = nNamedCache.keySet(eEqualsFilter);
    I see here that keySet has no records. Can you please help?
    Thanks
    sunder

    I got this problem sorted out - the problem was in cache-config.xml. The correct one looks as below.
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache1</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>DBCacheLoaderScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>DBCache-eviction</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.test.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>locations</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <cachestore-timeout>6000</cachestore-timeout>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <thread-count>10</thread-count>
      <autostart>true</autostart>
    </distributed-scheme>
    <invocation-scheme>
      <scheme-name>example-invocation</scheme-name>
      <service-name>InvocationService1</service-name>
      <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
    </invocation-scheme>
    I had missed the <class-scheme> element inside the <cachestore-scheme> of the <read-write-backing-map-scheme>.
    Thanks
    sunder
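    A side note on the query snippet above: GreaterFilter compares the extracted property using compareTo, so if getLocId returns a String the comparison is lexicographic, not numeric. A stdlib-only sketch of the difference (the id values here are made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterSemantics {
    public static void main(String[] args) {
        List<String> ids = List.of("999", "1500", "20000");

        // Lexicographic comparison, as GreaterFilter would do for String properties:
        // "999" > "1000" because '9' > '1' at the first character.
        List<String> lexGreater = ids.stream()
                .filter(id -> id.compareTo("1000") > 0)
                .collect(Collectors.toList());

        // Numeric comparison, as GreaterFilter would do for int/Integer properties.
        List<String> numGreater = ids.stream()
                .filter(id -> Integer.parseInt(id) > 1000)
                .collect(Collectors.toList());

        System.out.println(lexGreater); // [999, 1500, 20000]
        System.out.println(numGreater); // [1500, 20000]
    }
}
```

    The same concern applies with Coherence itself: extract a numeric property (or store numeric values) if you want numeric ordering in filters.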

  • Object locking in Distributed cache

    Hi,
         I have gone through some of the posts on the forum regarding the locking issues.
         Thread at http://www.tangosol.net/forums/thread.jspa?messageID=3416 specifies that
         ..locks block locks, not gets or puts. If one member locks a key, it prevents another member from locking the key. It does not prevent the other member from getting/putting that key.
         What exactly do we mean by the above statement?
         I'm using distributed cache and would like to lock an object before "put" or "remove" operations. But I want "read" operation to be without locks. Now my questions are,
         1) In a distributed cache setup, if I try to obtain a lock before put or remove and discover that i cannot obtain it (i.e false is returned) because someone else has locked it, then how do I discover when the other entity releases lock and when should i retry for the lock?
         2) Again, if i lock USING "LOCK(OBJECT KEY)" method, can i be sure that no other cluster node would be writing/reading to that node until I release that lock?
          3) The first post in the thread http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588 suggests that in a distributed setup locks have no effect on other cluster nodes. But I want locks to be valid across the cluster. If an item is locked, then no one else should be able to transact with it. How do I get that?
         Regards,
         Mohnish

    Hi Mohnish,
         >> 1) In a distributed cache setup, if I try to obtain
         >> a lock before put or remove and discover that i cannot
         >> obtain it (i.e false is returned) because someone else
         >> has locked it, then how do I discover when the other
         >> entity releases lock and when should i retry for the
         >> lock?
         You may try to acquire a lock (waiting indefinitely for lock acquisition) by calling <tt>cache.lock(key, -1)</tt>; you may try for a specified time period by calling <tt>cache.lock(key, cMillis)</tt>. With either of these approaches, your thread will block until the other thread releases the lock (or the timeout is reached). In either case, if the other node releases its lock your lock request will complete immediately and your thread will resume execution.
         >> 2) Again, if i lock USING "LOCK(OBJECT KEY)" method,
         >> can i be sure that no other cluster node would be
         >> writing/reading to that node until I release that lock?
         If you want to prevent other threads from writing/reading a cache entry, you must ensure that those other threads lock the key prior to writing/reading it. If they do not lock it prior to writing, they may have dirty writes (overwriting your write or vice versa); if they do not lock prior to reading, they may have dirty reads (having the underlying data change after they've read it).
         >> 3) The first post in the thread
         http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588
         >> suggests that in distributed setup locks are of no
         >> effect to other cluster nodes. But i want locks to be
         >> valid across cluster. If item locked, then no one else
          >> should be able to transact with it. How do I get that?
          The first post in that thread states that if the second thread doesn't lock, then it will overwrite the first thread (even if the first thread has locked). However, there is an inconsistency in that the Replicated cache topology has a stronger memory model than the Distributed/Near topologies. In the Replicated topology, locks not only block out locks from other nodes, they also prevent puts from other nodes.
         Jon Purdy
         Tangosol, Inc.
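    The locking discipline Jon describes (acquire with a timeout, always release in a finally block) mirrors java.util.concurrent. A stdlib-only sketch of the timeout-then-retry pattern; the ReentrantLock here merely stands in for Coherence's cache.lock(key, cMillis)/cache.unlock(key), which this example does not call:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockPattern {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(); // stands in for cache.lock(key, cMillis)

        // Try for up to 2 seconds; cache.lock(key, 2000) behaves analogously,
        // and cache.lock(key, -1) waits indefinitely.
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // critical section: put/remove the entry here
                System.out.println("lock acquired, performing update");
            } finally {
                lock.unlock(); // always release, mirroring cache.unlock(key)
            }
        } else {
            // timed out: another thread/node still holds the lock; retry or back off
            System.out.println("could not acquire lock");
        }
    }
}
```

    The key point from the thread carries over: this only protects entries if every writer (and, for clean reads, every reader) follows the same lock-before-access convention.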

  • Different distributed caches within the cluster

    Hi,
    i've three machines n1 , n2 and n3 respectively that host tangosol. 2 of them act as the primary distributed cache and the third one acts as the secondary cache. i also have weblogic running on n1 and based on some requests pumps data on to the distributed cache on n1 and n2. i've a listener configured on n1 and n2 and on the entry deleted event i would like to populate tangosol distributed service running on n3. all the 3 nodes are within the same cluster.
    i would like to ensure that the data directly coming from weblogic should only be distributed across n1 and n2 and NOT n3. for e.g. i do not start an instance of tangosol on node n3. and an object gets pruned from either n1 or n2. so ideally i should get a storage not configured exception which does not happen.
    The point is that the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, tangosol populates the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    my next step was to define the Dist:n3 scheme on n1 and n2 with local storage false and have a similar config file on n3 with local-storage for Dist:n3 as true and local storage for the primary cache as false.
    can i configure local-storage specific to a cache rather than to a node.
    i also have an EJB deployed on weblogic that also entertains a getData request. i.e. this ejb will also check the primary cache and the secondary cache for data. i would have the statement
    NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    >> i've three machines n1, n2 and n3 respectively that host tangosol.
    >> 2 of them act as the primary distributed cache and the third one
    >> acts as the secondary cache.
    First, I am curious as to the requirements that drive this configuration setup.
    >> i would like to ensure that the data directly coming from weblogic
    >> should only be distributed across n1 and n2 and NOT n3. [...]
    >> can i configure local-storage specific to a cache rather than to a node?
    >> [...] i would have the statement NamedCache n3 = CacheFactory.getCache("n3")
    >> in the bean as well.
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service by service basis (i.e. distributed-scheme/local-storage).
    Later,
    Rob Misek
    Tangosol, Inc.
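    As a sketch of Rob's suggestion, the two caches can live on separate services, each with its own local-storage flag; the scheme names, service names, and system-property names below are placeholders, not from the original posts:

```xml
<!-- primary data: storage enabled on n1/n2, disabled on n3 -->
<distributed-scheme>
  <scheme-name>primary-scheme</scheme-name>
  <service-name>PrimaryDistributedCache</service-name>
  <local-storage system-property="primary.localstorage">true</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

<!-- secondary "Dist:n3" data: storage enabled only on n3 -->
<distributed-scheme>
  <scheme-name>secondary-scheme</scheme-name>
  <service-name>SecondaryDistributedCache</service-name>
  <local-storage system-property="secondary.localstorage">false</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

    n1 and n2 would then start with -Dprimary.localstorage=true -Dsecondary.localstorage=false, and n3 with the values reversed, so each service stores data only on the intended nodes.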

  • Foundation 2013 Farm and Distributed Cache settings

    We are on a 3-tier farm - 1 WFE + 1 APP + 1 SQL - and have had many issues with AppFabric and Distributed Cache, plus an additional issue with noderunner/Search Services. Memory and CPU are running very high. I read that we shouldn't be running Search and Distributed Cache on the same server, nor using a WFE as a cache host. I don't have the budget to add another server to my environment.
    I found an article (IderaWP_CachingFormSharePointPerformance.pdf) saying "To make use of SharePoint's caching capabilities requires a Server version of the platform." because it requires the publishing feature, which Foundation doesn't have.
    So, I removed Distributed Cache (using PowerShell) from my deployment and disabled AppFabric. This resolved 90% of the server errors, but performance didn't improve. Now not only am I getting errors on Central Admin (it expects Distributed Cache), but I'm also seeing disk read operations of 4000 ms.
    Questions:
    1) Should I enable AppFab and disable cache?
    2) Does Foundation support Dist Cache?  Do I need to run Distributed Cache?
    3) If so, can I run with just 1 cache host? If I shouldn't run it on a WFE or on an App server with Search, do I have to stop Search altogether? What happens with 2-tier farms out there?
    4) Reading through the labyrinth of links on TechNet and MSDN on the subject, most of them says "Applies to SharePoint Server".
    5) Anyone out there on a Foundation 2013 production environment that could share your experience?
    Thanks in advance for any help with this!
    Monica
    Monica

    That article is referring to BlobCache, not Distributed Cache. BlobCache requires Publishing, hence Server, but DistributedCache is required on all SharePoint 2013 farms, regardless of edition.
    I would leave your DistCache on the WFE, given the App Server likely runs Search. Make sure you install AppFabric CU5 and make the changes as noted in the KB for AppFabric CU3.
    You'll need to separately investigate your disk performance issues. Could be poor disk layout, under spec'ed disks, and so on. A detail into the disks that support SharePoint would be valuable (type, kind, RPM if applicable, LUNs in place, etc.).
    Trevor Seward

  • I run Windows 7/8 and Lightroom 5.7 and am getting this error message repeatedly: "Lightroom encountered an error when reading from its preview cache and needs to quit". What do I need to do to fix it?

    I run Windows 7/8 and Lightroom 5.7. I am getting this error message repeatedly: "Lightroom encountered an error when reading from its preview cache and needs to quit". How do I fix this?

    THANKS
    Hank Wilkinson

  • Lightroom 4.4 keeps crashing saying "Lightroom encountered an error when reading from its preview cache and needs to quit" please help. I've tried reinstalling it and everything  Windows 8.1


    I assume you know where your catalogue file (the lrcat file) is located. In the same folder is a previews lrdata folder which you should delete or rename in Explorer, then start LR again.
    If I've made incorrect assumptions above, sorry, and we'll go back over it from the start.
    John

  • How to distribute my windows phone 8 app and windows store app without publishing in the store

    How can I distribute my Windows Phone 8 app and Windows Store app without publishing them in the Store?
    Is any business license or enterprise license needed?
    I am a Windows developer asking on behalf of my company, i.e. Wipro.
    I have a question about the enterprise license: we are building an app for limited users, i.e. our company employees, and we do not want to publish it in the Store.
    How do we release the app?
    What licenses etc. are needed?

    Hi,
    For developers distributing apps without publishing in the Store, a sideloading Enterprise key license through volume licensing is required.
    Starting May 1, 2014, customers who want to enable sideloading can purchase an Enterprise Sideloading key for $100 through the Open License program.
    An unlimited number of devices can be enabled for sideloading using this key.
    thanks
    diramoh

  • Firfox 8 is sometimes very slow to load random websites and others it loads quickly, IE8 loads all websites ok. Have cleared cookies and cache and allowed ff as exception in Windows firewall.

    I use FF 8.0 (x86en-US) as default browser, since update to 8.0 FF intermittently takes ages to load some websites, sometimes not loading the site at all. To rectify this I go back one page to the Google results or hyperlink and ask FF to load the webpage again. This usually solves the problem but not always. I have cleared cookies and cache and added FF to exceptions in windows firewall. The websites affected are not always the same and some that load quickly one day can take ages next time. IE8 loads all websites OK. Antivirus and malware scans all clear.


  • How to put the data into cache and distribute to nodeusing oracle coherence

    Hi Friends,
    I am writing some random-number data into a file; from that file I read the data, and I want to put it into the cache. How can I put the data into the cache and partition it across different nodes (machines) to calculate things like standard deviation, variance, etc.? (Or: how can I implement Monte Carlo using Oracle Coherence?) If anyone knows, please suggest a flow.
    Thank you.
    regards
    chandra

    Hi Robert,
    I have some bulk data in an ArrayList (or object) format that I want to put into the cache, but I am not able to. I am using the put method, cache.put(Object key, Object value), but it does not let me put the data into the cache.
    Can you please help me? I am sending my code; please go through it and tell me where I made a mistake.
    package lab3;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.Scanner;

    // Assumes a serializable Credit(double, int, double, double) class elsewhere in the project.
    public class BlockScoleData {
        /*
         * s = the spot market price
         * x = the exercise price of the option
         * v = instantaneous standard deviation of s
         * r = risk-free instantaneous rate of interest
         * t = time to expiration of the option
         * n = number of MC simulations
         */
        private static String outputFile = "D:/cache1/sampledata2.txt";
        private static String inputFile = "D:/cache1/sampledata2.txt";

        NamedCache cache;
        List<Credit> creditList = new ArrayList<Credit>();

        public void writeToFile(int noofsamples) {
            Random rnd = new Random();
            PrintWriter writer = null;
            try {
                writer = new PrintWriter(outputFile);
                for (int i = 1; i <= noofsamples; i++) {
                    double s = rnd.nextInt(200) * rnd.nextDouble();
                    int t = rnd.nextInt(5);
                    double v = rnd.nextDouble();
                    double r = rnd.nextDouble() / 10;
                    writer.println(s + " " + t + " " + v + " " + r);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                if (writer != null) {
                    writer.close();
                }
            }
        }

        public List<Credit> readFromFile() {
            Scanner scanner = null;
            try {
                scanner = new Scanner(new File(inputFile));
                while (scanner.hasNext()) {
                    Credit credit = new Credit(scanner.nextDouble(), scanner.nextInt(),
                            scanner.nextDouble(), scanner.nextDouble());
                    creditList.add(credit);
                }
                System.out.println("read the list from file: " + creditList);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                if (scanner != null) {
                    scanner.close();
                }
            }
            return creditList;
        }

        // Note: the original code called cache.put(creditList, creditList), i.e. it used
        // the mutable List itself as the key. Keys should be stable, serializable values
        // (e.g. a String); the value must also be serializable (POF or java.io.Serializable).
        public void put(String key, List<Credit> value) {
            cache = CacheFactory.getCache("mycache");
            System.out.println("name of cache is: " + cache.getCacheName());
            cache.put(key, value);
            System.out.println("read back from the cache: " + cache.get(key));
        }

        public static void main(String[] args) throws Exception {
            BlockScoleData data = new BlockScoleData();
            data.writeToFile(20);
            System.out.println("New file \"sampledata2.txt\" has been created");
            CacheFactory.ensureCluster();
            List<Credit> credits = data.readFromFile();
            System.out.println("data read from file successfully");
            data.put("credits", credits);
        }
    }
    regards
    chandra
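    Since no answer followed in the thread: the distributed statistics chandra asks about reduce to a map/combine over partial sums, which is the shape a Coherence entry aggregator gives you across nodes. A stdlib-only sketch of the arithmetic (the data and partitioning are made up; no Coherence API is used):

```java
import java.util.List;

public class VarianceSketch {
    // Each "partition" computes partial sums locally; the combiner adds them up,
    // which is exactly the shape of a distributed aggregation (map -> combine).
    static double[] partials(List<Double> partition) {
        double sum = 0, sumSq = 0;
        for (double v : partition) { sum += v; sumSq += v * v; }
        return new double[] { partition.size(), sum, sumSq };
    }

    public static void main(String[] args) {
        List<List<Double>> partitions = List.of(
                List.of(1.0, 2.0, 3.0),   // data held by node 1
                List.of(4.0, 5.0));       // data held by node 2

        double n = 0, sum = 0, sumSq = 0;
        for (List<Double> p : partitions) {
            double[] part = partials(p);  // done locally on each node
            n += part[0]; sum += part[1]; sumSq += part[2];
        }
        double mean = sum / n;
        double variance = sumSq / n - mean * mean;  // population variance
        System.out.println("mean=" + mean + " variance=" + variance); // mean=3.0 variance=2.0
    }
}
```

    In Coherence the per-partition step would run on the storage nodes and only the three partial numbers would travel over the wire, which is what makes the approach scale.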

  • Managing the Distributed Cache

    In MS documentation I often see this (or something similar):
    "The Distributed Cache service can end up in a nonfunctioning or unrecoverable state if you do not follow the procedures that are listed in this article. In extreme scenarios, you might have to rebuild the server farm. The Distributed Cache depends on Windows Server AppFabric as a prerequisite. Do not administer the AppFabric Caching Service from the Services window in Administrative Tools in Control Panel. Do not use the applications in the folder named AppFabric for Windows Server on the Start menu."
    In many blogs including technet, I see this command always used
    Restart-Service -Name AppFabricCachingService
    I often see this when updating timeout settings.
    Are these considered the same thing?
    As an example, how would you perform these steps?
    Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC.requestTimeout = "3000"
    $DLTC.channelOpenTimeOut = "3000"
    $DLTC.MaxConnectionsToServer = "100"
    Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache $DLTC
    Restart-Service -Name AppFabricCachingService

    I haven't seen a clear statement about disabling the DC. It provides many essential caches where there are otherwise no replacements. Using the restart cmdlet isn't likely to cause you to need to rebuild your farm, Microsoft just doesn't want you touching
    the Distributed Cache outside of SharePoint, basically.
    Trevor Seward

  • OSX and Windows Vista / 7 master browsers

    Hi all.
    I have spent a good deal of time troubleshooting this issue with my OSX machines being unable to browse to my windows machines through the network window of Finder. I currently have 2 Macbooks running 10.5.8 and an iMac running 10.4.11 that all have the same issue (unless I configure OSX to act as a master browser). On the windows side I have 2 XP machines, 1 Vista and 1 Windows 7 RC1. I have configured both Leopard systems to be on the same Windows workgroup; I am not aware of how to set this on Tiger, but it doesn't seem to make a difference anyhow.
    I have found that the macs can only browse to my windows machines if one of my XP systems is the master browser. If I allow either my 7 or Vista systems to be the master browser my macs cannot browse my windows workgroup at all, and I get mixed results trying to connect by using "Connect to server" and putting in the windows name, but it always works by IP address. To be sure of what I found I disabled the master browser service on all of my windows machines and then left one enabled at a time until I found that my macs can browse fine with XP as the master browser.
    I also used the "nbtstat -a <computer name>" command from a Windows command prompt to verify which PC was my master browser. On my Macs I used the "nmblookup -M -- -" command to confirm my XP machine was the master browser.
    I did a little searching on the net and found one site where the person reported similar results:
    http://robmulally.blogspot.com/2009/03/macbook-master-browser-and-my-mate.html
    I know that it is also possible for one of my macs to act as a master browser by configuring samba, but I would prefer to use my windows systems as I leave them running most of the time, whereas I mainly use my one macbook and it is only on when I use it. I would much rather know the answer why, and possibly how to fix it, rather than just doing a workaround.
    Does anyone know what the issue is with OSX not being able to utilize a Windows Vista or 7 master browser? There's not a whole lot of settings to play with on the windows side - the service is either on or off - so I wasn't sure what else to try. I am going to be replacing XP on my systems as soon as the full release of Win 7 is out, so I would like to resolve this before then.
    If anyone has ideas I would really appreciate it.
    Thanks all,
    Greg

    Well, I have found a workaround; although it is not a perfect solution, it gets the job done and it definitely works, and should work for anyone else who tries it. If anyone does happen to try it I would definitely be interested in your results. Also, if anyone knows a lot more about what changed in Vista / 7 / Server 2008 such that they no longer respond to OSX 'smbserver' requests (explained more below), or a possible 'correct' way to resolve this, I would be interested in knowing. I tested the workaround by reversing the changes and repeating the steps on both my Vista and 7 systems, and it worked in both cases.
    I did several tests watching packet traffic with Wireshark on Vista and XP while each was the master browser. I don't know 100% why this works this way, and while I would like to share with you all the packet logs I saved, I do not know of an easy way to strip my computer name info and would prefer not to post this on the internet.
    I found that OSX requests a NBSS (NetBIOS Session Service) session by a couple of different 'names' - the IP address, the network class - 192 in this case, and by something called *SMBSERVER. In the case of Windows XP it responds with a 'Positive session response' only when OSX sends the request to *SMBSERVER. In the case of Vista it returns a 'Negative session response, called name not present' to all three recipient names that OSX tries. On XP once the NBSS session is established you then see communication on the LANMAN and SMB protocols.
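    That *SMBSERVER "called name" in the NBSS session request is a generic fallback name defined by the NetBIOS-over-TCP specs (RFC 1001/1002). On the wire it is first-level encoded: the name is padded to 16 bytes and each byte is split into two nibbles, each mapped to a letter starting at 'A'. Here is a minimal Python sketch of that encoding (the function name is my own) so you can recognize the string if you look at the raw packets in Wireshark yourself:

    ```python
    def encode_netbios_name(name: str, pad: str = " ") -> str:
        """First-level encode a NetBIOS name (RFC 1001, section 14.1):
        pad to 16 bytes, then split each byte into two nibbles and
        map each nibble n to the letter chr(ord('A') + n)."""
        raw = name.ljust(16, pad).encode("ascii")
        return "".join(chr(0x41 + (b >> 4)) + chr(0x41 + (b & 0x0F)) for b in raw)

    # The generic "called name" OS X falls back to, which XP accepts
    # and Vista/7 reject with "called name not present":
    print(encode_netbios_name("*SMBSERVER"))
    # CKFDENECFDEFFCFGEFFCCACACACACACA
    ```

    The 32-character blob is what actually appears in the session request, which is why it looks like gibberish in a packet capture.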
    I found that there is a registry setting to add a NetBIOS alias. On your Vista or Win 7 machine, under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters, create a value called OptionalNames - make it a REG_SZ or REG_MULTI_SZ - then open it up and put in the IP address of your master browser computer (note that you must have a static IP configured on that system, because if it ever changes this workaround will stop working).
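    If you would rather import the change than click through regedit, a .reg file like this does the same thing (the IP address here is a made-up example - substitute your own static address):

    ```reg
    Windows Registry Editor Version 5.00

    ; Hypothetical static IP of the machine that should answer as the alias.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
    "OptionalNames"="192.168.1.10"
    ```

    Double-click the file (or run regedit /s file.reg) and then restart the Server service as described below.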
    After you have made the change, close the registry, then go to Start and type Services.msc in the quick search bar. In Services, scroll down to the 'Server' service and restart it; it will ask to restart Computer Browser and Homegroup Listener (the HG Listener will only be on Win 7) - just say Yes. Now refresh your network adapter on OS X by unplugging and re-plugging the network cable, or by disconnecting from your wireless access point and reconnecting. After a few minutes you should see your Windows machines start to show up in Finder on your Mac; it could take as long as thirty minutes as the master browser builds its cache and distributes it to the other computers. The one thing that is 'bad' about this workaround is that you will also see the IP address of your Windows master browser in the network list in Finder, because of the alias that was created in the registry.
    Also note that if you have multiple Windows systems on your network, you will need to apply this workaround on EACH computer that you want to be a potential master browser, and you will need to Stop and Disable the Computer Browser service on any machines you do not want acting as a master browser. Be careful whenever you are editing your registry; always do a full backup of the registry before making changes.
    Some other things which may be important (but which I have not tested yet):
    1. I have my Mac's set on a 'custom' network type - go to System Preferences | Network and hit the drop down at the top called 'Locations' and go to 'edit locations' and create a new one.
    2. Select your network adapter from the left and go to Advanced | WINS and type in the name of your windows workgroup. If you use both hardwire and wireless ethernet be sure to repeat this step for both adapters
    3. On my Windows Vista machine I have already changed a network security setting which may prevent this from working. By default Vista forces NTLMv2 communication, which causes issues when connecting to Apple computers from a Windows machine. Although this should only matter when connecting to an OS X system FROM Vista, I did observe in packet tracing that an NTLMv1 session was negotiated, so this may be helpful for some folks (it should not stop the computer list from displaying, but may stop you from connecting to computers). Open 'gpedit.msc' by going to Start and typing it in the quick search bar. Drill down through Local Computer Policy | Computer Configuration | Windows Settings | Security Settings | Local Policies | Security Options and look for the setting 'Network Security: LAN Manager authentication level' - open it and change the drop-down to 'Send LM & NTLM - use NTLMv2 session security if negotiated' and click Apply. Note that in Win 7 this setting is disabled by default.
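    If I remember right, that group policy setting maps to the LmCompatibilityLevel value under the Lsa key, so on editions without gpedit.msc (Home Premium etc.) the same change should be possible with a .reg fragment like this - treat it as a sketch and double-check the value meanings before importing:

    ```reg
    Windows Registry Editor Version 5.00

    ; 1 = "Send LM & NTLM - use NTLMv2 session security if negotiated"
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
    "LmCompatibilityLevel"=dword:00000001
    ```

    A reboot (or at least logging off and back on) is a safe bet after changing it.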
    4. I also had issues restarting the Server service on my Windows 7 machine, sometimes it would fail to restart or report that one service could not be stopped (I didn't record the exact error message). If you get this just restart the system and it will easily take care of the issue.
    5. You may also want to make your master browser system the 'preferred' MB on your network. To do this, go in the registry to HKLM\SYSTEM\CurrentControlSet\Services\Browser\Parameters, open the MaintainServerList string and change the value data to Yes. Then create a new string named 'IsDomainMaster', open it, and enter Yes for the value data. This will not force the system to always be the MB, but it will make it much more likely if any other Windows machines connect to your network in the future.
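    As a .reg fragment, that step looks like this (note that some Microsoft docs list 'True'/'False' for IsDomainMaster; 'Yes' is what worked in my testing, so verify on your own system):

    ```reg
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Browser\Parameters]
    "MaintainServerList"="Yes"
    "IsDomainMaster"="Yes"
    ```

    Restart the Computer Browser service (or reboot) for the change to take effect, and remember this only raises the machine's priority in the browser election - it does not guarantee it wins.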
    6. If you have a WINS server it makes life 100 times easier: just set the WINS server address in the network preferences on your Mac, or have it pushed down through DHCP, and your computers will display in Finder's Network pane. But who wants to have a Win NT/2000/2003 server running in their house?
    As I said before, there is probably a better way to get this working, but I have not found it yet. For now I am satisfied with it as is, but I may keep searching on this out of curiosity. I also hope that Apple and/or Microsoft will resolve this issue, as it is a very annoying pain point for users and it doesn't help anyone to have it broken; it shouldn't take all the above steps just to be able to click on my Windows machines in a list on OS X and access shares. I know the typical answer from both companies would be "well, it is because the other company does not support this or that feature" - if that's the case, fix it. OK, not trying to rant too much; I do hope that some of the developers of either product actually read this and see this as a necessary thing to resolve.
    I hope that this helps someone out there looking to resolve this issue as well.
    Greg
