Configuration Question on local-scheme and high-units

I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you can answer them ASAP.
- My requirement is that only 10,000 objects be kept in the cluster so that resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster reaches 15,800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
    <scheme-name>TestScheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
        <local-scheme>
            <high-units>10000</high-units>
            <eviction-policy>LRU</eviction-policy>
            <expiry-delay>1d</expiry-delay>
            <flush-delay>1h</flush-delay>
        </local-scheme>
    </backing-map-scheme>
</distributed-scheme>
Thanks
Ravi

I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you can answer them ASAP.
- My requirement is that only 10,000 objects be kept in the cluster so that resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster reaches 15,800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
It is per backing map, which is practically per node in the case of distributed caches.
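For illustration only (not part of the original reply): if the goal is roughly 10,000 objects across the whole cluster, one approach is to divide that target by the number of storage-enabled nodes and use the result as the per-node <high-units>, accepting that partition distribution is never perfectly even. A minimal sketch with hypothetical numbers:
<backing-map-scheme>
    <local-scheme>
        <!-- ~10000 objects cluster-wide / 12 storage nodes ≈ 833 per node (hypothetical sizing) -->
        <high-units>833</high-units>
        <eviction-policy>LRU</eviction-policy>
        <expiry-delay>1d</expiry-delay>
        <flush-delay>1h</flush-delay>
    </local-scheme>
</backing-map-scheme>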
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
Yes, you can get this and quite a lot of other information via JMX. Please check this wiki page for more information.
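As a minimal sketch of the JMX route (this assumes the nodes are started with -Dtangosol.coherence.management=all and a standard JMX remote connector; the "Coherence:type=Cache,*" object-name pattern and the Size/Units attribute names should be verified against your release's management documentation):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CacheMemoryStats {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port of a JMX-enabled cluster node.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // One Cache MBean is registered per cache per node; sum them for a cluster-wide view.
            Set<ObjectName> names =
                mbs.queryNames(new ObjectName("Coherence:type=Cache,*"), null);
            for (ObjectName name : names) {
                Object size  = mbs.getAttribute(name, "Size");
                Object units = mbs.getAttribute(name, "Units");
                System.out.println(name + "  Size=" + size + "  Units=" + units);
            }
        } finally {
            connector.close();
        }
    }
}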
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
    <scheme-name>TestScheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
        <local-scheme>
            <high-units>10000</high-units>
            <eviction-policy>LRU</eviction-policy>
            <expiry-delay>1d</expiry-delay>
            <flush-delay>1h</flush-delay>
        </local-scheme>
    </backing-map-scheme>
</distributed-scheme>
Thanks
Ravi

Best regards,
Robert

Similar Messages

  • How to configure HA between local controller and remote controller in DC

    Good day,
    If I have two Cisco 5508 controllers running software version 7.4, how would failover happen when the APs run in local mode, the local controller fails, and the remote controller is configured as the secondary controller? The question is: will the APs automatically convert to FlexConnect mode when they fail over to the remote controller in the DC? I know you cannot configure HA, as the controllers have to be connected with an Ethernet copper cable on the redundancy port, which gives a distance limitation of 100 m.
    Thank you in advance
    Adrian

    Hello,
    As per your query, I can suggest the following solution:
    In wireless network deployments that run controller versions earlier than 5.0, when a controller goes down, it takes a long time for all the APs and the associated clients to move to a backup controller and for wireless service to resume.
    The features discussed in the document are implemented on the controller CLI in WLC software release 5.0 in order to decrease the time that it takes for access points and their associated clients to move to a backup controller and for wireless service to resume after a controller goes down:
    In order to reduce the controller failure detection time, you can configure the heartbeat interval between the controller and access point with a smaller timeout value.
    In addition to the option to configure primary, secondary, and tertiary controllers for a specific access point, you can now also configure primary and secondary backup controllers for a specific controller. If the local controller of the access point fails, it chooses an available controller from the backup controller list in this order:
    • primary
    • secondary
    • tertiary
    • primary backup
    • secondary backup
    The access point maintains a list of backup controllers and periodically sends primary discovery requests to each entry on the list. You can now configure a primary discovery request timer in order to specify the amount of time that a controller has to respond to the discovery request of the access point before the access point assumes that the controller cannot be joined and waits for a discovery response from the next controller in the list.
    Hope this will help you.

  • Still have some configuration questions for SSD/HDDs and Premiere Pro (laptop)

    Hi guys,
    So I recently asked a question regarding configurations for a new Toshiba Qosmio (being delivered in 2 days)  I've done a lot of reading and have become somewhat more educated and somewhat more confused.  I'm having the X70-ABT3G22 custom built with an i7-4700MQ, a 256GB mSATA SSD, 2 matching 500GB (7200rpm Serial-ATA) HDDs, a 3GB GeForce GTX 770M, and 32GB DDR3L 1600MHZ SDRAM.
    It was suggested that I RAID 0 the 2 HDDs, which sounds like a great idea.  My question, then, is how to configure Premiere Pro with the 2 drives that are now left.  I was reading about disk set-up on http://ppbm7.com/index.php/tweakers-page/84-disk-setup/95-disk-setup  which was very informative.  But the information there suggests that if I have 2 drives, I should consider adding a 3rd.
    At this point, there's no room for a third internal drive because the 2 have been striped to make 1. The SSD is obviously for the OS and programs. And if Premiere Pro functions more efficiently by spreading the workload over at least 3 disks, would I be better off not striping the 2 matching HDDs and spreading the workload out?  Or should I stripe the 2 internal drives and use a USB 3.0 flash drive for the previews, media cache, or exports? Or stripe the 2 HDDs and just work with the 2 internal drives without anything external?  I'm looking for suggestions from you guys for the best way to configure this particular set-up.  Any help would be greatly appreciated.  Seriously.
    Thanks again
    Todd

    I do not know if a USB3 RAID box would be any better than the simple experiment I tried.  See my thread for item 2 in the first post.  I bought a 2 1/2" external enclosure which I researched extensively and which sounded like it had an optimized SATA III to USB3 interface.  I put one of the fastest SSD drives I am aware of (Samsung 840 Pro 256 GB) in it to see what the performance would be.  When I put this SSD in my laptop directly on an internal SATA III port, my PPBM6/7 benchmark gives me a Premiere write transfer rate of almost 500 MB/second.  When I put the same drive in the external USB3 enclosure, the Premiere write transfer is limited to slightly less than 200 MB/second, which is probably better than most hard disk drives, but nowhere near taking advantage of the drive's speed capability.
    The answer to your question is yes, you can use an external hard drive RAID enclosure effectively, but do not expect much more speed from that device.  You will have more reliability if you set it up for the right RAID mode.  Using hard drives you will have much greater capacity than SSDs offer.  The only disadvantages with a large multi-disk USB3 enclosure are that you will always need another AC cord to plug in, and dragging a large disk enclosure around is a pain.  The small 2 1/2" enclosure above is self-powered and highly portable.
    Do not believe many of the advertised read/write rates of these USB3 devices; specifically, the write rates are not what you can achieve in real-life applications.
    I just checked and your Toshiba laptop does not have eSATA so I think you are limited to only USB3 peripherals. 

  • Question about xml schemas and the use of unqualified nested elements

    Hello,
    I have the following schema:
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns="http://xml.netbeans.org/examples/targetNS"
        targetNamespace="http://xml.netbeans.org/examples/targetNS"
        elementFormDefault="unqualified">
        <xsd:element name="name" type="xsd:string"/>
        <xsd:element name="age" type="xsd:int"/>
        <xsd:element name="person">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element ref="name"/>
                    <xsd:element ref="age"/>
                   <xsd:element name="height" type="xsd:int"/>
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
    </xsd:schema>
    I am just wondering why someone would have a nested element that is unqualified; here, the "height" element.
    Can anyone explain this to me please?
    Thanks in advance,
    Julien.
    here is an instance xml document
    <?xml version="1.0" encoding="UTF-8"?>
    <person xmlns='http://xml.netbeans.org/examples/targetNS'
      xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
      xsi:schemaLocation='http://xml.netbeans.org/examples/targetNS file:/E:/dev/java/XML/WebApplicationXML/web/newXMLSchemaThree.xsd'>
    <name>toto</name>
    <age>40</age>
    <height>180</height>
    </person>

    Don't worry about it.
    There are two different styles of schemas. In one style, you define the elements, attributes, etc. at the top, and then use the "ref" form to refer to them. (I call this the "global" style.) The other style is to define elements inline where they are used. ("local" style)
    Within some bounds, they work the same and which you use is a choice of you and the tools that generate the schemas.
    A warning about the local style. It is possible to define an element in two different locations in the schema that are different. It will get past all schema validation, but it seems wrong to my sense of esthetics. With the global style, this will not happen.
    So, how did this happen? Probably one person did the schema when it only had a name and age. Then, someone else added the height element. They either used a different tool, or preferred the other style. I'm aware of no difference in the document you have described between the two styles.
    Dave Patterson
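    Purely for illustration (not from the original posts), here is the same person element written entirely in the "local" style described above, with name and age defined inline rather than referenced:
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns="http://xml.netbeans.org/examples/targetNS"
        targetNamespace="http://xml.netbeans.org/examples/targetNS"
        elementFormDefault="unqualified">
        <xsd:element name="person">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="name" type="xsd:string"/>
                    <xsd:element name="age" type="xsd:int"/>
                    <xsd:element name="height" type="xsd:int"/>
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
    </xsd:schema>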

  • Japanese, Question Marks, Locales, Eclipse, and Windows XP ????

    Hello. I am having some issues localizing JSP to Japanese. I have read a lot of stuff on the topic. I have my .properties file in unicode with native2ascii, etc.
    When I debug under Eclipse 3.0, I see the Japanese characters correctly displayed in my properties file and inside of strings internal to the program. However, when I try to print them with a System.out.println, I get question marks (??????).
    My reading tells me that the ???? indicate that the characters cannot be displayed. I am somewhat confused because in the same Eclipse context I can clearly see the Japanese characters in the debugging window.
    Thus I am missing the part where I set my standard output to correctly display the characters like Eclipse is displaying them in windows other than the "Console" window.
    My default encoding is CP1252. If I do something like:
    out = new java.io.PrintStream(System.out, true, "UTF-8");
    and print my Unicode-escaped resource from the bundle, I get a garbled UTF-8 byte representation (mojibake and replacement characters). With System.out.println I get ?????
    My first reaction would be that the Japanese fonts aren't on my system, but clearly they are as I can see them in other windows.
    When I try to show a Japanese resource on the web page that is the result of the jsp file I get ????. I can display the same characters UTF-8 encoded in a php page.
    Here is another example:
    java.util.Locale[] locales = { new java.util.Locale("en", "US"), new java.util.Locale("ja", "JP"),
        new java.util.Locale("es", "ES"), new java.util.Locale("it", "IT") };
    for (int x = 0; x < locales.length; ++x) {
        String displayLanguage = locales[x].getDisplayLanguage(locales[x]);
        System.out.println(locales[x].toString() + ": " + displayLanguage);
    }
    displays:
    en_US: English
    ja_JP: ???
    es_ES: español
    it_IT: italiano
    instead of the correct Japanese characters.
    What's the secret?
    Thanks.
    -- Gary

    What do you want to do exactly? 1. Making a window application? 2. Making a console application? 3. Making a JSP webpage?
    1. If it's window application, there's nothing to worry about if you use Swing. But if you use AWT, it's time to switch to Swing.
    2. If you're making a console application, a solution does exist, as others have pointed out, but you'd better forget it because hardly any console on any platform supports Unicode (Linux xterm may be an exception, but it probably has font problems). So even if you could display the characters on your computer, the solution isn't universal. You can't ask every user to switch the system locale and install fonts just to display a few characters!
    3. If you're making JSP, I'd advise you to use UTF-8 in webpages. Most browsers nowadays (probably more than 90%) could support UTF-8. All you need is to add the following JSP headers to your every page:
    <%@ page contentType="text/html;charset=utf-8" %>
    <%@ page pageEncoding="iso-8859-1" %>
    Now, every out.println(s); will send the correct data to the browser without the least effort from you. All conversions are automatic!
    However, just to make things even surer, you could add this HTML meta header:
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    You use Tomcat, right? I do, and I don't have any problem.
    Last words:
    But, if all you want to do with System.out.println is for debugging, you could use
    JOptionPane.showMessageDialog(null, "your string here");
    But you'd better have Java 5, or at least 1.4.2, if you want to have everything displayed correctly.
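    Purely as an illustrative sketch (file name and class name are arbitrary): writing the same string to a file as UTF-8 is a quick way to confirm that the locale data is intact and that the ???? come from the console's encoding, not from Java:

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.util.Locale;

    public class LocaleCheck {
        public static void main(String[] args) throws Exception {
            Locale ja = new Locale("ja", "JP");
            // Should be the Japanese word for "Japanese" if the locale data is present.
            String display = ja.getDisplayLanguage(ja);
            Writer out = new OutputStreamWriter(new FileOutputStream("locale-check.txt"), "UTF-8");
            try {
                out.write(display);
            } finally {
                out.close();
            }
            // A non-zero length means the characters exist; only their console rendering is broken.
            System.out.println("Wrote " + display.length() + " characters to locale-check.txt");
        }
    }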

  • Reg: Questions on Personnel Management and Organization Unit

    Hi Friends,
    I have 2 questions. Could anybody help me out on these? Let me know if I need to elaborate further...
    1. What are the basic settings we need to make for Personnel Management?
    2. I want to know how to maintain organization plans for different org units.
    Thank you,
    Ayyappa

    Adding to Archana's response:
    The Enterprise Structure consists of: Company Code, Personnel Area, Personnel Subarea.
    The Organisational Structure consists of: Organisational Units, Jobs, Positions.
    sikindar

  • High-units and expiry-delay question

    Hi,
    I'm looking at a config.xml file that hasn't been set-up by me. In the local-scheme element, there are two settings - expiry-delay and high-units. In these elements there is the following:
    <high-units>{back-size-limit 0}</high-units>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    I can't see any reference to the {} syntax anywhere in the documentation. Has this been configured incorrectly?
    -=david=-

    Hi David,
    that is "documented by example" :-)
    {back-size-limit 0} practically means: take the value from the <param-value> child of the <init-param> element whose <param-name> is back-size-limit, within the <init-params> element of the cache mapping that refers to the caching scheme containing this setting; if no such <init-param> is specified, use the default given after the space, which here is zero.
    Best regards,
    Robert
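    Purely for illustration (cache and scheme names are hypothetical): the macro is typically fed from the cache mapping, so with the mapping below {back-size-limit 0} would resolve to 10000, and would fall back to 0 if the init-param were omitted:
    <cache-mapping>
        <cache-name>example-*</cache-name>
        <scheme-name>example-near-scheme</scheme-name>
        <init-params>
            <init-param>
                <param-name>back-size-limit</param-name>
                <param-value>10000</param-value>
            </init-param>
            <init-param>
                <param-name>back-expiry</param-name>
                <param-value>30m</param-value>
            </init-param>
        </init-params>
    </cache-mapping>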

  • Duplicate data in front scheme and back scheme?

    Assume that a near cache has been defined with a front-scheme and a back-scheme. The front-scheme is a local-scheme and the back-scheme is a distributed-scheme.
    Now assume that there is a piece of data for which the current JVM is a master. That is, the data resides in the back cache.
    Assume that the JVM accesses that data.
    Will this populate the front cache even though the master is in the back cache on the same JVM?
    Does Coherence therefore store two copies of the data, one in the front cache, one in the back cache?
    Or does Coherence realize that the requested data is in the back cache on the same JVM, and therefore doesn't make another copy and populate the front cache, but instead simply redirects the get call to its own back cache?
    This basically has memory implications and I'm trying to configure a cache that works on a single JVM as well as a cluster. I am trying to get away with having just the one configuration file for coherence irrespective of whether the application is on a single storage enabled node, or a cluster.
    I cannot tell from the diagram in the Developer Guide at http://docs.oracle.com/cd/E24290_01/coh.371/e22837/cache_intro.htm#BABCJFHE
    because I'm not sure whether the front/local cache for JVM1 has not been shown as populated with A because it is the master for A, or because JVM 1 never accessed A.

    926349 wrote:
    Assume that a near cache has been defined with a front-scheme and a back-scheme. The front-scheme is a local-scheme and the back-scheme is a distributed-scheme.
    Now assume that there is a piece of data for which the current JVM is a master. That is, the data resides in the back cache.
    Assume that the JVM accesses that data.
    Will this populate the front cache even though the master is in the back cache on the same JVM?
    Yes, it will, if access is via the near cache.
    Does Coherence therefore store two copies of the data, one in the front cache, one in the back cache?
    Yes, but with usual usage patterns the front map is size-limited, so it will hold the entry just accessed by primary key, but it may evict other entries from the front map, so the front map may not be a full duplicate of the entire data set.
    Or does Coherence realize that the requested data is in the back cache on the same JVM, and therefore doesn't make another copy and populate the front cache, but instead simply redirects the get call to its own back cache?
    Nope, no such logic is in place. On the other hand, as I mentioned front-map is usually size-limited, and also the storage nodes don't have to access the cache via the front-map, they can go directly to the distributed cache by getting the back-cache from the near cache instance.
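    As a rough sketch of that last point (the cache name is hypothetical, and getBackCache() is the accessor I believe is meant here, so verify against your Coherence release):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;

    public class BackCacheAccess {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example-near-cache");
            if (cache instanceof NearCache) {
                // Read straight from the distributed back cache; this does not populate the front map.
                NamedCache back = (NamedCache) ((NearCache) cache).getBackCache();
                Object value = back.get("some-key");
                System.out.println("value = " + value);
            }
        }
    }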
    This basically has memory implications and I'm trying to configure a cache that works on a single JVM as well as a cluster. I am trying to get away with having just the one configuration file for coherence irrespective of whether the application is on a single storage enabled node, or a cluster.
    You should not try to. You should always choose the correct topology and configuration for the actual use case. There is no one-size-optimally-fits-all solution.
    Best regards,
    Robert

  • Live migration to HA failed leaving VHD on local storage and VM in cluster = Unsupported Cluster Configuration

    Hi all
    Fun one here, I've been moving non-HA VMs to HA and everything has been working perfectly until now.  All this is being performed on Hyper-V 2012 R2, Windows Server 2012 R2 and VMM 2012 R2.
    For some reason one of the VMs failed the migration with an error 10608 "Cannot create or update a highly available virtual machine because Virtual Machine Manager could not locate or access Drive:\Folder".  The odd thing is that the drive\folder is a local storage one, and I selected a CSV in the migration wizard.
    The net result is that the VM is half configured into the cluster but the VHD is still on local storage.  Hence the "unsupported cluster configuration" error.
    The question is how do I roll back? I either need to get the VM out of the cluster and back into a non-HA state, or move the VHD onto the CSV.  Not sure if the latter is really an option.
    I've foolishly clicked "Ignore" on the repair so now I can't use the "undo" option (brain fade moment on my part).
    Any help gratefully received as I'm a bit stuck with this.
    Thanks
    Rob

    Hi Simar
    Thanks for the advice, I've now got the VM back in a stable state and running HA.
    Just to finish off the thread for future I did the following
    - Shutdown the VM
    - Remove the VM from the Failover Cluster Manager (as you say this did leave the VM configuration intact)
    - I was unable to import the VM as per your instructions so I copied the VHD to another folder on the local storage and took a note of the VM configuration.
    - Deleted the VM from VMM so this removed all the configuration details/old VHD.
    - Built a new VM using the details I saved from the point above
    - Copied the VHD into the new VMs folder and attached it to the VM.
    - Started it up and reconfigured networking
    - Use VMM to make the VM HA.
    I believe I have found the reason for the initial error: it appears there was an empty folder in the Snapshot folder, probably from an old checkpoint that hadn't been cleaned up properly when it was deleted.
    The system is up and running now so thanks again for the advice.
    Rob

  • Changing high and low units while Tangosol is running

    Hi,
         I've a 3-machine setup. On the first one I've got WebLogic and Tangosol with local storage set to false.
         On the other two I have Tangosol running with local storage enabled, and I'm running a distributed service.
         Initially I've kept high units at 1000 and low units at 500. I would like to provide a screen to the admin user where he can change the high and low unit limits.
         I'm accessing the local cache using the following code:
         CacheService service = dealCacheCoarseGrained.getCacheService();
         Map mapReadWrite = ((DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager()).getBackingMap(bundle.getString("DistCache"));
         if (mapReadWrite instanceof ReadWriteBackingMap) {
             rdwMap = (ReadWriteBackingMap) mapReadWrite;
             Map mapLocal = rdwMap.getInternalCache();
             if (mapLocal instanceof LocalCache) {
                 localCache = (LocalCache) mapLocal;
                 localCache.setHighUnits(evictionDetails.getHighUnits());
                 localCache.setLowUnits(evictionDetails.getLowUnits());
             }
         }
         But in the above case, as the Tangosol node operating within the WebLogic JVM has local storage set to false, the condition if (mapLocal instanceof LocalCache) {...} is never fulfilled, and as a result I'm not able to bring about the changes from the GUI.
         Can I achieve my objective with this setup?
         Please help
         Thanks
         Jigs

    Hi Jigs,
         I would have the GUI kick off an Invocation task that uses the code you have here, then send that Invocation task to all storage-enabled nodes. You can retrieve the set of storage-enabled nodes from DistributedCacheService.getStorageEnabledMembers().
         Later,
         Rob Misek
         Tangosol, Inc.
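    A rough sketch of such an Invocation task, building on the code from the question (the cache name "DistCache" and the invocation service name are hypothetical; adjust to your configuration):

    import java.util.Map;
    import com.tangosol.net.AbstractInvocable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.DistributedCacheService;
    import com.tangosol.net.InvocationService;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.LocalCache;
    import com.tangosol.net.cache.ReadWriteBackingMap;

    public class SetUnitsTask extends AbstractInvocable {
        private final int highUnits;
        private final int lowUnits;

        public SetUnitsTask(int highUnits, int lowUnits) {
            this.highUnits = highUnits;
            this.lowUnits  = lowUnits;
        }

        // Runs on each storage-enabled member the task is sent to.
        public void run() {
            NamedCache cache = CacheFactory.getCache("DistCache");
            DistributedCacheService service =
                (DistributedCacheService) cache.getCacheService();
            Map backing = ((DefaultConfigurableCacheFactory.Manager)
                service.getBackingMapManager()).getBackingMap("DistCache");
            if (backing instanceof ReadWriteBackingMap) {
                backing = ((ReadWriteBackingMap) backing).getInternalCache();
            }
            if (backing instanceof LocalCache) {
                LocalCache local = (LocalCache) backing;
                local.setHighUnits(highUnits);
                local.setLowUnits(lowUnits);
            }
        }

        // Called from the storage-disabled GUI node; sends the task to all storage-enabled members.
        public static void main(String[] args) {
            DistributedCacheService distSvc = (DistributedCacheService)
                CacheFactory.getCache("DistCache").getCacheService();
            InvocationService invSvc =
                (InvocationService) CacheFactory.getService("InvocationService");
            invSvc.query(new SetUnitsTask(2000, 1000), distSvc.getStorageEnabledMembers());
        }
    }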

  • Help on Time type 0903, Identifying Schema and Rule configuration?

    Can you please help me with the following:
    1. What is time type 0903?
    2. How to identify schema and rule configuration?
    3. How do I identify the number of employees affected in case I have to make a configuration change?
    Thanks

    Hello,
    1) A time type is identified according to the purpose it is used for. Check table T555A for details on why it is used in your configuration.
    2) There is no configuration in schemas and rules. Schemas and rules contain the programs.
    3) What configuration changes are you planning to make? Depending on this, we can then try to identify who is affected.
    Hope this answers your questions.
    Let me know if you need more info on how to configure time management and write schema and rules.

  • Configure Jabber for Mac with local CUCM and WebEx Connect

    Hi, I was wondering if anyone has been able to configure the Jabber for Mac 8.6.2 client to use the WebEx Connect presence server with local CUCM and Unity Connection servers. The preferences accounts tab does not show or allow the addition of the voice services. I have added, from a working local CUCM preferences plist file, what I believe are the correct entries; however, I still cannot see the accounts on the preferences tab. We currently do not have a local Cisco presence server, hence the requirement to trial the WebEx Connect server, as I can't get past the first configuration step without it.
    regards
        paul

    Hi - I have done this via the admin portal but still cannot get the Jabber client for Mac to register for voice.  The Windows version works fine for the same user and CUCM device.   Are there any other settings that need to be enabled specific to the Mac client?
    Thanks.

  • SAP-JEE, SAP_BUILDT, and SAP_JTECHS and Dev Configuration questions

    Hi experts,
    I am configuring NWDI for our environment and have a few questions that I'm trying to get my arms around.  
    I've read we need to check-in SAP-JEE, SAP_BUILDT, and SAP_JTECHS as required components, but I'm confused on the whole check-in vs. import thing.
    I placed the 3 files in the correct OS directory and checked them in via the check-in tab in CMS.  Next, the files show up in the import queue for the DEV tab.  My question is: what do I do next?
    1.  Do I import them into DEV?  If so, what is this actually doing?  Is it importing into the actual runtime system (i.e. DEV checkbox and parameters as defined in the landscape configurator for this track)? Or is just importing the file into the DEV buildspace of NWDI system?
    2.  Same question goes for the Consolidation tab.    Do I import them in here as well? 
    3.  Do I need to import them into the QA and Prod systems too?  Or do I remove them from the queue?
    *** Development Configuration questions ***
    4. When I download the development configuration, I can select the DEV or CON workspace.  What is the difference?  Does DEV point to the sandbox (or central development) runtime system and CONS to the consolidation runtime system as defined in the landscape configurator?  Or is this the DEV and CON workspace/buildspace of the NWDI system?
    5.  Does the selection here dictate the starting point for the development?  What is an example scenario where I would choose DEV vs. CON?
    6.  I have heard about the concept of a maintenance track and a development track.  What is the difference, and how do they differ from a setup perspective?   When would a developer pick one over the other?
    Thanks for any advice
    -Dave

    Hi David,
    "Check-In" makes SCA known to CMS, "import" will import the content of the SCAs into CBS/DTR.
    1. Yes. For these three SCAs specifically (they only contain buildarchives, no sources, no deployarchives) the build archives are imported into the dev buildspace on CBS. If the SCAs contain deployarchives and you have a runtime system configured for the dev system then those deployarchives should get deployed onto the runtime system.
    2. Have you seen /people/marion.schlotte/blog/2006/03/30/best-practices-for-nwdi-track-design-for-ongoing-development ? Sooner or later you will want to.
    3. Should be answered indirectly.
    4. Dev/Cons correspond to the Dev/Consolidation system in CMS. For each developed SC you have 2 systems with 2 workspaces in DTR for each (inactive/active)
    5. You should use dev. I would only use cons for corrections if they can't be done in dev and transported. Note that you will get conflicts in DTR if you do parallel changes in dev and cons.
    6. See the link in No. 2.
    Regards,
    Marc

  • Question for integration star and snow flake schema in data warehouse

    Dear Reader,
    I'm facing a problem like this:
    I have two data warehouses, one using a star schema, the other a snowflake schema. I would like to integrate both of them into one data warehouse. What strategy should these two data warehouses adopt in order to integrate into one?
    Should I scrap both data warehouses and build a new one instead, or scrap one of them and use the other?
    What factors should be considered in order for me to more easily resolve the differences between the two data warehouses.
    Please advise. Thank you very much.

    Hi Mallis,
    This is a very broad question and the answer depends on many factors. Please go through the following articles to get an
    understanding of what the differences are and when to use which.
    When do you use a star schema and when to use a snowflake schema -
    http://www.information-management.com/news/10000243-1.html
    Star vs Snowflake Schemas – what’s your belief? –
    http://www.networkworld.com/community/blog/star-vs-snowflake-schemas-%E2%80%93-what%E2%80%99s-your-belie
    Hope this helps!

  • Extend AD Schema and configure Logon Manager

    Hi,
    I tried to extend the schema and configure the Logon Manager, but with no success.
    When I extend the schema in the ESSO Admin Console a success message appears, but when I check Active Directory for the objects that should appear, none of them are there.
    Can someone please explain what I'm doing wrong, or give me a link where the steps to extend the schema and configure the Logon Manager are explained?
    thanks

    Hi,
    thanks for the answer.
    The AD is now extended successfully.
    But I have one more problem: on the agent installed on the server machine the apps appear correctly and the users are synchronized correctly, but on another machine where the agent is installed I can't connect to AD (I think), and the apps do not appear.
    When the agent's "Connect to Active Directory" window appears, I enter the credentials: Administrator and the password of the AD administrator. Am I right?
    I think there's a problem with the connection between AD and that machine.
    Thanks.
