LMS 4.0 failover or HA

Hello,
Reading the documentation, it seems the only ways to build a failover/HA solution with LMS are a Veritas cluster or the ESX/ESXi HA features.
Due to a limited budget, does anyone know another way to create a failover solution?
Thanks in advance.

These are the only two supported solutions.  I have not heard of anyone rolling their own HA solution for LMS.

Similar Messages

  • Software for managing SNMP Pix failover traps

    Hi, we need to monitor PIX failover with SNMP. Going through the PIX readme, it shows an example of how to do this with CiscoWorks for Windows. Is this the only Cisco product that can manage this? We are using LMS; is there a way to monitor failover events with LMS?
    Kurtis Durrett

    Thanks!
    The command originally didn't work by itself, but after some changes to the other SNMP configuration the traps were received.
    SNMP configurations below:
    Switch#show run | inc snmp
    snmp-server community (removed) RW 5
    snmp-server trap-source Vlan411
    snmp-server chassis-id (Removed)
    snmp-server enable traps fru-ctrl
    snmp-server enable traps entity
    snmp-server enable traps envmon fan temperature
    snmp-server host *.*.*.* (Removed)  fru-ctrl envmon
    Logging:
    Switch#show run | inc log
    service timestamps log datetime localtime
    logging buffered 16384
    logging trap notifications
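    If it helps, here is a minimal sketch of the firewall-side configuration that forwards failover-related syslog messages (e.g. the %PIX-1-104001/104002 switchover events) as SNMP traps to the LMS server. The interface name, community string, and trap receiver address are assumptions, and the exact keywords vary by PIX/ASA release, so verify against your code:
    snmp-server community public
    snmp-server host inside 10.1.1.10 trap
    snmp-server enable traps syslog
    logging on
    logging history errors
    The "logging history" level controls which syslog messages are converted to traps; failover switchovers are logged at high severities, so "errors" should be more than enough. On older PIX 6.x code the trap enable command may simply be "snmp-server enable traps" with no keyword.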

  • Mounted Volume not showing up with Windows 2012 R2 failover cluster

    Hi
    We configured some drives as mounted volumes and configured them with failover clustering, but the mounted volume details are not showing up in Failover Cluster Manager; it shows the details as seen below.
    Hoping someone can help correct this issue.
    Thanks in advance
    LMS

    Hi LMS,
    Are you concerned about the disk being shown as a GUID? A cluster mount point disk showing up as a volume GUID in a Server 2012 R2 failover cluster is expected. I created a mount point inside a cluster and saw the same behavior: instead of the mount point name, the volume GUID is shown after the volume label. That appears to be by design.
    How to configure volume mount points on a Microsoft Cluster Server
    http://support.microsoft.com/kb/280297
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
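    If you need to (re)create the dependency that KB280297 describes, a rough PowerShell sketch is below; the disk resource names are assumptions taken from how they appear in Failover Cluster Manager:
    # List the clustered disk resources to confirm their names
    Get-ClusterResource
    # Make the mount-point disk depend on the disk that hosts its mount folder
    Set-ClusterResourceDependency -Resource "Cluster Disk 2" -Dependency "[Cluster Disk 1]"
    The dependency ensures the hosting disk comes online before the mount-point disk during failover.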

  • LMS 4.0 support for ASA firewall

    I need to add an ASA 5520 to LMS 4.0, mainly for configuration archiving. The ASA seems to be supported on LMS 3.2 as per the link below.
    http://www.cisco.com/en/US/docs/net_mgmt/ciscoworks_lan_management_solution/3.2/device_support/table/lms32sdt.html
    I added the ASA directly to the DCR with the right login credentials and SNMPv3 strings, but LMS still fails to detect the ASA.
    Thanks in advance.

    Thanks, Nael, for the reply. Please find the SNMP configuration on the ASA below:
    snmp-server group SNMPGRP v3 auth
    snmp-server user SNMPUSR SNMPGRP v3 encrypted auth md5 a9:ba:79:44:5b:b0:98:65:88:30:a1:8b:7b:69:a2:9c
    snmp-server host inside 10.88.80.11 trap version 3 SNMPGRP
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    The show version is given below.
    ASA5520# sh ver
    Cisco Adaptive Security Appliance Software Version 8.2(3)
    Compiled on Fri 06-Aug-10 07:51 by builders
    System image file is "disk0:/asa823-k8.bin"
    Config file at boot was "startup-config"
    ASA5520 up 8 days 19 hours
    failover cluster up 25 days 14 hours
    Hardware:   ASA5520, 512 MB RAM, CPU Pentium 4 Celeron 2000 MHz
    Internal ATA Compact Flash, 256MB
    BIOS Flash M50FW080 @ 0xffe00000, 1024KB
    Encryption hardware device : Cisco ASA-55x0 on-board accelerator (revision 0x0)
                                 Boot microcode   : CN1000-MC-BOOT-2.00
                                 SSL/IKE microcode: CNLite-MC-SSLm-PLUS-2.03
                                 IPSec microcode  : CNlite-MC-IPSECm-MAIN-2.04
    0: Ext: GigabitEthernet0/0  : address is 001f.9e50.8a24, irq 9
    1: Ext: GigabitEthernet0/1  : address is 001f.9e50.8a25, irq 9
    2: Ext: GigabitEthernet0/2  : address is 001f.9e50.8a26, irq 9
    3: Ext: GigabitEthernet0/3  : address is 001f.9e50.8a27, irq 9
    4: Ext: Management0/0       : address is 001f.9e50.8a28, irq 11
    5: Int: Internal-Data0/0    : address is 0000.0001.0002, irq 11
    6: Int: Internal-Control0/0 : address is 0000.0001.0001, irq 5
    Licensed features for this platform:
    Maximum Physical Interfaces    : Unlimited
    Maximum VLANs                  : 150
    Inside Hosts                   : Unlimited
    Failover                       : Active/Active
    VPN-DES                        : Enabled
    VPN-3DES-AES                   : Enabled
    Security Contexts              : 2
    GTP/GPRS                       : Disabled
    SSL VPN Peers                  : 2
    Total VPN Peers                : 750
    Shared License                 : Disabled
    AnyConnect for Mobile          : Disabled
    AnyConnect for Cisco VPN Phone : Disabled
    AnyConnect Essentials          : Disabled
    Advanced Endpoint Assessment   : Disabled
    UC Phone Proxy Sessions        : 2
    Total UC Proxy Sessions        : 2
    Botnet Traffic Filter          : Disabled
    This platform has an ASA 5520 VPN Plus license.
    Serial Number: JMXXXXX
    Running Activation Key: XX
    Configuration register is 0x1
    Configuration last modified by enable_1 at 15:05:29.268 AST Sun Jun 12 2011
    When I add the ASA to LMS using SNMPv3, Device Management shows a blue box with a question mark (shown below).
    Is the ASA supported on LMS 4.0 with SNMPv3? Troubleshooting on the LMS suggests that LMS might only support SNMPv1 and v2c.
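    If SNMPv3 support for the ASA turns out to be the problem, one fallback worth trying is SNMPv1/v2c read-only access for LMS polling. A minimal sketch follows; the community string is an assumption, you may need to adjust the existing v3 host entry, and you would also update the device's SNMP credentials in the DCR before re-running data collection:
    snmp-server community lmsro
    snmp-server host inside 10.88.80.11 poll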

  • In Oracle RAC, if a SELECT is still fetching rows when its node is evicted, how does failover to another node work internally?

    In Oracle RAC, if a user runs a SELECT query and the node serving it is evicted while rows are still being fetched, how does failover to another node happen internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.
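    For reference, a minimal client-side tnsnames.ora sketch with TAF configured for SELECT failover; the alias, host, and service names here are assumptions:
    ORCL_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = orcl)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
        )
      )
    With TYPE=SELECT, sessions that were mid-fetch when the node failed are reconnected to a surviving instance and the fetch continues from where it left off.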

  • LMS Services in Solaris 10

    Dear *,
    If for any reason my LMS application is not opening, which services should I check on the Solaris 10 server, what state should they be in, and how do I check them?
    Thanks,
    Aamir

    Post the output of the /opt/CSCOpx/bin/pdshow command from the server, as well as the exact error you're seeing when trying to access LMS.  What version of LMS is this?
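    As a starting point, assuming a default /opt/CSCOpx install path, the daemon states can be checked and the daemon manager restarted (as root) with:
    /opt/CSCOpx/bin/pdshow
    /etc/init.d/dmgtd stop
    /etc/init.d/dmgtd start
    In the pdshow output, the core processes should typically show states such as "Running normally" or "Program started - No mgt msgs received"; anything that is down or repeatedly restarting points at the problem daemon.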

  • Reporting Services as a generic service in a failover cluster group?

    There is some confusion on whether or not Microsoft will support a Reporting Services deployment on a failover cluster using scale-out, and adding the Reporting Services service as a generic service in a cluster group to achieve active-passive high
    availability.
    A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
    http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
    This article (http://technet.microsoft.com/en-us/library/bb630402.aspx) on the other hand states: "Failover clustering is supported only for the report server database; you
    cannot run the Report Server service as part of a failover cluster."
    This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
    Best Regards,
    Peter Wretmo

    Hi Peter,
    Thanks for your posting.
    As Lukasz said in the
    blog, failover clustering with SSRS is possible. However, during the failover there is some time during which users will receive errors when accessing SSRS since the network names will resolve to a computer where the SSRS service is in the process of starting.
    Besides, there are several considerations and manual steps involved on your part before configuring the failover clustering with SSRS service:
    Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server.  If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause
    a significant failover impact to the entire environment.
    SSRS fails over independently of SQL Server.
    If SSRS is running, it is going to do work on behalf of the overall deployment, so it will be active. To make SSRS passive, stop the SSRS service on all passive cluster nodes.
    So, SSRS is designed to achieve High Availability through the Scale-Out deployment. Though a failover clustered SSRS deployment is achievable, it is not the best option for achieving High Availability with Reporting Services.
    Regards,
    Mike Yin
    TechNet Community Support
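    For completeness, the generic-service approach discussed in Lukasz's blog can be sketched with the Failover Clustering PowerShell module roughly as follows; the role name is an assumption, "ReportServer" is typically the service name for a default SSRS instance, and (as noted above) scale-out remains the recommended HA model for SSRS:
    # Add the SSRS Windows service as a generic clustered service role
    Add-ClusterGenericServiceRole -ServiceName "ReportServer" -Name "SSRS-GenericService"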

  • LMS will not refresh after course is closed

    Hello,
    I am hoping someone can give me insight into what might be causing this problem.
    I have a number of courses uploaded to both a staging site and a live site. We recently made a few changes to a course which I thought were quite minor, and I uploaded the new SCORM files to the LMS staging site for testing.
    The changes were:
    the continue button on the Quiz Review slide has always had the text "exit" instead of "continue" but I recently changed the colour from grey to red.
    the message that a user received when they pass the quiz is different
    The old courses worked on both sites as follows:
    If a course was started and a user quit part way through the course, the LMS would say resume course when the user logged back in.
    If the course was failed then the user would see retake on the LMS when they logged back in and could then retake the course.
    If the user passed the course, it was moved from an "In Progress"  area on the LMS to a "Completed" area on the LMS automatically.
    Since I made the relatively minor changes to the courses, they will no longer move unless you choose to refresh the page. All courses that have not been updated still work as they should, which is what leads me to believe it is not an LMS issue but rather a SCORM file issue.
    I am using Captivate 7 and have recently updated all my Adobe products, so I should have the most recent version of the software. I am wondering if something has changed with how Captivate creates SCORM files?
    Also, under Reporting/Advanced LMS settings for the courses, I have selected Send data on every slide and Set exit to normal after completion (these have always been the settings).
    The files are SCORM 2004.
    Any input would be very much appreciated.

    Hi Rod,
    I could be wrong but I don't think that is the issue.
    The updated courses are only on the staging site, as we do not transfer them until they are tested. Only the courses that have been recently updated are not working properly, but I know that the correct course is being loaded because the button on the Quiz Review is now red. Yesterday I did a test on the live site and uploaded one of the updated courses to see if the problem persisted there. The same problem occurred on the live site. When I removed this updated course and reverted back to the original, the problem went away. It seems it is definitely an issue with the newly updated course.
    The courses were originally created using Captivate 6 but were saved in Captivate 7 months ago, as I have been using it since it first came out. The original files that were working were created in Captivate 7.
    Thanks

  • LMS 4.0 not starting after upgrade

    I am running LMS 4.0 on Windows 2008 Server, recently changed from an eval copy to licensed (for 750 devices). It had been running and working for several months. The license application worked fine and it ran a couple of days that way.
    I then downloaded the CiscoWorks_LMS_4.0.x_WIN2K8_R2.exe patch so that I could upgrade the server OS to Windows Server 2008 R2. Per the instructions in the patch readme I first proceeded to upgrade my server to 2008 R2. Then I installed the patch (patch only).
    After doing this (and restarting the server) the CiscoWorks daemons will no longer start. I have confirmed my paging file (swap) size is set to start at 8192 MB (8 GB) as required. I tried manually starting (net start crmdmgtd) without success. I see the initial dmgt_start_lock lock file in C:\Program Files (x86)\CSCOpx\objects\dmgt\ready, but none of the other process lock files ever pop in and out as they normally would, and the processes (and thus the CiscoWorks server) never start.
    Any suggestions?

    The "patch" is as far as I know some sort of launcher for the installation setup.exe.
    When I run the "patch", all I get is the question of where my installer is, so I can install LMS.
    I have not yet tried to run this on an existing LMS installation. Nor upgrading the OS on an LMS installation :-)
    Does the "patch" have a different behavior when LMS is already installed?
    If not, I suggest using cleansystem.exe in CSCOpx\bin to remove the current install, reinstalling (using the patch to launch the installer), and restoring a backup.
    Cheers,
    Michel
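    If you do end up reinstalling, the restore step would look roughly like this on Windows; the NMSROOT and backup paths are assumptions for a default install:
    net stop crmdmgtd
    "C:\Program Files (x86)\CSCOpx\bin\perl" "C:\Program Files (x86)\CSCOpx\bin\restorebackup.pl" -d D:\lms_backup
    net start crmdmgtd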

  • What is the solution for NAT failover with 2 ISPs?

    I now have leased-line links to 2 ISPs for Internet connectivity. I separate users' packets with access lists, for example WWW goes to ISP1 and mail and other protocols go to ISP2. Say the link to ISP1 goes down: I need WWW traffic to fail over to ISP2, and vice versa.
    The problem is the ACL on the NAT statement.
    If you configure it like this:
    access-list 101 permit tcp any any eq www --> WWW traffic to ISP1
    access-list 101 permit tcp any any eq smtp --> backup for mail traffic if the link to ISP2 is down
    access-list 102 permit tcp any any eq smtp --> mail traffic to ISP2
    access-list 102 permit tcp any any eq www --> backup for WWW traffic to go to ISP2
    ip nat inside source list 101 interface s0 overload
    ip nat inside source list 102 interface s1 overload
    In this case both the ISP1 and ISP2 links are up.
    When you apply these ACLs to the NAT statements, NAT processes each statement in order (if I'm incorrect please correct me), so mail traffic will also match ACL 101 and be NATed with the ISP1 address only.
    Please advise a solution for this.
    TIA

    Hi,
    If you have two serial links connecting to two different service providers, then you can try this:
    access-list 101 permit tcp any any eq www
    access-list 102 permit tcp any any eq smtp
    route-map isp1 permit 10
    match ip address 101
    set interface s0
    route-map isp2 permit 10
    match ip address 102
    set interface s1
    ip nat inside source route-map isp1 interface s0 overload
    ip nat inside source route-map isp2 interface s1 overload
    ip nat inside source list 103 interface s0 overload
    ip nat inside source list 104 interface s1 overload
    ip route 0.0.0.0 0.0.0.0 s0
    ip route 0.0.0.0 0.0.0.0 s1 100
    If either link fails, traffic will automatically prefer the other serial interface.
    I have not tried this config, just worked it out on logic. Please go through it and try if possible.
    Please see the Note 2 column at:
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080093fca.shtml#related
    Hope it helps
    regards
    vanesh k
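    One caveat worth adding: if a provider fails upstream while the serial interface stays up/up, the floating static routes above will never switch over. A rough sketch of IP SLA tracking that handles this (the probe target, timers, and object numbers are assumptions, and the exact syntax depends on the IOS release; older code uses "ip sla monitor"/"rtr" instead):
    ip sla 1
     icmp-echo 198.51.100.1 source-interface Serial0
    ip sla schedule 1 life forever start-time now
    track 10 ip sla 1 reachability
    ip route 0.0.0.0 0.0.0.0 Serial0 track 10
    ip route 0.0.0.0 0.0.0.0 Serial1 100
    When the probe toward ISP1 fails, the tracked default route is withdrawn and the floating static route via Serial1 takes over.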

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's not cached on the server side). So if you have really decided to use dedicated hardware for storage (maybe you have a reason I don't know about...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB does cache I/O on both the client and server sides, and you can also use (non-clustered) Storage Spaces as a back end of it, so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual Fibre Channel, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest
    VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering and replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
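    As an aside, the shared virtual hard disk option described above can be wired up with a couple of PowerShell commands on Server 2012 R2; the VM names and CSV path below are assumptions:
    # Attach the same .vhdx (stored on a CSV) to both guest-cluster VMs as a shared SCSI disk
    Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations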

  • LMS application registration problem

    Hi,
    I have a problem with the application registration of Campus Manager 5.2.1 under LMS 3.2.
    Right now in the "Functional" tab of the LMS portal the "Campus Manager" view is empty.
    In "Common Services" - "Server" - "Home Page Admin" - "Application Registration" Campus Manager is missing in the list of registered applications.
    I can connect to the Campus Manager using the direct link http://<servername>:1741/campus/adminIntro.do.
    The "CM" tab on the LMS portal page works.
    I tried to register it this way:
    In the CS Homepage Admin I click on "Register" - "Register from Templates" - "Next"
    Then I get the list of templates. I select "Campus Manager" - "Next".
    I enter these values:
    Server name: fully qualified name of the server
    Server Display Name: hostname of the server
    Port: 1741
    Protocol: http
    When I click on "Next" - "Finish", Campus Manager is not added to the list of registered applications, and I get no error message.
    Did I enter wrong values? Where can I find corresponding log/debug messages?

    It sounds like the CMIC database may be corrupt.  There is a process to recover this, but it's somewhat complicated.  If you open a TAC service request, your engineer can walk you through the process.  This will clear out the CMIC database and reregister all applications.  That should sort you out.

  • Backup problem with LMS 3.0 on Solaris

    Hi All,
    I'm encountering problems with LMS 3.0 when I try to do a backup. After you hit OK on the window that says "Do you want to backup now?", an error pops up saying: "Enter a new directory name or ask the system administrator of the Ciscoworks Common Services server to make the directory accessible to user casuser", so I cannot proceed with the backup process. When I do the backup, my login privilege is admin. I even tried to back up to the same partition as my CSCOpx directory, but to no avail.
    Appreciate your help on this guys. Our LMS 3.0 runs on Solaris platform. Thanks in advance.

    Hi Joe,
    Thanks for your prompt response. By the way, what is casuser? Is it also one of the users that can be found under Local User Setup in Common Services? The scenario here is that I am not the one who installed CiscoWorks LMS 3.0 on the client side, so I am not aware of how they did the installation of the application. I remember assigning the casuser password during installation on our other clients that have LMS 3.0.
    Will the write access on the backup directory for casuser need to be granted by the Solaris root administrator?
    Thanks for your help.
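    For reference, granting that access would typically be done by root, roughly as follows; the backup path is an assumption, while casuser/casusers are the standard CiscoWorks account and group created at install time:
    mkdir -p /var/adm/lmsbackup
    chown casuser:casusers /var/adm/lmsbackup
    chmod 750 /var/adm/lmsbackup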

  • No data in ecc variables in failover mode

    Hi all,
    I've got a problem with a custom enterprise data layout when CAD is connected to the secondary node.
    With "set enterprise..." I set a special layout for CAD enterprise data in my script. This works fine as long as CAD is connected to the primary node. When CAD is connected to the secondary node there is no data in the expanded call variables, and CAD displays the default layout. Changing the default layout and writing the content to the predefined variables works fine. So it seems only the expanded call variables are not working in failover mode.
    Any ideas?
    Br
    Sven
    (UCCX v8.5)

    Hi Sven,
    Looks like TAC helped fix the issue.  Previously we had rebooted both servers without success.  TAC suggested restarting the Desktop Enterprise Service on both servers (I did the Pub first, then the Sub).  I've verified ECC variables are being sent to CAD correctly.
    The root cause might have been a network outage we had a week ago.  We were using the Desktop Workflow Administrator at the time of the outage.
    Kyle
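    For anyone else hitting this, the restart TAC suggested can usually be done from the UCCX platform CLI; the exact service name and command availability depend on the UCCX release, so treat this as a sketch:
    utils service restart Cisco Desktop Enterprise Service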

  • Difference between scalable and failover cluster

    What is the difference between a scalable and a failover cluster?

    A scalable cluster is usually associated with HPC clusters, though some might argue that Oracle RAC is this type of cluster: the workload can be divided up and sent to many compute nodes. It is usually used for a vectored workload.
    A failover cluster is where a standby system or systems are available to take the workload when needed. Usually used for scalar workloads.
