[SOLVED] Best Practice for systemd, udisks, udev, polkit, and openbox

I have to admit I am stymied here.  I have been in the wiki and forums most of the day and am just not finding the solution here.
I have both udisks and udisks2 installed
I am running a pure systemd setup
I do not use a login manager
I use startx with a case statement in my .xinitrc, keyed on an environment variable, to pick the environment I want.
For now, let's limit this to xfce4 and openbox.
I used to be able to log in and mount USB and SD volumes using Thunar, Dolphin, and palimpsest.  I can no longer do that; I receive an error indicating I do not have the proper privileges.
Even more fun, I log into a console and use udiskie, and can mount volumes without issue.
I start X with startx, launch a terminal program, and udiskie will no longer allow me to mount.  I can read volumes mounted by udiskie before I started X.
I have tried various combinations of ck-launch-session and dbus-launch in concert with openbox-session and startxfce4.  Nothing allows me to automount as a normal user.  This used to work, but something has gone south, and I am not clear when.
Anyone have suggestions for a stable solution here?
A couple files as they exist right now:
ewaller@odin:~ 1005 %cat .xinitrc
#xset b off
setxkbmap -option ctrl:nocaps
case $WM in
    openbox)
        exec ck-launch-session openbox-session
        ;;
    e17)
        exec dbus-launch enlightenment_start
        ;;
    vb)
        VirtualBox -startvm "Windows XP" -fullscreen
        ;;
    xfce4)
        exec ck-launch-session startxfce4
        ;;
esac
ewaller@odin:~ 1006 %cat /etc/polkit-1/localauthority/50-local.d/10-udisks.pkla
[Local Users udisk2]
Identity=unix-group:users
Action=org.freedesktop.udisks2.*
ResultAny=yes
ResultInactive=yes
ResultActive=yes
[Local Users udisk]
Identity=unix-group:users
Action=org.freedesktop.udisks.*
ResultAny=yes
ResultInactive=yes
ResultActive=yes
ewaller@odin:~ 1007 %
Last edited by ewaller (2012-10-16 03:28:49)

Are you starting X on the same vt you logged into, i.e. startx -- vt1 for tty1, or startx -- vt$(fgconsole) (or falconindy's xserverrc) for the general case?  If you don't, systemd-logind cannot keep track of the session, and the -kit permissions will thus break.
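For reference, such an xserverrc follows the same idea; a minimal sketch of one (saved as ~/.xserverrc; exact contents assumed, not copied from falconindy's file):
#!/bin/sh
# Start X on the VT we logged in on, so systemd-logind can track the session.
exec /usr/bin/X -nolisten tcp "$@" vt$XDG_VTNR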
You can see sessions using loginctl:
└$ loginctl
SESSION UID USER SEAT
1 1000 zekesulastin seat0
4 1000 zekesulastin seat0
2 sessions listed.
You can see information about your sessions using loginctl show-session $num - the session must be Active:
└$ loginctl show-session 4
Id=4
Timestamp=Sun, 14 Oct 2012 23:18:56 -0400
TimestampMonotonic=49646619798
DefaultControlGroup=name=systemd:/user/zekesulastin/4
VTNr=1
TTY=tty1
Remote=no
Service=login
Leader=12771
Audit=4
Type=tty
Class=user
Active=yes <-- This must be "yes"
State=active
KillProcesses=no
IdleHint=yes
IdleSinceHint=1350271136064237
IdleSinceHintMonotonic=49646611251
Name=zekesulastin
(I personally have a conditional in bash_profile to exec startx -- vt1 if I log in on tty1.)
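Something like this minimal sketch at the end of ~/.bash_profile (assuming you log in on tty1; adjust the vt number otherwise):
# Start X automatically on tty1, on the same VT, so the session stays active.
if [[ -z $DISPLAY && $(tty) == /dev/tty1 ]]; then
    exec startx -- vt1
fi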
Last edited by ZekeSulastin (2012-10-15 04:37:59)

Similar Messages

  • Best Practice for External Libraries, Shared Libraries and Web Dynpro

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working?  I am not interested in something that merely works; I want to know the best practice.  What is the best way to limit the number of jars that need to be kept in a shared library/external library?  And when is a sharing ref service/etc. a valid approach versus hunting down the jars in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns".  The component needs to talk to SQL Server.  You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a 'less trodden' path, since that is exactly the area where more security problems are discovered.  I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk.  I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL.  The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party check box and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • Best Practices for sharing media with iMovie and FCPX

    So I've a large iMovie Events directory, and would like to use that media with both iMovie and FCPX projects.
    I'd rather not duplicate the media, so would prefer to import as references into FCPX.
    The dilemma is that I see that it's possible to modify or move media from within the iMovie application, and therefore break the reference to that media with FCPX.
    I only see two options:  (1) never modify the location/name of media in the iMovie Events directory (even from within the iMovie app), since I would break an FCPX link if that media is referenced, or (2) always import (copy) the iMovie events into the FCPX Event Library, making an independent original, so that I can confidently operate on those media files in either application.
    I'd surely rather not have to do (2) (e.g. doubling my storage demands) to gain the flexibility of using either application to edit the video, but I really don't want to live with the restrictions of (1).
    Thoughts / Solutions?  What might you consider as options or best practices?

    Unless there is some other reason, users should own the right to share their mailboxes - it shouldn't be something that demands administrator management (if only so that the administrators aren't swamped by user requests for sharing their mailboxes). 
    For true shared mailboxes, when the mailbox is created, full access is granted by an administrator.

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and returning it into Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per each instance of the bean and then close the Recordset
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display? Yes.
    And use logic to avoid printing the redundant part of the data? No.
    I believe in minimising the number of queries. If it is a simple one-to-many join on a db table, then one query is better than 1 + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings; i.e. one object, with a collection attribute to hold the related "many" records.
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree: in terms of best practices, what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly or too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining how backwards the existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best Practice for Installation of Both Leopard and Aperture 2 upgrade.

    I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice to install both, but haven't been able to find one, only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove to a prior thread would be helpful.
    Thanks for pointing me in the right direction.
    Steve

    steve hutchcraft wrote:
    I've tried searching for a best practice to install...
    • First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
    • Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.) go to Utilities/Disk Utility/First Aid and Repair Permissions. Repairing Permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
    • When you upgrade the OS do a "clean install."
    • RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
    • After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
    • If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
    Good luck!
    -Allen Wicks

  • Best practices for using Normalizer in ASA and in AIP-SSM

    Both PIX OS 7.x and IPS 5.x software have a concept of "traffic normalization". PIX OS on ASA can do virtual reassembly, IPS on SSM (so far as I know) can do physical reassembly and fragmentation of IP packets. Also, both ASA and SSM can do TCP normalization. For example, they both can "check inconsistent retransmissions" and protect against "TTL evasion attacks". I realize that PIX OS has only basic normalization functions and the SSM is much more configurable.
    The question is: what are the best practices here? Is it better to disable some IP/TCP PIX OS checks / IPS signatures on the ASA and/or SSM? Is it better to use just the SSM for traffic normalization? Does anybody have personal experience here?
    Also, there is a BugID CSCsd04327 - "ASA all out of order packets are dropped when sending to ssm"
    "When ips ssm is inline slowness is reported. show service-policy shows that the number of out of order packets reported match exactly the number of no buffer drops (even with queue-limit option). Performance hit is not the result of tcp normalization (on IPS 5.x ssm) in this case, but rather an issue with asa normalizer."
    To me it seems to be more logical to have normalization function on the firewall, but there may be drawbacks in doing this.
    So, those who're using ASA with SSM, please share your experience.
    Thx.

    Yes, this is almost correct ;)
    TCP SRP (Stream Reassembly Processor) is turned OFF on the SSM and cannot be enabled, contrary to 4200 appliances, but IP FRP (Fragmentation Reassembly Processor) is functioning on the SSM.
    The testing of 7.2(1) shows the following:
    When you configure "policy-map" to send packets to the SSM the "tcp-map" parameter "queue-limit", which has the value of zero by default, is set to an X (the X is unknown). This means that the ASA now only accepts the TCP segments which are sent in the correct order. More specifically, the gaps in SEQs are not allowed anymore. When for example, the ASA receives a TCP segment which has a SEQ within the window, but the previous TCP segment has been lost, it sends an ACK to the sender to enforce retransmition of the lost segment. As a result the sender retransmits both segments. Only after that the ASA forwards both segments to the SSM. This basically means that SSM always sees in-order TCP segments. That it is why SRP is not needed on the SSM.
    There are at least two problems however.
    The first problem is the performance impact.
    The ASA now acts almost like a proxy. And, so far as I know, it doesn't support SACK (Selective ACKs). First, when the ASA does TCP SEQ randomization it doesn't change the SEQ values within the SACK TCP option. This simply breaks SACK. Second, even if you turn the randomization mechanism OFF, then, I believe, the ASA will not selectively ACK the lost TCP segments, as it simply doesn't support this mechanism.
    The second problem is THE SECURITY HOLE.
    By default the ASA doesn't check TCP checksums. The 4200 appliances do check by default. But as we now know, SRP is turned OFF on the SSM... So this means that the SSM module can easily be evaded. The hacker only needs to mix attacking traffic with random TCP segments that have bad TCP checksums. The SSM module will see the mixture of the two and will not recognize the attack. The target host will drop the TCP segments with bad checksums and see only the attacking traffic... This has been successfully verified in the lab.
    Of course, this security hole can be closed with the "tcp-map" parameter "checksum-verification", but it will definitely have a performance impact.
    The last note: All of the above has never been documented by Cisco. So, use at your own risk, etc.
    I hope you will read this message, Marcoa. All of this MUST be documented. Once again, the default behaviour of the ASA opens up a big security hole.
    Regards,
    Oleg Tipisov,
    REDCENTER,
    Moscow

  • Best Practice for Host Named Site Collections and Web Apps

    Looking for advice on setting up host named site collections.  If I am reading many of the TechNet articles and blogs correctly, I should 1) have only one top-level web app for host named site collections and 2) not have a host header for that web app.  If that's correct, I am looking for advice.  We have 7 separate domains that we support in our farm.  Currently each of those domains is divided into web applications based on the domain: *.contoso.com, *.trains.com, *.bakers.com, etc.
    Is the concept now that all of the host named site collections fall under that one web app?  How do we deal with SSL for each of those separate domains, which all have their own certificates?
    Thanks in advance for your comments. 
    NLewis

    Yes. For creating host named site collections, you first create a host-header-less web app and then create the host named site collections under that web app. However, this only works for the case where all the host named site collections end in one domain: you can create host named site collections such as intranet.contoso.com, my.contoso.com and portal.contoso.com, as they all end in *.contoso.com.
    As for your environment, if you have web apps which cater to different domains like *.contoso.com, *.trains.com and *.bakers.com, you need to create separate web apps, as they end in different domains. Then you can have a separate wildcard SSL certificate for each of those web apps.
    Hope this helps.
    Thanks
    Mohit

  • [solved] Best Practice for SSDs with crypto on it regarding TRIM

    Hi,
    I was doing some research on this matter but did not find much information that was more useful than confusing.
    The setup is the following: I have an SSD (Crucial, Marvell-Controller) with two partitions: a small one for /boot and a bigger one for the rest, which is a LUKS-Container. I left some unpartitioned space at the end of the SSD. I'd like to *not* enable TRIM on the LUKS-device for security purposes.
    I was wondering now:
    I read that TRIM reduces the write amplification of garbage collection. But shouldn't garbage collection not write at all, and just erase cells that hold no information?
    If TRIM helps keeping up performance: do I need to explicitly trim the unpartitioned space? Or does this area behave like the spare area?
    If TRIM of the unpartitioned space is necessary: what's the most elegant way to do so?
    If someone could shed a little light on this matter, that would help a lot and would be greatly appreciated.
    Last edited by Ovion (2015-03-09 19:20:13)

    I have the same setup. Crucial SSD, LUKS, TRIM (cron.weekly fstrim). A full hexdump (minus gigabytes of random data) looks like this: https://bpaste.net/raw/505157 (tell me about it)
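    The weekly job itself is trivial; a minimal sketch, assuming the script is saved as /etc/cron.weekly/fstrim and / is the filesystem on the LUKS container:
    #!/bin/sh
    # Trim all unused blocks on the root filesystem once a week.
    fstrim -v /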
    There is no issue with security. At least, none I care about. So the attacker can see how much free space there is [and where]. The where part is important since lots of small files give a different picture [lots of small free spaces in between] than a single very large file would [no free space in between], assuming there is no fragmentation worth mentioning [which Linux filesystems are usually good at]. So an attacker could probably make some guesses about your amount of data and file sizes. On the other hand I don't see how that's important, in ecryptfs you get this kind of info for free, and I have all sorts of files in all sorts of sizes either way, so it's not a big secret.
    The question is, when all it takes to crack your encryption setup is a keylogger or a $5 wrench, does it really matter?
    Don't use TRIM on your SSD if you don't want to (there are lots of reasons not to TRIM... like better data recovery chances if you delete something by accident). But don't fool yourself thinking it's somehow really important for your security...
    As for unpartitioned space, if it ever was in use before, you need to trim it once. Create a partition on it, then blkdiscard the partition, then delete the partition. That way it's good until it's "in use" again because you dd all over it or had it resynced in a RAID.
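    A minimal sketch of that (device and partition names are hypothetical; double-check them, since blkdiscard irreversibly discards data):
    fdisk /dev/sda          # create a partition, say /dev/sda3, covering the free space
    blkdiscard /dev/sda3    # discard (TRIM) the entire partition
    fdisk /dev/sda          # delete /dev/sda3 again, leaving the space unpartitioned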
    Apart from that, TRIM does all the things you said (reduce write amplification, performance, etc. etc.) but it's not like the SSD can't take it if you're not writing 24/7 because it's a database server burning up or something.
    Last edited by frostschutz (2015-03-09 19:19:16)

  • What is the Best Practice for Variant List, Add, Edit and Display Forms?

    Requirement:
    I have a single list.  The list has a large number of columns and a large number of items (let's say 20,000).
    I want to show users a different view of the list based on clicking on a different left-hand navigation option.
    Lets say I have four types of users:  Sales, Manufacturing, Shipping and Finance. I would like to have four options in the left-hand navigation.
    All of them would be pointing at the same list, BUT I want each of them to have a custom list form.  The only differences between the custom list forms would be:
    Each would have its own set of views, and hence its own default view.
    Each would have its own New, Edit and Display Forms.  The only difference between the forms in one variant list and another would be: The order of the columns and which columns are modifiable.
    I would like to achieve this in SharePoint Designer in such a way that the "users" could still add/modify views and could even modify the forms from the SharePoint Menu.  BTW, I don't want to use InfoPath for obvious reasons.
    What is the best approach to meeting this requirement?  I have at least 20 sites and 70 lists overall that need variant forms made.
    HELP!!
    Savin
    BTW, we are using SharePoint 2013 and I selected the wrong forum *sigh*.  But I think it's probably the same answer.
    Cheers, Savin Smith

    Hi,
    I understand that you want to have different forms based on different views.
    To my knowledge, there is no out-of-the-box method to achieve it.
    As a workaround, you can add JavaScript code to the different view pages.
    For example, to open a different new form based on the view, you can read window.location, determine the current view from it, and then change the onclick event of the "New item" button.
    For more information, you can refer to:
    http://css-tricks.com/snippets/javascript/get-url-and-url-parts-in-javascript/
    http://samsharepoint.wordpress.com/2013/05/01/change-the-default-sharepoint-ok-and-cancel-button/
    Thanks,
     Linda

  • Best practice for calling application module methods and plsql code

    In my application I am experiencing problems with connection pooling, I seem to be using a lot of connections in my application when only a few users are using the system. As part of our application we need to call database procedures for business logic.
    Our backing beans, call methods on the application module which in turn call a database procedure. For instance in the backing bean we have code like this to call the application module method.
    // Calling Module to generate new examination/test.
    CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
    String testId = appMod.createTest( userId, examId, centerId).toString();
    AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
    // Close the call
    System.out.println("Calling releaseRootApplicationModule remove");
    Configuration.releaseRootApplicationModule(appMod, true);
    System.out.println("Completed releaseRootApplicationModule remove");
    return returnResult;
    In the application module method we have the following code.
    System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
    CallableStatement cs = null;
    try {
        cs = getDBTransaction().createCallableStatement(
            "begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
        cs.registerOutParameter(1, Types.NUMERIC);
        cs.setString(2, p_userId);
        cs.setString(3, p_examId);
        cs.setString(4, p_centerId);
        cs.executeUpdate();
        returnResult = cs.getInt(1);
        System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
    } catch (SQLException se) {
        throw new JboException(se);
    } finally {
        if (cs != null) {
            try {
                cs.close();
            } catch (SQLException s) {
                throw new JboException(s);
            }
        }
    }
    I have read in one of Steve Muench presentations (Oracle Fusion Applications Team' Best Practises) that calling the createRootApplicationModule method is a bad idea, and to call the method via the binding interface.
    I am assuming that calling createRootApplicationModule uses many more resources and database connections than calling the method through the binding interface, such as:
    BindingContainer bindings = getBindings();
    OperationBinding ob = bindings.getOperationBinding("customMethod");
    Object result = ob.execute();
    Is this the case? Also, is using getDBTransaction().createCallableStatement the best way of calling database procedures? Would it be better to expose PL/SQL packages as web services and then call them from the application module? Is that more efficient?
    Regards
    Orlando

    Thanks Shay, this is now working.
    I successfully got the binding to the application method in the pagedef.
    I used the following code in my backing bean.
    package view.backing;

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class Testdatabase {

        private DCBindingContainer bindingContainer;

        public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }

        public DCBindingContainer getBindingContainer() { return bindingContainer; }

        // Calling module to validate the user and return user role details.
        // userId and examId are supplied by the caller.
        public static String validateUser(String userId, String examId) {
            System.out.println("Getting Binding Container from Home Backing Bean");
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            System.out.println("Obtain binding");
            OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
            System.out.println("Set username parameter");
            operationBinding.getParamsMap().put("p_userId", userId);
            System.out.println("Set password parameter");
            operationBinding.getParamsMap().put("p_testId", examId);
            Object result = operationBinding.execute();
            System.out.println("Obtain result");
            String userRole = result.toString();
            System.out.println("Result is " + userRole);
            return userRole;
        }
    }

  • Best practices for configure Rogue Detector AP and trunk port?

    I'm using a 2504 controller.  I don't have WCS.
    My questions are about the best way to configure a Rogue Detector AP.
    In my lab environment I setup the WLC with 2 APs.  One AP was in local mode, and I put the other in Rogue Detector mode.
    The Rogue Detector AP was connected to a trunk port on my switch.  But the AP needed to get its IP address from the DHCP server running on the WLC.  So I set the native vlan of the trunk port to be the vlan on which the WLC management interface resides.  If the trunk port was not configured with a native vlan, the AP couldn't get an address through DHCP, nor could the AP communicate with the WLC.  This makes sense because untagged traffic on the trunk port will be delivered to the native vlan.  So I take it that the AP doesn't know how to tag frames.
    Everything looked like it was working ok.
    So I connected an autonomous AP (to be used as the rogue), and associated a wireless client to it.  Sure enough it showed up on the WLC as a rogue AP, but it didn't say that it was connected on the wire.  From the rogue client I was able to successfully ping the management interface of the WLC.
    But the WLC never actually reported the rogue AP as being connected to the wired network.
    So my questions are:
    1. What is the correct configuration for the trunk port?  Should it not be configured with a native vlan?  If not, then I'm assuming the rogue detector AP will have to have a static IP address defined, and it would have to be told which vlan it's supposed to use to communicate with the WLC.
    2.  Assuming there is a rogue client associated with the rogue AP, how long should it reasonably take before it is determined that the rogue AP is connected to the wired network?  I know this depends on if the rogue client is actually generating traffic, but in my lab environment I had the rogue client pinging the management interface of the WLC and still wasn't being picked up as an on-the-wire rogue.
    Thanks for any input!!

    What are the autonomous AP's (the rogue AP's) wired and wireless MAC addresses?
    They have to differ by +1 or -1. If the wired MAC is x.x.x.x.x.05, the wireless MAC should be x.x.x.x.x.04 or 06. It is not going to detect the rogue if the difference is more than +1 or -1.
    Does the switch see the rogue AP's wired MAC in its MAC table?
    The rogue detector listens to ARPs to get all the wired MAC info and forwards it to the WLC, which compares it with the wireless MACs; if there is a +1 or -1 difference, the AP is flagged as a rogue on the wire, and the client connected to it is also marked as found on the wire.
    Regarding trunking: only the native VLAN matters per trunk link, so just configure the right VLAN as native and we're done.
    It is not mandatory to keep the rogue detector on the management VLAN of the WLC. It can be on another L3 VLAN as long as it can join the WLC to forward the learnt wired MACs.
    So if we don't have a +1/-1 difference on rogues, you have to use RLDP, which will work with your existing setup to find rogues on the wire; there is a performance hit when we use this feature on local mode APs.
    Note: for AP join, the AP can't understand trunking, meaning if the AP is connected to a trunk it will only talk to its native VLAN irrespective of AP mode; however, the rogue detector listens on the trunk port to learn MACs via ARPs from different VLANs and forwards them to the WLC using the native VLAN.

  • Best Practices for vMotion QoS in N1kv and UCS?

    Hi,
    I'm looking at a few technical documents on the recommended way to provide QoS for vMotion.
    From VMware's website on vSphere deployment with the N1kv, they are using:
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform transmit violate drop
    This rate-limits vMotion traffic to 3Gbps and excess traffic will be dropped.
    Would this be better if I were to use:
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform set-cos-transmit 4 exceed drop
    policy-map type vethernet vMotion
         switchport access vlan 900
         service-policy type qos in vmotion
         pinning id 0
    I'm marking vMotion traffic with a CoS of 4 and pinning it to Fabric-A. I will have my management VLAN pinned to Fabric-B.
    Also, do I need to configure QoS settings in UCS as well?
    For the upstream switch, if I'm using a Catalyst 3750, would it be sufficient just to do a mls qos trust cos?
    Appreciate your advice.
    Thanks..

    Steven,
    Yes, you can use the modified QoS policy which changes the CoS values.
    We also need some additional configuration on the Fabric Interconnect so that the CoS values set by the Nexus 1000v are preserved.
    The M81KR adapter works in a “no trust” QoS model which means that it will overwrite the CoS value set by an upstream entity (Nexus 1000v for example). For Nexus1000v deployments, it is highly recommended to do the CoS marking at the Nexus 1000v level. This means changing the QoS model to “trust” on the M81KR.
    To create a QoS policy to achieve this, look into the attached file. We need to have option "Full" enabled in it.
    Here are further details about this configuration from UCS Manager help:
    Host Control field
    Whether Cisco UCS controls the class of service (CoS). This can be:
    None—Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list regardless of the CoS value assigned by the host.
    Full—If the packet has a valid CoS value assigned by the host, Cisco UCS uses that value. Otherwise, Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list.
    Regards
    Nethaji V

  • Best Practices for Household iOS Devices/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing, primarily apps, music and video, among multiple iOS devices/Apple IDs.  If there is a specific article, please point me to it.
    Here is my situation: 
    We currently have 3 iPads (2 kids', 1 dad's) in the household and one iTunes account on a Windows computer.  I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password that I kept secret.  As the kids have grown older, I found myself constantly entering my password as their interest in music/apps/video increased.  I liked this approach because all content was shared; I disliked it because I was constantly asked to input the password for all downloads.
    So, I recently set up an individual account for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads).  Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps, music and videos among myself and the kids?  Is it multiple accounts on the computer and some sort of sharing?
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this?
    I am already using the accessKey functionality and the subforms' defaultCommand.
    But I have had problems setting focus to objects on a page, like tables. I would like a button to return the focus to the table after it has executed a command like delete.
    I have implemented a solution for which I found inspiration in several threads and other webpages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better pathways to go, like
    out-of-the-box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (are there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Thanks in advance
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There has to be a binding in the jsff page for delete, commit and rollback
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the Java class I use
    --- It is published in adfc-config.xml as a request-scoped managed bean
    public class ClientUtils {

        public ClientUtils() {
        }

        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton)event.getSource();
            String focusOn = (String)rcb.getAttributes().get("focusField");
            String actionToDo = (String)rcb.getAttributes().get("actionField");
            UIComponent component = JSFUtils.findComponentInRoot(focusOn);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) || "Rollback".equals(actionToDo)) {
                BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding = bindings.getOperationBinding(actionToDo);
                Object result = operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton)event.getSource();
            String focusOn = (String)rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks =
                    Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found if located in
                // a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = JSFUtils.findComponentInRoot(id);
            return component.getClientId(JSFUtils.getFacesContext());
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there is a best practice for this?
    Well, HTML (and this is the user interface you see) follows a tab-index navigation that you traverse with Tab and Shift+Tab. Anything else is a shortcut, for which you use mnemonics (as you already do) or hotkeys (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between non-web environments (which I think you and your users have a background in) and client desktop environments. Browsers block some keyboard functionality for their own purposes, so you may first have to find a list of keys that work across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here, avoiding buttons altogether.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in : http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register shortcut keys that perform a specific action (e.g. register ctrl+d to delete the current row you are on, or press F11 to execute a query, similar to Oracle Forms frmres files). However, be aware that this involves some code you have to write (actually quite some code, to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (are there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually these are implementations, as they come with example code for you to use and customize, aren't they? So what more is this question asking for? Also note that global buttons don't quite have anything in common with the question you asked. I assume you want to see them as an implementation of the Forms toolbar that operates on the form or table the focus is in. This however does not work on the web, as there is nothing that keeps track of which component has focus and which iterator (data block) it belongs to. This would involve even more coding (though it is possibly doable).
    Frank
