Best Practice for Installation of Both Leopard and the Aperture 2 Upgrade

I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice for installing both, but haven't been able to find one--only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove toward a prior thread would be helpful.
Thanks for pointing me in the right direction.
Steve

steve hutchcraft wrote:
I've tried searching for a best practice to install...
• First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
• Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.) go to Utilities/Disk Utility/First Aid and Repair Permissions (see the Terminal equivalent after this list). Repairing Permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
• When you upgrade the OS do a "clean install."
• RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
• After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
• If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
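For reference, here is the Terminal equivalent of that Repair Permissions step (a minimal sketch; the verifyPermissions/repairPermissions verbs apply to 10.4/10.5-era OS X and were removed in later releases, and "/" assumes the boot volume):

diskutil verifyPermissions /
diskutil repairPermissions /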
Good luck!
-Allen Wicks

Similar Messages

  • Best practice for installation oracle 11g rac on windows 2008 server x64

    Hello!
    Can somebody recommend a good book or other literature regarding "best practice for installation oracle 11g rac on windows 2008 server x64"? Thanks in advance!
    Best regards,
    Christian

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • Best Practice for External Libraries, Shared Libraries and Web Dynpro

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error about the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested in just something that works; what is the best practice here? What is the best way to limit the number of JARs that need to be kept in a shared library/external library? And when is sharing a ref service, etc. a valid approach vs. hunting down the JARs in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly the area where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general using a DMZ is a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for version control with B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) > Check for Updates.
    If you select the third-party checkbox and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • Best Practices for sharing media with iMovie and FCPX

    So I've a large iMovie Events directory, and would like to use that media with both iMovie and FCPX projects.
    I'd rather not duplicate the media, so would prefer to import as references into FCPX.
    The dilemma is that it's possible to modify or move media from within the iMovie application, and thereby break FCPX's reference to that media.
    I only see two options:  (1) Never Ever modify the location/name of media in the iMovie Events file (even from within the iMovie app) since I would break an FCPX link if that media is referenced, or (2) always import (copy) the iMovie events into the FCPX Event Library making an independent original so that I can confidently operate on those media files in either application.
    I'd surely rather not have to do (2) (e.g. doubling my storage demands) to gain the flexibility of using either application to edit the video, but really don't want to live with the restrictions of (1).
    Thoughts / Solutions?  What might you consider as options or best practices?

    Unless there is some other reason, users should own the right to share their mailboxes - it shouldn't be something that demands administrator management (if only so that the administrators aren't swamped by user requests for sharing their mailboxes). 
    For true shared mailboxes, when the mailbox is created, full access is granted by an administrator.

  • Best practices for using Normalizer in ASA and in AIP-SSM

    Both PIX OS 7.x and IPS 5.x software have a concept of "traffic normalization". PIX OS on ASA can do virtual reassembly, IPS on SSM (so far as I know) can do physical reassembly and fragmentation of IP packets. Also, both ASA and SSM can do TCP normalization. For example, they both can "check inconsistent retransmissions" and protect against "TTL evasion attacks". I realize that PIX OS has only basic normalization functions and the SSM is much more configurable.
    The question is: what are the best practices here? Is it better to disable some IP/TCP PIX OS checks / IPS signatures on the ASA and/or SSM? Is it better to use just the SSM for traffic normalization? Does anybody have personal experience here?
    Also, there is a BugID CSCsd04327 - "ASA all out of order packets are dropped when sending to ssm"
    "When ips ssm is inline slowness is reported. show service-policy shows that the number of out of order packets reported match exactly the number of no buffer drops (even with queue-limit option). Performance hit is not the result of tcp normalization (on IPS 5.x ssm) in this case, but rather an issue with asa normalizer."
    To me it seems to be more logical to have normalization function on the firewall, but there may be drawbacks in doing this.
    So, those who're using ASA with SSM, please share your experience.
    Thx.

    Yes, this is almost correct ;)
    TCP SRP (Stream Reassembly Processor) is turned OFF on the SSM and cannot be enabled, contrary to the 4200 appliances, but IP FRP (Fragmentation Reassembly Processor) is functioning on the SSM.
    The testing of 7.2(1) shows the following:
    When you configure "policy-map" to send packets to the SSM, the "tcp-map" parameter "queue-limit", which has the value of zero by default, is set to an X (the X is unknown). This means that the ASA now only accepts TCP segments which are sent in the correct order; more specifically, gaps in SEQs are not allowed anymore. When, for example, the ASA receives a TCP segment which has a SEQ within the window but the previous TCP segment has been lost, it sends an ACK to the sender to enforce retransmission of the lost segment. As a result the sender retransmits both segments. Only after that does the ASA forward both segments to the SSM. This basically means that the SSM always sees in-order TCP segments. That is why SRP is not needed on the SSM.
    There are at least two problems however.
    The first problem is the performance impact.
    The ASA now acts almost like a proxy. And, so far as I know, it doesn't support SACK (Selective ACKs). First, when the ASA does TCP SEQ randomization it doesn't change the SEQ values within the SACK TCP option. This simply breaks SACK. Second, even if you turn the randomization mechanism OFF, I believe the ASA will not selectively ACK the lost TCP segments, as it simply doesn't support this mechanism.
    The second problem is THE SECURITY HOLE.
    By default the ASA doesn't check TCP checksums. The 4200 appliances do check by default. But as we now know, SRP is turned OFF on the SSM... So, this means that the SSM module can easily be evaded. The attacker only needs to mix attacking traffic with random TCP segments that have bad TCP checksums. The SSM module will see the mixture of the two and will not recognize the attack; the target host will drop the TCP segments with the bad checksums and see only the attacking traffic... This has been successfully verified in the lab.
    Of course, this security hole can be closed with the "tcp-map" parameter "checksum-verification", but it will definitely have a performance impact; a config sketch follows below.
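    For reference, a sketch of closing the hole that way (PIX OS 7.x style; the map/class/policy names are just illustrative):

    tcp-map TMAP-CHECKSUM
      checksum-verification
    class-map CMAP-IPS
      match any
    policy-map PMAP-IPS
      class CMAP-IPS
        set connection advanced-options TMAP-CHECKSUM
        ips inline fail-open
    service-policy PMAP-IPS global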
    One last note: none of the above has ever been documented by Cisco. So, use at your own risk, etc.
    I hope you will read this message, Marcoa. All of this MUST be documented. Once again, the default behaviour of the ASA opens up a big security hole.
    Regards,
    Oleg Tipisov,
    REDCENTER,
    Moscow

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them in Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per bean instance, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the Recordset? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display?" Yes.
    "And use logic to avoid printing the redundant part of the data?" No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records.
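    For instance, a minimal sketch of that mapping (the table, column and class names here are invented for illustration):

    import java.sql.*;
    import java.util.*;

    // One parent bean per "one" record; the "many" records live in a collection.
    class Order {
        final int id;
        final String customer;
        final List<String> items = new ArrayList<String>();
        Order(int id, String customer) { this.id = id; this.customer = customer; }
    }

    class OrderDao {
        List<Order> loadOrders(Connection con) throws SQLException {
            List<Order> orders = new ArrayList<Order>();
            Map<Integer, Order> byId = new HashMap<Integer, Order>();
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(
                "SELECT o.order_id, o.customer, i.item_name "
                + "FROM orders o LEFT JOIN items i ON i.order_id = o.order_id "
                + "ORDER BY o.order_id");
            while (rs.next()) {
                int id = rs.getInt("order_id");
                Order order = byId.get(id);
                if (order == null) {                     // first row for this parent
                    order = new Order(id, rs.getString("customer"));
                    byId.put(id, order);
                    orders.add(order);
                }
                String item = rs.getString("item_name"); // null when the order has no items
                if (item != null) {
                    order.items.add(item);               // "many" side lives in the collection
                }
            }
            rs.close();
            stmt.close();
            return orders;
        }
    }

    The JSP then just iterates over the parents and their item collections; no logic is needed to suppress redundant parent data.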
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    As for everything seeming so cumbersome and difficult compared to using Struts and JDO before: anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards the existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • [SOLVED] Best Practice for systemd, udisks, udev, polkit,and openbox

    I have to admit I am stymied here.  I have been in the wiki and forums most of the day and am just not finding the solution here.
    I have both udisks and udisks2 installed
    I am running a pure systemd
    I do not use a login manager
    I use startx that has a switch clause based on an environmental variable to pick the environment I want.
    For now, let's limit this to xfce4 and openbox.
    I used to be able to log in and mount usb and SD volumes using Thunar, Dolphin, and palimpsest.  I can no longer do that; I receive an error indicating I've not the proper privileges.
    Even more fun, I log into a console and use udiskie, and can mount volumes without issue.
    I start X with startx, launch a terminal program, and udiskie will no longer allow me to mount.  I can read volumes mounted by udiskie before I started X.
    I have tried various combinations of ck-launch-session and dbus-launch in concert with openbox-session and startxfce4. Nothing allows me to automount as a normal user. This used to work, though something has gone south, and I am not clear when.
    Anyone have suggestions for a stable solution here?
    A couple files as they exist right now:
    ewaller@odin:~ 1005 %cat .xinitrc
    #xset b off
    setxkbmap -option ctrl:nocaps
    case $WM in
    openbox)
        exec ck-launch-session openbox-session
        ;;
    e17)
        dbus-launch
        enlightenment_start
        ;;
    vb)
        VirtualBox -startvm "Windows XP" -fullscreen
        ;;
    xfce4)
        exec ck-launch-session startxfce4
        ;;
    esac
    ewaller@odin:~ 1006 %cat /etc/polkit-1/localauthority/50-local.d/10-udisks.pkla
    [Local Users udisk2]
    Identity=unix-group:users
    Action=org.freedesktop.udisks2.*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes
    [Local Users udisk]
    Identity=unix-group:users
    Action=org.freedesktop.udisks.*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes
    ewaller@odin:~ 1007 %
    ewaller@odin:~[1] 1007 %
    Last edited by ewaller (2012-10-16 03:28:49)

    Are you starting X onto the same vt you logged in on, i.e. startx -- vt1 for tty1, or startx -- vt$(fgconsole) (or falconindy's xserverrc) for a more general case? If you don't, systemd-logind cannot keep track of the session and the -kit permissions will thus break.
    You can see sessions using loginctl:
    └$ loginctl
    SESSION UID USER SEAT
    1 1000 zekesulastin seat0
    4 1000 zekesulastin seat0
    2 sessions listed.
    You can see information about your sessions using loginctl show-session $num - the session must be Active:
    └$ loginctl show-session 4
    Id=4
    Timestamp=Sun, 14 Oct 2012 23:18:56 -0400
    TimestampMonotonic=49646619798
    DefaultControlGroup=name=systemd:/user/zekesulastin/4
    VTNr=1
    TTY=tty1
    Remote=no
    Service=login
    Leader=12771
    Audit=4
    Type=tty
    Class=user
    Active=yes <-- This must be "yes"
    State=active
    KillProcesses=no
    IdleHint=yes
    IdleSinceHint=1350271136064237
    IdleSinceHintMonotonic=49646611251
    Name=zekesulastin
    (I personally have a conditional in bash_profile to exec startx -- vt1 if I log in on tty1.)
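    Something like this sketch in ~/.bash_profile (untested as written here):

    # start X on vt1 only when logging in on tty1, and only if X isn't already running
    if [[ -z $DISPLAY && $(tty) == /dev/tty1 ]]; then
        exec startx -- vt1
    fi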
    Last edited by ZekeSulastin (2012-10-15 04:37:59)

  • Best Practice for Host Named Site Collections and Web Apps

    Looking for advice on setting up host named site collections. If I am reading many of the TechNet articles and blogs correctly, I should 1) have only one top-level web app for host named site collections and 2) not have a host header for that web app. If that's correct, I am looking for advice. We have 7 separate domains that we support in our farm. Currently each of those domains is divided into web applications based on the domain: *.contoso, *.trains.com, *.bakers.com, etc.
    Is the concept now that all of the host named site collections fall under that one web app? How do we deal with the SSL for each of those separate domains, which all have their own certificates?
    Thanks in advance for your comments. 
    NLewis

    Yes, for creating host named site collections, first you create a host-header-less web app and then create host named site collections under that web app. However, this only covers the case where all the host named site collections end in one domain: you can create host named site collections such as intranet.contoso.com, my.contoso.com, portal.contoso.com etc., as they all end in *.contoso.com.
    As per your environment, if you have web apps which cater to different domains like *.contoso.com, *.trains.com, *.bakers.com, you need to create separate web apps, as they end in different domains. Then you can have a separate wildcard SSL certificate for each of those web apps.
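    If it helps, the PowerShell shape of that is roughly the following sketch (the names, URLs and accounts are placeholders):

    # host-header-less web application for the host named site collections
    New-SPWebApplication -Name "HNSC Web App" -Port 443 -SecureSocketsLayer `
        -ApplicationPool "HNSCAppPool" `
        -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\spapppool")

    # host named site collections created under it
    New-SPSite "https://portal.contoso.com" `
        -HostHeaderWebApplication "https://hnscwebapp" `
        -Name "Portal" -OwnerAlias "CONTOSO\spadmin" -Template "STS#0"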
    Hope this helps.
    Thanks
    Mohit

  • [JavaDB] Best practices for installation

    Hello,
    this topic is related to Java DB, but it is not a JDBC problem, so I deem that {forum:id=1050} was not the best fit; I post it to this more general forum on the grounds that it's a deployment question regarding a standard JDK component.
    We (are only starting to) develop a JavaSE application that will require a local database, using the embedded JavaDB as featured in the JDK 6 installation.
    I wonder how we can design the installation of the application's database in the user environment.
    Here is my current understanding of JavaDB, along with the installation techniques I imagine. Can you suggest pros/cons of each, or suggest another way?
    My understanding:
    - Java DB is an in-memory DB which persists its data in proprietary files. It can be used embedded (in the application's JVM) or as a DB server (in a dedicated server JVM). In my case there is a single process that needs persistence, and I have no requirement that suggests having a DB process alive on its own, so the embedded mode makes the most sense.
    Among the "persistence" proprietary files there are two types: the ones that hold the actual data, and "control files" (checkpoints, transaction logs, ...). For a stable base (when all traffic is over), all data is in the data files.
    My needs:
    After being installed on a customer's PC, the system will consist of a Java application featured as a (collection of) jar file(s), and a JavaDB structured with a schema, and (optionally) populated with default data.
    Installation strategies:
    I see 4 strategies:
    1) Archive the appropriate persistence files tree from the build/test environment, and unzip it on the customer's machine.
    2) Implement Java code in the application installer, that creates a DB and creates the schema, issuing the relevant DDL SQL statements for the schema and DML statements for its default population.
    3) Implement Java code in the application itself that, at startup, tests whether the DB exists and, if not, creates and populates it as in approach 2 (see the sketch after this list).
    4) Design application code (probably only entity and DAO classes) that leverages whatever "magic" JPA machinery automatically creates the mapped DB structures when initialization code "persists" default values the first time. A while ago my team used that in a Glassfish+MySQL prototype, but I don't know if it's a JPA or a Glassfish feature, and I don't know how solid this approach is, especially how it deals with upgrades, when the DB may already exist.
    Note that the rate of changes in the DB structure when the application is upgraded may be a factor: if it helps I can consider it non-existent (and accept that DB upgrades require offline scripts that migrate the DB first, before installing the software).
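    For what it's worth, approach 3 might look like this minimal sketch (embedded Derby JDBC URL; the schema/table names are placeholders):

    import java.sql.*;

    public class DbBootstrap {
        // Opens the embedded DB, creating and populating it on first run.
        public static Connection open() throws SQLException {
            // ";create=true" creates the database if it does not exist yet
            Connection con = DriverManager.getConnection("jdbc:derby:appdb;create=true");
            DatabaseMetaData md = con.getMetaData();
            ResultSet rs = md.getTables(null, "APP", "CUSTOMER", null);
            boolean exists = rs.next();
            rs.close();
            if (!exists) {
                Statement st = con.createStatement();
                st.executeUpdate("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100))");
                st.executeUpdate("INSERT INTO customer VALUES (1, 'default')"); // default population
                st.close();
            }
            return con;
        }
    }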
    Thanks for your help.
    J.

    jduprez wrote:
    Hello, and thank you for replying.
    jschell wrote:
    For MS Access installs I used to (...)
    I'm not sure I understand (but then, I don't know Access at all). Do I read correctly that it was a matter of copying/renaming the DB's "data" file? Obviously it's very handy when the DBMS supports that.
    The database didn't support it. On the other hand, java.io.* does.
    jduprez wrote:
    The problem is that I'm not sure whether the data files can be moved without side effects (if they include cross-references as absolute paths, or host names, whatever). I'm not aware, for example, that Oracle or PostgreSQL can be moved that way; their respective manuals merely recommend a "portable dump" format (e.g. Oracle exp/imp).
    JavaDB is not either of those.
    jduprez wrote:
    The advantage is that it is easy to start over if one needs to by doing nothing more than deleting the real file.
    Yes, quite handy for remote support! :o) Similarly (but again, only if I understood correctly), replacing the data file would be enough to force a known "starting state" (very handy for a test platform, for example, or for a customer's staging area).
    Yes.
    jduprez wrote:
    An installer for the above process is only meaningful if it is going to do something dynamic with the database at install time. Like creating customer specific one time configuration records.
    Good point. But my organization has in-house standards for install scripts, and the integrators will frown at manual file deleting/renaming, so I will have an installer script anyway (the simpler, the better). I'm just verifying whether the existing standards are applicable and relevant for this app's architecture.
    Nothing manual about it - again, java.io.*.

  • What is the Best Practice for Variant List, Add, Edit and Display Forms?

    Requirement:
    I have a single list. The list has a large number of columns and a large number of items (let's say 20,000).
    I want to show users a different view of the list based on clicking a different left-hand navigation option.
    Let's say I have four types of users: Sales, Manufacturing, Shipping and Finance. I would like to have four options in the left-hand navigation.
    All of them would be pointing at the same list, BUT I want each of them to have a custom list form. The only differences between the custom list forms would be:
    Each would have its own set of views, and hence its own default view.
    Each would have its own New, Edit and Display Forms. The only difference between the forms in one variant list and another would be the order of the columns and which columns are modifiable.
    I would like to achieve this in SharePoint Designer in such a way that the "users" could still add/modify views and could even modify the forms from the SharePoint Menu.  BTW, I don't want to use InfoPath for obvious reasons.
    What is the best approach to meeting this requirement?  I have at least 20 sites and 70 lists overall that need variant forms made.
    HELP!!
    Savin
    BTW, we are using SharePoint 2013 and I selected the wrong forum *sigh*. But I think it's probably the same answer.
    Cheers, Savin Smith

    Hi,
    I understand that you want to have different forms based on different views.
    To my knowledge, there is no out-of-the-box method to achieve it.
    As a workaround, you can add JavaScript code to the different view pages.
    For example, to open a different new form based on the view, you can read window.location, determine the current view, and then change the onclick event of the "New item" button (see the sketch after the links below).
    For more information, you can refer to:
    http://css-tricks.com/snippets/javascript/get-url-and-url-parts-in-javascript/
    http://samsharepoint.wordpress.com/2013/05/01/change-the-default-sharepoint-ok-and-cancel-button/
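    A rough sketch of that idea (the view names, form URLs and element ID below are hypothetical and depend on your pages):

    // route the "new item" link to a view-specific form
    var formsByView = {
        "SalesView.aspx": "NewFormSales.aspx",
        "FinanceView.aspx": "NewFormFinance.aspx"
    };
    var page = window.location.pathname.split("/").pop();
    var target = formsByView[page];
    if (target) {
        var newItemLink = document.getElementById("idHomePageNewItem"); // ID varies by page
        if (newItemLink) {
            newItemLink.onclick = function () {
                window.location.href = target; // open the variant form instead
                return false;                  // suppress the default handler
            };
        }
    }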
    Thanks,
     Linda
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Linda Li
    TechNet Community Support

  • Best practice for calling application module methods and PL/SQL code

    In my application I am experiencing problems with connection pooling; I seem to be using a lot of connections when only a few users are using the system. As part of our application we need to call database procedures for business logic.
    Our backing beans, call methods on the application module which in turn call a database procedure. For instance in the backing bean we have code like this to call the application module method.
    // Calling Module to generate new examination/test.
    CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
    String testId = appMod.createTest( userId, examId, centerId).toString();
    AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
    // Close the call
    System.out.println("Calling releaseRootApplicationModule remove");
    Configuration.releaseRootApplicationModule(appMod, true);
    System.out.println("Completed releaseRootApplicationModule remove");
    return returnResult;
    In the application module method we have the following code.
    System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
    CallableStatement cs = null;
    try{
    cs = getDBTransaction().createCallableStatement("begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
    cs.registerOutParameter(1, Types.NUMERIC);
    cs.setString(2, p_userId);
    cs.setString(3, p_examId);
    cs.setString(4, p_centerId);
    cs.executeUpdate();
    returnResult=cs.getInt(1);
    System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
    }catch (SQLException se){
    throw new JboException(se);
    finally {
    if (cs != null) {
    try {
    cs.close();
    catch (SQLException s) {
    throw new JboException(s);
    I have read in one of Steve Muench's presentations (Oracle Fusion Applications Team's Best Practices) that calling the createRootApplicationModule method is a bad idea, and that the method should be called via the binding interface instead.
    I am assuming that calling createRootApplicationModule uses many more resources and database connections than calling the method through the binding interface, such as:
    BindingContainer bindings = getBindings();
    OperationBinding ob = bindings.getOperationBinding("customMethod");
    Object result = ob.execute();
    Is this the case? Also, is using getDBTransaction().createCallableStatement the best way of calling database procedures? Would it be better to expose the PL/SQL packages as web services and then call them from the application module? Is this more efficient?
    Regards
    Orlando

    Thanks Shay, this is now working.
    I successfully got the binding to the application method in the pagedef.
    I used the following code in my backing bean.
    package view.backing;

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class Testdatabase {
        private DCBindingContainer bindingContainer;
        public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }
        public DCBindingContainer getBindingContainer() { return bindingContainer; }

        // userId and examId are passed in as parameters here so the snippet compiles;
        // in the original post they came from the surrounding bean.
        public static String validateUser(String userId, String examId) {
            // Calling module to validate user and return user role details.
            System.out.println("Getting Binding Container from Home Backing Bean");
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            System.out.println("Obtain binding");
            OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
            System.out.println("Set username parameter");
            operationBinding.getParamsMap().put("p_userId", userId);
            System.out.println("Set password parameter");
            operationBinding.getParamsMap().put("p_testId", examId);
            Object result = operationBinding.execute();
            System.out.println("Obtain result");
            String userRole = result.toString();
            System.out.println("Result is " + userRole);
            return userRole;
        }
    }

  • Best practices for configuring a Rogue Detector AP and trunk port?

    I'm using a 2504 controller. I don't have WCS.
    My questions are about the best way to configure a Rogue Detector AP.
    In my lab environment I set up the WLC with 2 APs. One AP was in local mode, and I put the other in Rogue Detector mode.
    The Rogue Detector AP was connected to a trunk port on my switch.  But the AP needed to get its IP address from the DHCP server running on the WLC.  So I set the native vlan of the trunk port to be the vlan on which the WLC management interface resides.  If the trunk port was not configured with a native vlan, the AP couldn't get an address through DHCP, nor could the AP communicate with the WLC.  This makes sense because untagged traffic on the trunk port will be delivered to the native vlan.  So I take it that the AP doesn't know how to tag frames.
    Everything looked like it was working ok.
    So I connected an autonomous AP (to be used as the rogue), and associated a wireless client to it.  Sure enough it showed up on the WLC as a rogue AP, but it didn't say that it was connected on the wire.  From the rogue client I was able to successfully ping the management interface of the WLC.
    But the WLC never actually reported the rogue AP as being connected to the wired network.
    So my questions are:
    1. What is the correct configuration for the trunk port?  Should it not be configured with a native vlan?  If not, then I'm assuming the rogue detector AP will have to have a static IP address defined, and it would have to be told which vlan it's supposed to use to communicate with the WLC.
    2. Assuming there is a rogue client associated with the rogue AP, how long should it reasonably take before it is determined that the rogue AP is connected to the wired network? I know this depends on whether the rogue client is actually generating traffic, but in my lab environment I had the rogue client pinging the management interface of the WLC and it still wasn't being picked up as an on-the-wire rogue.
    Thanks for any input!!

    # What are the autonomous AP's (acting as the rogue AP) wired and wireless MAC addresses?
    There has to be a +1 or -1 difference: if the wired MAC is x.x.x.x.x.05, the wireless MAC should be x.x.x.x.x.04 or .06. The rogue is not going to be detected on the wire if the difference is more than +1 or -1.
    # Does the switch see the rogue AP's wired MAC in its MAC table?
    The rogue detector listens to ARPs to get all the wired MAC info and forwards it to the WLC, which compares it with the wireless MACs; if there is a +1 or -1 difference, the AP is flagged as a rogue on the wire, and the client connected to it is also marked as found on the wire.
    Regarding trunking: only the native VLAN matters per trunk link; just configure the right VLAN as native and we're done (see the sketch below). It is not mandatory to keep the rogue detector on the management VLAN of the WLC. It can also be on another L3 VLAN, as long as it can join the WLC to forward the learnt wired MACs.
    So if you don't have the +1/-1 difference on rogues, you have to use RLDP, which will work with your existing setup to find rogues on the wire; there is a performance hit when using this feature on local mode APs.
    Note: For AP join - an AP can't understand a trunk, meaning if an AP is connected to a trunk it will only talk on its native VLAN irrespective of AP mode; however, a rogue detector listens on the trunk port to learn MACs via ARPs from different VLANs and forwards them to the WLC using the native VLAN.
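    As a rough illustration of that trunk-port idea (the interface and VLAN numbers are hypothetical; the native VLAN is the one the detector uses to reach the WLC):

    interface GigabitEthernet0/10
     description Rogue Detector AP
     switchport mode trunk
     switchport trunk native vlan 10
     switchport trunk allowed vlan 10,20,30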

  • Best Practices for vMotion QoS in N1kv and UCS?

    Hi,
    I'm looking at a few technical documents on the recommended way to provide QoS for vMotion.
    From VMWare's website on vSphere deployment with N1kv,
    They are using
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform transmit violate drop
    This rate-limits vMotion traffic to 3Gbps and excess traffic will be dropped.
    Would this be better if I were to use:
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform set-cos-transmit 4 exceed drop
    policy-map type vethernet vMotion
         switchport access vlan 900
         service-policy type qos in vmotion
         pinning id 0
    I'm marking vMotion traffic with a CoS of 4 and pinning it to Fabric-A. I will have my management VLAN pinned to Fabric-B.
    Also, do I need to configure QoS settings in UCS as well?
    For the upstream switch, if I'm using a Catalyst 3750, would it be sufficient just to do a mls qos trust cos?
    Appreciate your advice.
    Thanks..

    Steven,
    Yes, you can use the modified QoS policy which changes the CoS values.
    We also need some configuration in the Fabric Interconnect so that the CoS values set by the Nexus 1000v are kept as they are.
    The M81KR adapter works in a "no trust" QoS model, which means that it will overwrite the CoS value set by an upstream entity (the Nexus 1000v, for example). For Nexus 1000v deployments, it is highly recommended to do the CoS marking at the Nexus 1000v level. This means changing the QoS model to "trust" on the M81KR.
    To create a QoS policy to achieve this, look into the attached file. We need the "Full" option enabled in it.
    Here are further details about this configuration from UCS Manager help:
    Host Control field
    Whether Cisco UCS controls the class of service (CoS). This can be:
    None—Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list regardless of the CoS value assigned by the host.
    Full—If the packet has a valid CoS value assigned by the host, Cisco UCS uses that value. Otherwise, Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list.
    Regards
    Nethaji V

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out the ZFS file systems to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updating and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is to not have to worry about any of this and just install the application software where ever the software supplier sets it defaults to. It would be great if this system somehow magically knows to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot environment
    a.- In a new rpool
    b.- In the same rpool
    02.- Upgrade this new environment
    03.- Then, I've seen that you have the "radius-zone" in a sparse zone (is that right?), so when you upgrade the alternate boot environment you will (at the same time) upgrade the "radius-zone".
    This may sound easy, but you should be careful; please try it in a development environment first (see the sketch below).
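    A minimal sketch of that flow with the Live Upgrade commands (the BE name and install-image path are hypothetical):

    lucreate -n newBE                      # 01: create the alternate boot environment
    luupgrade -u -n newBE -s /mnt/solaris  # 02: upgrade the inactive BE from an install image
    luactivate newBE                       # make it the BE for the next boot
    init 6                                 # reboot into the upgraded environment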
    Good luck
