Best practice for version control B2B, ESB and BPEL

Hello,
we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
We are using Serena Dimensions 9.1 for version control with the add-on in JDeveloper.
Thanks in advance!

I believe JDeveloper has a plugin for Dimensions.
I haven't used it, but to get it go to Tools > Check for Updates (it may be under Help; I don't have JDeveloper on this machine to confirm).
If you select the third-party checkbox and click Next, you will see an entry for Dimensions.
Configure the connection and develop as you would any other project.
cheers
James

Similar Messages

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, if they are away from the office or make a big oops mistake, it doesn't ever hit the server, and 2) you can lock a file to avoid conflicts, and 3) if you don't lock the file and a conflict (two simultaneous edits) occur, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up with critical files that should have been shared sitting only on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync, Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and that others shouldn't be modifying it? Is there a way to use Unix (user-group-other) permissions, e.g. "chmod a-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]

  • Best Practice for External Libraries, Shared Libraries and Web Dynpro

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested merely in something that works; I want to know what the best practice is. What is the best way to limit the number of JARs that need to be kept in a shared library/external library? When is a shared reference/service a valid approach vs. hunting down the JARs in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than 100% secure, "We have unknown unknowns". The component needs to talk to SQL Server. You could continue to use http to talk to SQL Server, perhaps even get SOAP Transactions working but personally
    I'd have more worries about using such a 'less trodden' path since that is exactly the areas where more security problems are discovered. I don't know about your specific design issues so there might be even more ways to mitigate the risk but in general you're
    using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for securing each of the lower-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are in one application server you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Looking For Guidance: Best Practices for Source Control of Database Assets

    Database Version: 11.2.0.3
    OS: RHEL 6.2
    Source Control: subversion
    This is a general question aimed at database professionals; it is not specific to any Oracle version, etc. It's a leadership question for other Oracle shops regarding source control.
    The current trunk, in my client's source control, is the implementation of a previous employee who used ER Studio. After walking the batch scripts and subordinate files, it was determined that there would be no formal or elegant way to recreate the current version of the database from our source control - the engineers who contributed to these assets are no longer employed or available for consulting. The batch scripts are stale, if you will.
    To clean this up and to leverage best practices, I need some guidance on whether or not to baseline the current repository and how to move forward with additions of assets; tables, procs, pkgs, etc.  I'm really interested in how larger oracle shops organize their repository - what directories do you use, how are they labeled...are they labeled with respect to version?
    Assumptions:
    1. repository (database assets only) needs to be baselined (?)
    2. I have approval to change this database directory under the trunk to support best practices and get the client steered straight in terms of recovery and
    Knowns:
    1. the current application version in the database is 5.11.0 (that's my client's application version)
    2. this is for one schema/user of a database (other schemas under the database belong to different trunks)
    This is the layout that we currently have; for the privacy of the client I've made it rather generic. I'd love to have a fresh start... how do I go about doing that? Initially, I like using SQL Developer's ability to create SQL scripts from a connected target.
    product_name
      |_trunk
         |_database
           |_config
           |_data
           |_database
           |_integration
           |_patch
           |   |_5.2A.2
           |   |_5.2A.4
           |   |_5.3.0
           |   |_5.3.1
           |
           |_scripts
           |   |_config
           |   |_logs
           |
           |_server
    Thank you in advance.

    Hi, we are using Data ONTAP 8.2.3p3 on our FAS8020 in 7-mode, and we have two aggregates, a SATA and a SAS aggregate. I want to decommission the SATA aggregate, as I want to move that tray to another site. If I have a FlexVol containing 3 qtree CIFS shares, can I use Data Motion (vol copy) to move the FlexVol to a different aggregate on the same controller without major downtime? I know this article is old, and it says here that CIFS is not supported; however, I am reading mixed messages that the version of Data ONTAP we are now on does support CIFS with Data Motion, though there will be a small downtime while the CIFS share terminates. Is this correct? Thanks

  • Best Practices for sharing media with iMovie and FCPX

    So I've a large iMovie Events directory, and would like to use that media with both iMovie and FCPX projects.
    I'd rather not duplicate the media, so would prefer to import as references into FCPX.
    The dilemma is that I see that it's possible to modify or move media from within the iMovie application, and therefore break the reference to that media with FCPX.
    I only see two options:  (1) Never Ever modify the location/name of media in the iMovie Events file (even from within the iMovie app) since I would break an FCPX link if that media is referenced, or (2) always import (copy) the iMovie events into the FCPX Event Library making an independent original so that I can confidently operate on those media files in either application.
    I'd surely rather not have to do (2) (e.g. doubling my storage demands) to gain the flexibility of using either application to edit the video, but I really don't want to live with the restrictions of (1).
    Thoughts / Solutions?  What might you consider as options or best practices?

    Unless there is some other reason, users should own the right to share their mailboxes - it shouldn't be something that demands administrator management (if only so that the administrators aren't swamped by user requests for sharing their mailboxes). 
    For true shared mailboxes, when the mailbox is created, full access is granted by an administrator.

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them in Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the Recordset? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display?" Yes.
    "...and use logic to avoid printing the redundant part of the data?" No.
    I believe in minimising the number of queries. If it is a simple one-to-many join on a db table, then one query is better than 1 + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records.
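    For example, here is a rough, self-contained sketch of that one-query, nested-bean approach using plain JDBC. The ORDERS/ORDER_ITEMS tables and all class/column names are made up for illustration; adapt them to your own schema:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Parent bean: one ORDERS row plus its related ORDER_ITEMS rows.
    class OrderBean {
        private int orderId;
        private String customer;
        private final List<OrderItemBean> items = new ArrayList<OrderItemBean>();
        public int getOrderId() { return orderId; }
        public void setOrderId(int orderId) { this.orderId = orderId; }
        public String getCustomer() { return customer; }
        public void setCustomer(String customer) { this.customer = customer; }
        public List<OrderItemBean> getItems() { return items; }
    }

    // Child bean: one ORDER_ITEMS row.
    class OrderItemBean {
        private int itemId;
        private String description;
        public int getItemId() { return itemId; }
        public void setItemId(int itemId) { this.itemId = itemId; }
        public String getDescription() { return description; }
        public void setDescription(String description) { this.description = description; }
    }

    class OrderDao {
        // One joined query; a new parent bean is started only when the order id changes,
        // so the JSP never has to filter out the repeated parent columns itself.
        public List<OrderBean> findOrders(Connection con) throws SQLException {
            String sql = "SELECT o.order_id, o.customer, i.item_id, i.description "
                       + "FROM orders o LEFT JOIN order_items i ON i.order_id = o.order_id "
                       + "ORDER BY o.order_id";
            List<OrderBean> orders = new ArrayList<OrderBean>();
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ResultSet rs = ps.executeQuery();
                OrderBean current = null;
                while (rs.next()) {
                    int id = rs.getInt("order_id");
                    if (current == null || current.getOrderId() != id) {
                        current = new OrderBean();
                        current.setOrderId(id);
                        current.setCustomer(rs.getString("customer"));
                        orders.add(current);
                    }
                    int itemId = rs.getInt("item_id");
                    if (!rs.wasNull()) { // LEFT JOIN: an order may have no items
                        OrderItemBean item = new OrderItemBean();
                        item.setItemId(itemId);
                        item.setDescription(rs.getString("description"));
                        current.getItems().add(item);
                    }
                }
            } finally {
                ps.close();
            }
            return orders;
        }
    }
    The servlet can then put the returned List in request scope (rather than the session) and the JSP can nest two loops - for example JSTL <c:forEach> over the orders and, inside it, over getItems() - with no redundancy-suppression logic needed.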
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    "The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before." Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards the work they have done is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best Practice for Installation of Both Leopard and Aperture 2 upgrade.

    I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice to install both, but haven't been able to find one -- only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove to a prior thread would be helpful...
    Thanks for pointing me in the right direction.
    Steve

    steve hutchcraft wrote:
    I've tried searching for a best practice to install...
    • First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
    • Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.) go to Utilities/Disk Utility/First Aid and Repair Permissions. Repairing Permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
    • When you upgrade the OS do a "clean install."
    • RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
    • After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
    • If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
    Good luck!
    -Allen Wicks

  • Best practices for using Normalizer in ASA and in AIP-SSM

    Both PIX OS 7.x and IPS 5.x software have a concept of "traffic normalization". PIX OS on ASA can do virtual reassembly, IPS on SSM (so far as I know) can do physical reassembly and fragmentation of IP packets. Also, both ASA and SSM can do TCP normalization. For example, they both can "check inconsistent retransmissions" and protect against "TTL evasion attacks". I realize that PIX OS has only basic normalization functions and the SSM is much more configurable.
    The question is: what are the best practices here? Is it better to disable some IP/TCP PIX OS checks / IPS signatures on the ASA and/or SSM? Is it better to use just the SSM for traffic normalization? Does anybody have personal experience here?
    Also, there is a BugID CSCsd04327 - "ASA all out of order packets are dropped when sending to ssm"
    "When ips ssm is inline slowness is reported. show service-policy shows that the number of out of order packets reported match exactly the number of no buffer drops (even with queue-limit option). Performance hit is not the result of tcp normalization (on IPS 5.x ssm) in this case, but rather an issue with asa normalizer."
    To me it seems to be more logical to have normalization function on the firewall, but there may be drawbacks in doing this.
    So, those who're using ASA with SSM, please share your experience.
    Thx.

    Yes, this is almost correct ;)
    TCP SRP (Stream Reassemly Processor) is turned OFF on the SSM and cannot be enabled, contrary to 4200 appliances, but IP FRP (Fragmentation Reassembly Processor) is functioning on the SSM.
    The testing of 7.2(1) shows the following:
    When you configure "policy-map" to send packets to the SSM, the "tcp-map" parameter "queue-limit", which has the value of zero by default, is set to an X (the X is unknown). This means that the ASA now only accepts the TCP segments which are sent in the correct order. More specifically, gaps in SEQs are not allowed anymore. When, for example, the ASA receives a TCP segment which has a SEQ within the window, but the previous TCP segment has been lost, it sends an ACK to the sender to enforce retransmission of the lost segment. As a result the sender retransmits both segments. Only after that does the ASA forward both segments to the SSM. This basically means that the SSM always sees in-order TCP segments. That is why SRP is not needed on the SSM.
    There are at least two problems however.
    The first problem is the performance impact.
    The ASA now acts almost like a proxy. And, so far as I know, it doesn't support SACK (Selective ACKs). First, when the ASA does TCP SEQ randomization it doesn't change the SEQ values within the SACK TCP option. This simply breaks SACK. Second, even if you turn the randomization mechanism OFF, then, I believe, the ASA will not selectively ACK the lost TCP segments, as it simply doesn't support this mechanism.
    The second problem is THE SECURITY HOLE.
    By default the ASA doesn't check TCP checksums. The 4200 appliances do check by default. But as we now know the SRP is turned OFF on the SSM... So, this means that SSM module can easily be evaded. The hacker only needs to mix attacking traffic with the random TCP segments that have bad TCP checksum. The SSM module will see the mixture of the two and will not recognize the attack. The target host will drop TCP segments with the bad checksums and see only attacking traffic... This has been successfully verified in the lab.
    Of course, this security hole can be closed with the "tcp-map" parameter "checksum-verification", but that will definitely have a performance impact.
    One last note: none of the above has ever been documented by Cisco. So, use at your own risk, etc.
    I hope you will read this message, Marcoa. All of this MUST be documented. Once again, the default behaviour of the ASA opens up a big security hole.
    Regards,
    Oleg Tipisov,
    REDCENTER,
    Moscow

  • Best Practice for Host Named Site Collections and Web Apps

    Looking for advice on setting up the host named site collections.  If I am reading many of the technet articles and blogs correctly I should 1) have only 1 top level web app for host named site collections and 2) not have a host header for that web
    app.  If that's correct I am looking for advice.  We have 7 separate domains that we support in our farm.  Currently each of those domains is divided into web applications based on the domain,  *.contoso, *.trains.com, *.bakers.com, etc.
      Is the concept now that all of the host named site collections fall under that one web app?  How do we deal with the SSL for each of those separate domains which all have their own certificates? 
    Thanks in advance for your comments. 
    NLewis

    Yes, for creating host named site collections, you first create a host-header-less web app and then create the host named site collections under that web app. However, this only covers the case where all the host named site collections end in one domain. So you can create host named site collections such as intranet.contoso.com, my.contoso.com, portal.contoso.com, etc., as they all end in *.contoso.com.
    As for your environment, if you have web apps which cater to different domains like *.contoso.com, *.trains.com, *.bakers.com, you need to create separate web apps as they end in different domains. Then you can have a separate wildcard SSL certificate for each of those web apps.
    Hope this helps.
    Thanks
    Mohit

  • What is the Best practice for Variant List, Add, Edit and Display Forms?

    Requirement:
    I have a single list. The list has a large number of columns and a large number of items (let's say 20,000).
    I want to show users a different view of the list based on clicking on a different left-hand navigation option.
    Lets say I have four types of users:  Sales, Manufacturing, Shipping and Finance. I would like to have four options in the left-hand navigation.
    All of them would be pointing at the same list, BUT, I want each of them to have a custom list form.  The only difference between the custom list forms would be:
    Each would have its own set of views, and hence its own default view.
    Each would have its own New, Edit and Display Forms.  The only difference between the forms in one variant list and another would be: The order of the columns and which columns are modifiable.
    I would like to achieve this in SharePoint Designer in such a way that the "users" could still add/modify views and could even modify the forms from the SharePoint Menu.  BTW, I don't want to use InfoPath for obvious reasons.
    What is the best approach to meeting this requirement?  I have at least 20 sites and 70 lists overall that need variant forms made.
    HELP!!
    Savin
    BTW, we are using SharePoint 2013 and I selected the wrong forum *sigh*. But I think it's probably the same answer.
    Cheers, Savin Smith

    Hi,
    I understand that you want to have different forms based on different views.
    To my knowledge, there is no out-of-the-box method to achieve this.
    As a workaround, you can add JavaScript code to the different view pages.
    For example, to open a different new form based on the current view, you can get window.location, determine the view from it, and then change the onclick event of the "New item" button.
    For more information, you can refer to:
    http://css-tricks.com/snippets/javascript/get-url-and-url-parts-in-javascript/
    http://samsharepoint.wordpress.com/2013/05/01/change-the-default-sharepoint-ok-and-cancel-button/
    Thanks,
     Linda
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Linda Li
    TechNet Community Support

  • Best practice for calling application module methods and plsql code

    In my application I am experiencing problems with connection pooling, I seem to be using a lot of connections in my application when only a few users are using the system. As part of our application we need to call database procedures for business logic.
    Our backing beans, call methods on the application module which in turn call a database procedure. For instance in the backing bean we have code like this to call the application module method.
    // Calling Module to generate new examination/test.
    CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
    String testId = appMod.createTest( userId, examId, centerId).toString();
    AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
    // Close the call
    System.out.println("Calling releaseRootApplicationModule remove");
    Configuration.releaseRootApplicationModule(appMod, true);
    System.out.println("Completed releaseRootApplicationModule remove");
    return returnResult;
    In the application module method we have the following code.
    System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
    CallableStatement cs = null;
    try{
    cs = getDBTransaction().createCallableStatement("begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
    cs.registerOutParameter(1, Types.NUMERIC);
    cs.setString(2, p_userId);
    cs.setString(3, p_examId);
    cs.setString(4, p_centerId);
    cs.executeUpdate();
    returnResult=cs.getInt(1);
    System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
    }catch (SQLException se){
    throw new JboException(se);
    finally {
    if (cs != null) {
    try {
    cs.close();
    catch (SQLException s) {
    throw new JboException(s);
    I have read in one of Steve Muench's presentations (Oracle Fusion Applications Team's Best Practices) that calling the createRootApplicationModule method is a bad idea, and that you should call the method via the binding interface instead.
    I am assuming that calling createRootApplicationModule uses many more resources and database connections than calling the method through the binding interface, such as:
    BindingContainer bindings = getBindings();
    OperationBinding ob = bindings.getOperationBinding("customMethod");
    Object result = ob.execute();
    Is this the case? Also, is using getDBTransaction().createCallableStatement the best way of calling database procedures? Would it be better to expose PL/SQL packages as web services and then call them from the application module? Is this more efficient?
    Regards
    Orlando

    Thanks Shay, this is now working.
    I successfully got the binding to the application method in the pagedef.
    I used the following code in my backing bean.
    package view.backing;

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class Testdatabase {
        private DCBindingContainer bindingContainer;
        public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }
        public DCBindingContainer getBindingContainer() { return bindingContainer; }

        public static String validateUser() {
            // Calling Module to validate user and return user role details.
            // userId and examId come from elsewhere in the bean (omitted in the original post).
            System.out.println("Getting Binding Container from Home Backing Bean");
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            System.out.println("Obtain binding");
            OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
            System.out.println("Set username parameter");
            operationBinding.getParamsMap().put("p_userId", userId);
            System.out.println("Set password parameter");
            operationBinding.getParamsMap().put("p_testId", examId);
            Object result = operationBinding.execute();
            System.out.println("Obtain result");
            String userRole = result.toString();
            System.out.println("Result is " + userRole);
            return userRole;
        }
    }

  • Best practices for configure Rogue Detector AP and trunk port?

    I'm using a 2504 controller.  I don't have WCS.
    My questions are about the best way to configure a Rogue Detector AP.
    In my lab environment I setup the WLC with 2 APs.  One AP was in local mode, and I put the other in Rogue Detector mode.
    The Rogue Detector AP was connected to a trunk port on my switch.  But the AP needed to get its IP address from the DHCP server running on the WLC.  So I set the native vlan of the trunk port to be the vlan on which the WLC management interface resides.  If the trunk port was not configured with a native vlan, the AP couldn't get an address through DHCP, nor could the AP communicate with the WLC.  This makes sense because untagged traffic on the trunk port will be delivered to the native vlan.  So I take it that the AP doesn't know how to tag frames.
    Everything looked like it was working ok.
    So I connected an autonomous AP (to be used as the rogue), and associated a wireless client to it.  Sure enough it showed up on the WLC as a rogue AP, but it didn't say that it was connected on the wire.  From the rogue client I was able to successfully ping the management interface of the WLC.
    But the WLC never actually reported the rogue AP as being connected to the wired network.
    So my questions are:
    1. What is the correct configuration for the trunk port?  Should it not be configured with a native vlan?  If not, then I'm assuming the rogue detector AP will have to have a static IP address defined, and it would have to be told which vlan it's supposed to use to communicate with the WLC.
    2.  Assuming there is a rogue client associated with the rogue AP, how long should it reasonably take before it is determined that the rogue AP is connected to the wired network?  I know this depends on if the rogue client is actually generating traffic, but in my lab environment I had the rogue client pinging the management interface of the WLC and still wasn't being picked up as an on-the-wire rogue.
    Thanks for any input!!

    # What are the autonomous AP's (the rogue AP's) wired and wireless MAC addresses?
    They have to differ by +1 or -1. If the wired MAC is x.x.x.x.x.05, the wireless MAC should be x.x.x.x.x.04 or .06. It is not going to be detected if the difference is more than +1 or -1.
    # Does the switch see the rogue AP's wired MAC in its MAC table?
    The rogue detector listens to ARPs to learn all the wired MAC info and forwards it to the WLC, which compares it with the wireless MACs; if there is a +1 or -1 difference then the AP is flagged as a rogue on the wire. And the client connected to it is also marked as found on the wire.
    Regarding trunking, only the native VLAN matters per trunk link; just configure the right VLAN as native and we're done.
    It is not mandatory to keep the rogue detector on the management VLAN of the WLC. It can also be on another L3 VLAN, as long as it can join the WLC to forward the learnt wired MACs.
    So if you don't have a +1/-1 difference on rogues then you have to use RLDP, which will work with your existing setup to find rogues on the wire. There's a performance hit when you use this feature on local-mode APs.
    Note: For AP join - an AP can't understand a trunk, meaning if the AP is connected to a trunk it will only talk on its native VLAN irrespective of AP mode; however, a rogue detector listens on the trunk port to learn MACs via ARPs from the different VLANs and forwards them to the WLC over the native VLAN.
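    To make the +1/-1 rule above concrete, here is a tiny, purely illustrative Java sketch of the heuristic the reply describes (this is not WLC code, and the MAC values are made up):
    public class MacDelta {
        // Parse a MAC written as aa:bb:cc:dd:ee:ff or aabb.ccdd.eeff into a 48-bit value.
        static long toLong(String mac) {
            return Long.parseLong(mac.replace(":", "").replace(".", ""), 16);
        }
        // Rogue-on-wire heuristic: wired MAC (learned via ARP) and wireless BSSID differ by exactly 1.
        static boolean plusMinusOne(String wiredMac, String wirelessMac) {
            return Math.abs(toLong(wiredMac) - toLong(wirelessMac)) == 1;
        }
        public static void main(String[] args) {
            System.out.println(plusMinusOne("00:1a:2b:3c:4d:05", "00:1a:2b:3c:4d:04")); // true
            System.out.println(plusMinusOne("00:1a:2b:3c:4d:05", "00:1a:2b:3c:4d:09")); // false -> would need RLDP
        }
    }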

  • Best Practices for vMotion QoS in N1kv and UCS?

    Hi,
    I'm looking at a few technical documentation on what's the recommended way to provide QoS to vMotion.
    From VMWare's website on vSphere deployment with N1kv,
    They are using
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform transmit violate drop
    This rate-limits vMotion traffic to 3Gbps and excess traffic will be dropped.
    Would this be better if I were to use:
    policy-map type qos vmotion
         class class-default
              police cir percent 30 bc 200 ms conform set-cos-transmit 4 exceed drop
     port-profile type vethernet vMotion
          switchport access vlan 900
          service-policy type qos input vmotion
          pinning id 0
    I'm marking vmotion traffic with a CoS of 4 and pin it to Fabric-A. I will have my management VLAN pinned to Fabric-B.
    Also, do I need to configure QoS settings in UCS as well?
    For the upstream switch, if I'm using a Catalyst 3750, would it be sufficient just to do a mls qos trust cos?
    Appreciate your advice.
    Thanks..

    Steven,
    Yes, you can use the modified QoS policy which changes the CoS values.
    We need to do some more configuration in the Fabric Interconnect so that the CoS values set by the Nexus 1000v are kept as they are.
    The M81KR adapter works in a “no trust” QoS model which means that it will overwrite the CoS value set by an upstream entity (Nexus 1000v for example). For Nexus1000v deployments, it is highly recommended to do the CoS marking at the Nexus 1000v level. This means changing the QoS model to “trust” on the M81KR.
    To create a QoS policy to achieve this, look into the attached file. We need to have option "Full" enabled in it.
    Here are further details about this configuration from UCS Manager help:
    Host Control field
    Whether Cisco UCS controls the class of service (CoS). This can be:
    None—Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list regardless of the CoS value assigned by the host.
    Full—If the packet has a valid CoS value assigned by the host, Cisco UCS uses that value. Otherwise, Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list.
    Regards
    Nethaji V

  • [SOLVED] Best Practice for systemd, udisks, udev, polkit,and openbox

    I have to admit I am stymied here.  I have been in the wiki and forums most of the day and am just not finding the solution here.
    I have both udisks and udisks2 installed
    I am running a pure systemd
    I do not use a login manager
    I use startx that has a switch clause based on an environmental variable to pick the environment I want.
    For now, let's limit this to xfce4 and openbox.
    I used to be able to log in and mount usb and SD volumes using Thunar, Dolphin, and palimpsest.  I can no longer do that; I receive an error indicating I've not the proper privileges.
    Even more fun, I log into a console and use udiskie, and can mount volumes without issue.
    I start X with startx, launch a terminal program, and udiskie will no longer allow me to mount.  I can read volumes mounted by udiskie before I started X.
    I have tried various combinations of ck-launch-session and dbus-launch in concert with openbox-session and startxfce4.  Nothing allows me to automount as a normal user.  This used to work, but something has gone south, and I am not clear when.
    Anyone have suggestions for a stable solution here?
    A couple files as they exist right now:
    ewaller@odin:~ 1005 %cat .xinitrc
    #xset b off
    setxkbmap -option ctrl:nocaps
    case $WM in
    openbox)
    exec ck-launch-session openbox-session
    ;;
    e17)
    dbus-launch
    enlightenment_start
    ;;
    vb)
    VirtualBox -startvm "Windows XP" -fullscreen
    ;;
    xfce4)
    exec ck-launch-session startxfce4
    ;;
    esac
    ewaller@odin:~ 1006 %cat /etc/polkit-1/localauthority/50-local.d/10-udisks.pkla
    [Local Users udisk2]
    Identity=unix-group:users
    Action=org.freedesktop.udisks2.*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes
    [Local Users udisk]
    Identity=unix-group:users
    Action=org.freedesktop.udisks.*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes
    ewaller@odin:~ 1007 %
    ewaller@odin:~[1] 1007 %
    Last edited by ewaller (2012-10-16 03:28:49)

    Are you starting X onto the same vt you logged into, i.e. startx -- vt1 for tty1, or startx -- vt$(fgconsole) (or falconindy's xserverrc) for a more general case?   If you don't, systemd-logind cannot keep track of the session and the -kit permissions will thus break.
    You can see sessions using loginctl:
    └$ loginctl
    SESSION UID USER SEAT
    1 1000 zekesulastin seat0
    4 1000 zekesulastin seat0
    2 sessions listed.
    You can see information about your sessions using loginctl show-session $num - the session must be Active:
    └$ loginctl show-session 4
    Id=4
    Timestamp=Sun, 14 Oct 2012 23:18:56 -0400
    TimestampMonotonic=49646619798
    DefaultControlGroup=name=systemd:/user/zekesulastin/4
    VTNr=1
    TTY=tty1
    Remote=no
    Service=login
    Leader=12771
    Audit=4
    Type=tty
    Class=user
    Active=yes <-- This must be "yes"
    State=active
    KillProcesses=no
    IdleHint=yes
    IdleSinceHint=1350271136064237
    IdleSinceHintMonotonic=49646611251
    Name=zekesulastin
    (I personally have a conditional in bash_profile to exec startx -- vt1 if I log in on tty1.)
    Last edited by ZekeSulastin (2012-10-15 04:37:59)
