Best practice for Javadoc comments in both an interface and implementation

For the Spring project I'm writing, I have interfaces that each typically have one implementation class, for various good reasons. So I'm trying to be very thorough with my Javadoc comments, but I realize I'm basically copying them over from the interface to the implementation class. Shouldn't Javadoc have a setting, or be smart enough, to grab the comments from my interface if the implementation class doesn't have a comment? Unless of course there is interesting information about the implementation, it's rare that the two sets of comments would be different.

Thanks Thomas! Exactly what I was looking for. Here's the Java SE 6 version of the page you provided:
http://java.sun.com/javase/6/docs/technotes/tools/windows/javadoc.html#inheritingcomments
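
For later readers, a minimal sketch of the comment inheritance that page describes (the repository and customer names below are made up for illustration): when the implementing method has no doc comment at all, the standard doclet copies the interface comment automatically; when you do write one, {@inheritDoc} pulls the interface text in so you only add the implementation-specific notes.

/** Value object used by the sketch below. */
class Customer {
    final String id;
    Customer(String id) { this.id = id; }
}

/** Repository for looking up customers. */
interface CustomerRepository {

    /**
     * Finds a customer by its unique id.
     *
     * @param id the customer id, must not be null
     * @return the matching customer, or null if none exists
     */
    Customer findById(String id);

    /**
     * Deletes a customer.
     *
     * @param id the customer id, must not be null
     */
    void delete(String id);
}

/** Hypothetical default implementation. */
class SimpleCustomerRepository implements CustomerRepository {

    // No comment here at all: the standard doclet copies the interface
    // comment for this method into the generated docs automatically.
    @Override
    public Customer findById(String id) {
        return new Customer(id);
    }

    /**
     * {@inheritDoc}
     *
     * This implementation only marks the customer as inactive rather than
     * physically removing the record.
     */
    @Override
    public void delete(String id) {
        // implementation-specific behaviour documented above
    }
}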

Similar Messages

  • Best Practice for Use of ABAP in Customizing SRM and/or CRM

    I was wondering if there is a document that defines best practices for the use of ABAP in the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi Johnson,
    Sorry, please don't take this the wrong way, but this is not the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!
    Thanks and Regards,
    Faisal

  • Best Practice for module components based on API's

    Hi all,
    Are there white papers or other documents which outline best practices for using table/module component APIs, and the best approaches to get around their restrictions?
    I would also appreciate war stories about using this method of Forms development, bearing in mind I would be using it as the foundation of a web-deployed application (intranet first, ultimately internet).
    Thanks
    Mark

    You cannot add agents to skills dynamically; however, you can queue the caller into more than one CSQ based on statistics. You would use the Get Reporting Statistics and an If step within the Queued branch of your first Select Resource step. If the condition is met (e.g. Contacts Waiting >= 10) then execute a second Select Resource step within the queued branch of the first. The contact would then be waiting in both CSQs waiting for an agent.

  • Best Practices for Team Collaboration using Adobe Captivate

    With a team of 6 Instructional Designers, how can we approach Adobe Captivate so that we can collaborate on producing e-learning material while maintaining a consistent look and feel across the e-learning we produce?
    What are the best practices for a team of 6 IDs working on and creating e-learning material in Captivate? Is there anything built-in that allows us to connect to the same libraries, templates, etc. to share?
    Please advise.
    Thank you!
    Susanne

    Just some tips; I've never collaborated with anyone else, being the solo teacher. You didn't mention which version you are using; what I write here is meant for CP7.
    Be sure to prepare a theme and/or a template that will be used by everyone. A theme consists of master slides, object styles and the skin editor. Master slides can have custom navigation shape buttons. In a template you can optionally also prepare different slides with placeholders, and even advanced actions etc. For CP6 and earlier that is the only way to reuse advanced actions; in Captivate 7 you can export shared actions that can be imported into any project for reuse.
    A feature that few users know about is external libraries. You can open the library of any project as an external library in another project. That is a good way to store assets that you want to use in different projects: images, audio clips, video clips, even equations. The shared actions in a library cannot (yet?) be used in another project, however.
    If you are on CP7 you automatically have round-tripping with source Adobe Photoshop files and source Audition files, both from CC. That can also make collaboration a lot easier if those assets are prepared in those applications. I will not expand on that, because I'm not sure you have the Creative Cloud applications.
    Those are my two cents.
    Lilybiri

  • Best practice for reinstalling anti virus on reinstalling windows

    What is the best practice for reinstalling antivirus software after formatting the drive and reinstalling Windows, when there is no antivirus disc?
    Hasty

    Hello,
    I'd ask in the Windows forum on Microsoft Community.
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: http://unlockpowershell.wordpress.com
    My Book: Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C40686F746D61696C2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at least once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom
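    As an aside, the durable-subscription mechanics themselves (separate from the clustering and migration question above) look roughly like this with the plain javax.jms API; the JNDI names, client id and subscription name below are invented for illustration, and an MDB would typically declare the equivalent settings in its deployment descriptors instead:
    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.Session;
    import javax.jms.Topic;
    import javax.jms.TopicConnectionFactory;
    import javax.jms.TopicSubscriber;
    import javax.naming.InitialContext;

    // Minimal sketch of a durable topic subscription; not tied to any
    // particular WebLogic release.
    public class BusinessEventSubscriber {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory cf =
                    (TopicConnectionFactory) ctx.lookup("jms/OrchestratorCF");
            Topic topic = (Topic) ctx.lookup("jms/BusinessEvents");

            Connection con = cf.createConnection();
            // The client id plus the subscription name identify the durable
            // subscription, so events published while this subscriber is down
            // are retained and delivered on reconnect.
            con.setClientID("clusterB-subscriber");
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicSubscriber sub =
                    session.createDurableSubscriber(topic, "clusterB-businessEvents");

            con.start();
            Message event = sub.receive(); // blocks until a business event arrives
            System.out.println("received " + event);
            con.close();
        }
    }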

  • What is the Best practice for ceramic industry?

    Dear All,
    I would like to ask two questions:
    1- Which manufacturing category (process or discrete) fits the ceramic industry?
    2- What is the best practice for the ceramic industry?
    Please note, from the link below
    [https://websmp103.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000409682008E ]
    I recognized that the ceramic industry falls under a category called building materials, which in turn falls under mill products and mining, but there are no best practices for building materials or even mill products; only fabricated metal and mining best practices are available.
    Thanks in advance

    Hi,
    I understand that you refer to the production of ceramic tiles. The solution for PP was process manufacturing, with these steps: raw materials preparation (glazes and frits), dry pressing (I am not familiar with the extrusion process), glazing, firing (single fire), sorting and packing. In Spain these are usually all-in-one solutions (R/3 or ECC solutions). Perhaps the production of decors has fast firing and additional processes.
    In my opinion, the interesting part is batch determination in SD, which you must determine in the sales order because builders want the order to be homogeneous in tone and caliber, and they may split the order into different deliveries. You must think of the batch as tone (different colours from firing and so on) and caliber.
    I hope this helps you
    Regards,
    Eduardo

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like
    catch(Exception e){
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
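    A side note on the last question: com.tangosol.util.Base.ensureRuntimeException wraps the original Throwable in a Coherence runtime exception while preserving it as the cause, so it keeps the cause chain tidy, but it would not by itself stop the cache service from treating the failure as unhandled; the class missing from the storage node's classpath is still the real fix. A minimal sketch of how it might be used in a custom POF class (the names here are made up, this is not the actual EdmundsEqualsFilter):
    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import com.tangosol.util.Base;

    // Illustrative POF-serializable state object.
    public class ExampleFilterState implements PortableObject {
        private Object m_value;

        public void readExternal(PofReader in) throws IOException {
            try {
                // Reading an application object; a class missing from the
                // storage node's classpath surfaces here as ClassNotFoundException.
                m_value = in.readObject(0);
            } catch (Exception e) {
                // Preserves the original exception as the cause instead of
                // hiding it behind a bare new RuntimeException(e).
                throw Base.ensureRuntimeException(e, "failed to read filter state");
            }
        }

        public void writeExternal(PofWriter out) throws IOException {
            out.writeObject(0, m_value);
        }
    }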

  • Best Practices for E-Commerce

    Hi Experts,
    I was wondering if anybody has experience installing and configuring SAP Best Practices for E-Commerce. I have downloaded and installed the add-ons, but when trying to configure IPC and the webshop via XCM at http://<server>:<port>/isauseradm/admin/xcm/init.do I am getting a 404 error "URL or resource not found".
    Does anyone know which component I am missing and can point me in the right direction?
    Thanks in advance!
    Wolfgang

    I am not sure I understand your question.
    If you are using E-Commerce with CRM 2007 (CRM 6), not ERP, then use the [Best Practices Building Blocks|http://help.sap.com/bp%5Fcrmv12007/CRM%5FDE/html/] -> Technical Information -> Building Block Library.
    Note, however, that there still is no Best Practice specifically for E-Commerce with CRM.

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling the storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but since we have a template for the vbscript wrapper and it's a standard process, I believe our way is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, task sequence, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries, it is possible to sync with the same logical library from multiple computers. Each library has an internal ID, and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1Tb USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch, just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library, and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best practices to be followed for distributing/releasing J2EE applications in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation time, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? I.e. if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; it is practical if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using an INSERT statement or the import data wizard available in SSMS (which is very practical and implements an SSIS package internally that can be saved for repeated executions). This works fine; it is not as practical as the other options, but it is the best way for distributing databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align to different business requirements.

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay the development installation, and the production versions of the apps will overlay the development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else) that will leave the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 3 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The issue is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository which you automatically update with exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, QA, integration, staging, production, etc.
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape... what are common things we would want to monitor over, say, ERP, CRM, SRM, etc?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have 50 MB going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006 and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working. We feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works. Is this not creating a large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link to form a neighbor relationship between the main and remote sites so that the L3 routing works.
    Once that is in place you should be able to reach the remote-site hosts.
    HTH-Cheers,
    Swaroop

