Best practice for source NATing?

Is there a general design rule for configuring source NATing? Is it best to configure the CSS in one-armed or two-armed mode?
What are the performance limitations of doing this?
Can source-NATed and non-source-NATed content rules be configured on the same CSS with no impact?
Cheers, Mike

Source groups translate the source address of packets from back-end services before forwarding them. When a flow originates from a back-end server with a private address, the request appears to come from the public Virtual IP (VIP) of the source group. You can also use source groups (with Access Lists (ACLs)) to translate clients' private IP addresses (which reside on the back end of the CSS) to a public IP address (the VIP).
This type of source group is useful when setting up a one-armed configuration, where client and server traffic flow through the same CSS switch. For more information, read the following document:
http://www.cisco.com/en/US/products/hw/contnetw/ps789/products_tech_note09186a0080093dfc.shtml
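As a rough sketch (not taken from the tech note itself), a one-armed setup pairs a content rule with a source group that shares the same VIP; the service name, owner name, and all addresses below are placeholders.

    ! Placeholder back-end server
    service web1
      ip address 10.10.10.10
      active

    ! Content rule that load-balances client requests arriving at the VIP
    owner demo
      content web-rule
        vip address 192.0.2.100
        protocol tcp
        port 80
        add service web1
        active

    ! Source group sharing the VIP: client flows matched by the rule are
    ! source-NATed to 192.0.2.100, so server replies return via the CSS
    group one-arm-nat
      vip address 192.0.2.100
      add destination service web1
      active

The key line is "add destination service": it NATs the source address of client traffic destined to that service, which is what keeps return traffic from bypassing a one-armed CSS.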

Similar Messages

  • Looking For Guidance: Best Practices for Source Control of Database Assets

    Database Version: 11.2.0.3
    OS: RHEL 6.2
    Source Control: subversion
    This is a general question aimed at database professionals; it is not specific to any Oracle version. It's a leadership question for other Oracle shops regarding source control.
    The current trunk, in my client's source control, is the implementation of a previous employee who used ER Studio. After walking the batch scripts and subordinate files, it was determined that there would be no formal or elegant way to recreate the current version of the database from our source control; the engineers who contributed to these assets are no longer employed or available for consulting. The batch scripts are stale, if you will.
    To clean this up and to leverage best practices, I need some guidance on whether or not to baseline the current repository and how to move forward with additions of assets; tables, procs, pkgs, etc.  I'm really interested in how larger oracle shops organize their repository - what directories do you use, how are they labeled...are they labeled with respect to version?
    Assumptions:
    1. repository (database assets only) needs to be baselined (?)
    2. I have approval to change this database directory under the trunk to support best practices and get the client steered straight in terms of recovery and
    Knowns:
    1. the current application version in the database is 5.11.0 (that's my client's application version)
    2. this is for one schema/user of a database (other schemas under the database belong to different trunks)
    This is the layout that we currently have; for the privacy of the client I've made it rather generic. I'd love to have a fresh start...how do I go about doing that? Initially, I like using SQL Developer's ability to create SQL scripts from a connected target.
    product_name
      |_trunk
         |_database
           |_config
           |_data
           |_database
           |_integration
           |_patch
           |   |_5.2A.2
           |   |_5.2A.4
           |   |_5.3.0
           |   |_5.3.1
           |
           |_scripts
           |   |_config
           |   |_logs
           |
           |_server
    Thank you in advance.

    Hi, we are using Data ONTAP 8.2.3p3 on our FAS8020 in 7-mode, and we have two aggregates, a SATA aggregate and a SAS aggregate. I want to decommission the SATA aggregate because I want to move that tray to another site. If I have a FlexVol containing 3 qtree CIFS shares, can I use data motion (vol copy) to move the FlexVol to a different aggregate on the same controller without major downtime? I know this article is old and it says CIFS is not supported; however, I am reading mixed messages that the Data ONTAP version we are now on does support CIFS with data motion, with a small outage while the CIFS shares terminate. Is this correct? Thanks

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table, and insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just insert directly from the daily capture table? How would that look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements and ISO-8601 rules for displaying temporal data. We need to know the data types, keys, and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF at:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [Unh? One two-column key or two one-column keys? Sure wish you posted DDL.] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    Use the MERGE statement; Google it. And do not use temp tables.
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
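    To make the MERGE suggestion concrete, here is a minimal sketch, wrapped in JDBC; the connection string, table names (FinalData, DailyCapture), and column names (key_one, key_two, payload) are hypothetical stand-ins for the poster's schema.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class DailyCaptureMerge {
            public static void main(String[] args) throws Exception {
                // Hypothetical SQL Server connection; requires the Microsoft JDBC driver.
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=Demo;integratedSecurity=true");
                     Statement st = con.createStatement()) {
                    // Deduplicate capture rows by the two-column key, then insert
                    // only those keys not already present in the final table.
                    String merge =
                        "MERGE INTO FinalData AS tgt "
                      + "USING (SELECT key_one, key_two, MAX(payload) AS payload "
                      + "       FROM DailyCapture GROUP BY key_one, key_two) AS src "
                      + "ON tgt.key_one = src.key_one AND tgt.key_two = src.key_two "
                      + "WHEN NOT MATCHED BY TARGET THEN "
                      + "  INSERT (key_one, key_two, payload) "
                      + "  VALUES (src.key_one, src.key_two, src.payload);";
                    System.out.println(st.executeUpdate(merge) + " new rows inserted");
                }
            }
        }

    Unlike the select-into-#temp, delete, then insert sequence, this runs as a single atomic statement, and the GROUP BY in the source query also absorbs duplicate keys within the capture table itself.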

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the component model, we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as “sources” in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) provides versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find a lot of support in SDN for the above concepts, with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further, see Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained, along with the required tutorials for development.
    Regards,
    Vijith

  • Best practices for setting up projects

    We recently adopted Captivate for our WBT modules. As a former Flash and Director user, I can say it’s fast and does some great things. It doesn’t play so nicely with others on different occasions, but I’m learning. This forum has been a great source to search and read on specific topics.
    I’m trying to understand best practices for using this product. We’ve had some problems with file size and with incorporating audio and video into our projects. Fortunately, the forum has helped a lot with that. What I haven’t found a lot of information on is good or better ways to set up individual files, use multiple files, and publish projects. We’ve decided to go the route of putting standalones on our Intranet. My gut says yuck, but for our situation I have yet to find a better way.
    My question for discussion, then, is: what are some best practices for setting up individual files, using multiple files, and publishing projects? Any references or input on this would be appreciated.

    Hi,
    Here are some of my suggestions:
    1) Set up a style guide for all your standard slides, e.g. title slide, index slide, chapter slide, end slide, screen capture, non-screen capture, quizzes, etc. This makes life a lot easier.
    2) Create your own buttons and captions. The standard ones are pretty ordinary, and it's hard to get a slick-looking style happening with the standard captions. They are pretty easy to create (search for "add print button" to learn how to create buttons). There should be instructions on how to customise captions somewhere on this forum. Customising means that you can also use words, symbols, and colours unique to your organisation.
    3) Google elearning providers. Most use Captivate and will allow you to open samples or temporarily view selected modules. This will give you great insight on what not to do and some good ideas on what works well.
    4) Timings: Using the above research, I got others to complete the sample modules to get a feel for timings. The results were clear: 10 mins good, 15 mins okay, 20 mins kind of okay, 30 mins bad, bad, bad. It's truly better to have a learner complete 2-3 short modules in 30 mins than one big monster. The other benefit is that shorter files equal smaller size.
    5) Narration: It's best to narrate each slide individually (particularly for screen capture slides). You are more likely to get it right on the first take, it's easier to edit, and you don't have to re-record the whole thing if you need to update it in future. To get a slicker effect, use at least two voices, one male and one female, and use slightly different accents.
    6) Screen capture slides: If you are recording filling out long window-based database pages where the compulsory fields are marked (e.g. with a red asterisk), you don't need to show how to fill out every field. It's much easier for the learner (and you) to show how to fill out the first few fields, then fade the screen capture out and fade the end of the form in with the instructions on what to do next. This will reduce your file size. In one of my forms, this meant the removal of about 18 slides!
    7) Auto captions: they are verbose (e.g. 'Click on Print Button' instead of 'Click Print'; 'Select the Print Preview item' instead of 'Select Print Preview'). You have to edit them.
    8) PC training syntax: Buttons and hyperlinks should normally be 'click'; selections from drop-down boxes or file lists are normally 'select'. Captivate sometimes mixes them up. Instructions should always be written in the correct order, e.g. Good: Click 'File', select 'Print Preview'; Bad: Select 'Print Preview' from the 'File' menu. Button names, hyperlinks, and selections are normally written in bold.
    9) Instruction syntax: instructions should always be written in an active voice, e.g. 'Click Options to open the printer menu' instead of 'When the Options button is clicked on, the printer menu will open'.
    10) Break all modules into chapters. Frame each chapter with a chapter slide. It's also a good idea to show the index page before each chapter slide with a progress indicator (I use an animated arrow to flash next to the name of the next chapter). I use a start button rather than a 'next' button for the start of each chapter. You should always have a module overview with the purpose of the course, and a summary slide which states what was covered and that they have completed the module.
    11) Put a transparent click button somewhere on each slide. Set the properties of the click box to take the learner back to the start of the current chapter by pressing F2. This allows them to jump back to the start of their chapter at any time. You can also do a similar thing on the index pages which jumps them to another chapter.
    12) Recording video capture: best to do it at normal speed and be conscious of where your mouse is. Minimise your clicks. Most people (until they start working with Captivate) are sloppy with their mouse, and you end up with lots of unnecessary slides that you have to delete out. The speed will default to how you recorded it, and this will reduce the amount of time you spend on changing timings.
    13) Captions: My rule of thumb is a minimum of 4 seconds, and longer depending on the number of words. E.g. Click 'Print Preview' is 4 seconds; a paragraph is longer. If you are creating knowledge-based modules, make the timing long (e.g. 2-3 minutes) and put in a next button so that the learner can click when they are ready. Also, narration means the slides will normally be slightly longer.
    14) Be creative: Captivate is desk-bound. There are some learners that just don't respond no matter how interactive Captivate can be. Incorporate non-Captivate and desk-free activities. E.g. as part of our OHS module, there is an activity where the learner has to print off the floor plan and then wander around the floor marking on the map key items such as fire exits, first aid kit, broom and mop cupboard, stationery cupboard, etc.
    Good luck!

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best practices to be followed for distributing/releasing J2EE applications in general.
    In particular, the dilemma we have is centered on the generation of stub, skeleton, and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation, so the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is to use 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton, and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? I.e., if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

  • Best practice for Spark repeating background image in a SkinnableContainer?

    In my old Flex 3.5 code I would accomplish this by dropping an Image into a Canvas and hitting the Canvas with some CSS style stuff to get the repeat. The Image tag had a source property that would take a web address, so I could dynamically grab different images from the web based on some conditions. It's a little more trouble with Flex 4.5 and Spark, but I'm trying to get there.
    Here the Adobe docs explain how to *embed* a background image:
    http://help.adobe.com/en_US/flex/using/WS422719A4-7849-4921-AF39-57FF567B483B.html#WS063B0 491-B7AB-4b00-A39F-E44310BCB0E0
    They use a BitmapFill object in the skin.
    <!-- background fill -->
        <s:Rect id="background" left="3" top="3" right="3" bottom="3" alpha=".25">
            <s:fill>
                <s:BitmapFill source="@Embed(source='../../assets/myImage.jpg')"/>
            </s:fill>
        </s:Rect>
    Problem is I need to do this without embedding the image.  In my old code I grabbed the image from web (set source property on Image tag to web address).  What's the best practice for achieving this with a skinnable container?  The BitmapFill object used above won't take a web address for a source.
    Thanks in advance.

    Use BitmapImage with a fillMode of repeat:
    <s:BitmapImage source="http://www.google.com/intl/en_com/images/srpr/logo2w.png" width="100%" height="100%" scaleMode="stretch" fillMode="repeat" />

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, since it limits what versions of SQL Server will accept the database.
    What do you recommend, and can you point me to some documentation that covers this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is not an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This generates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align with different business requirements.
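    As a rough illustration of the BACKUP/RESTORE route, the T-SQL involved is a single statement on each side. Below is a minimal JDBC sketch with a hypothetical connection string, database name, and file path; the same statements can of course be run directly from SSMS.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class BackupSketch {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true");
                     Statement st = con.createStatement()) {
                    // On the source instance: write a full backup to disk.
                    st.execute("BACKUP DATABASE Demo TO DISK = 'C:\\dist\\Demo.bak'");
                    // On the customer's instance, after copying the .bak file:
                    // st.execute("RESTORE DATABASE Demo FROM DISK = 'C:\\dist\\Demo.bak'");
                }
            }
        }

    The restore will only succeed on a SQL Server version equal to or newer than the one that produced the backup, which is exactly the downgrade limitation described above.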

  • In the Beginning It's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
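    On the automation question: if the third-party driver is reachable via JDBC, its metadata can drive a first draft of such classes. Below is a minimal sketch under that assumption; the connection URL and the ORDERS table are hypothetical, and it only prints the columns a generator would turn into attributes.

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class TableModelSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical URL for the third-party driver.
                try (Connection con = DriverManager.getConnection("jdbc:odbc:erp")) {
                    DatabaseMetaData md = con.getMetaData();
                    // Each row describes one column of the (hypothetical) ORDERS
                    // table; a generator would map it to a private field plus a
                    // getter on a generated class.
                    try (ResultSet cols = md.getColumns(null, null, "ORDERS", "%")) {
                        while (cols.next()) {
                            System.out.printf("%s : %s%n",
                                    cols.getString("COLUMN_NAME"),
                                    cols.getString("TYPE_NAME"));
                        }
                    }
                }
            }
        }

    Whether generation is worth it depends on how stable the ERP layout is; hand-written classes for the handful of tables the reports touch may be simpler than maintaining a generator.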

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew


  • Best practice for version control B2B, ESB and BPEL

    Hello,
    We are setting up a new system using B2B, ESB and BPEL. The development team is more experienced with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be under Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party checkbox and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • Best Practice for Working on a Composite in a Team

    Hello,
    I would like to know the best practice for working on a single composite by multiple members of a team.
    We have a core services module wherein a single composite contains many services, so, to finish in time, we would like many members to work on it simultaneously.
    In such scenarios, if someone adds a new adapter or some other service, composite.xml changes.
    Saving it would overwrite other members' changes. Also, it is not possible to apply a lock on the same file simultaneously through some version control mechanism.
    Please let us know what should be the best practice in such scenarios.
    Thanks-
    Ashish

    You can very well use a version control software with JDev. You may refer -
    http://www.oracle.com/technetwork/articles/soa/jimerson-config-soa-355383.html
    I think without a version control mechanism (like Subversion) it won't be easy to work in a multi-developer environment. If you really don't have a source and version control mechanism, then manual merging will be required, which may be error-prone and costly in time and effort.
    Regards,
    Anuj

  • "Best practice" for components calling components on different panels.

    I'm very new to Swing. I have been learning from tutorials, but these are always relatively simple interfaces, in which every component and container is initialised and added in the constructor of a main JFrame (extension) object.
    I would assume that more complex, real-world examples would have JPanels initialise themselves. For example, I am working on a project in which the JFrame holds multiple JPanels. One of these Panels holds a group of JToggleButtons (grouped in a ButtonGroup). The action event for each button involves calling the repaint method of one of the other Panels.
    Obviously, if you initialise everything in the JFrame, you can simply have the ActionListener refer to the other JPanel directly, by making the ActionListener a nested class within the JFrame class. However, I would like the JPanels to initialise their own components, including setting the button actions, by using an extension of class JPanel which includes the ActionListeners as nested classes. Therefore the ActionListener has no direct access to the JPanel it needs to repaint.
    What, then, is considered "best practice" for allowing these components to interact (not simply in this situation, but more generally)? Should I pass a reference to the JPanel that needs to be repainted to the JPanel that contains the ActionListeners? Should I notify the main JFrame that the Action event has fired, and then have that call "repaint"? Or is there a more common or more correct way of doing this?
    Similarly, one of the JPanels needs to use a field belonging to the JFrame that holds it. Should I pass a reference to this object to the JPanel, or should I have the JPanel use "getParent()", or some other method?
    I realise there are no concrete answers to this query, but I am wondering whether there are accepted practices for achieving this. My instinct is to simply pass a JPanel reference to the JPanel that needs to call repaint, but I am unsure how extensible this would be, how tightly coupled these classes would become.
    Any advice anybody could give me would be much appreciated. Sorry the question is so long-winded. :)

    Hello,
    nice to get feedback.
    I've been looking at a few resources on this issue since my last post. In my application I have been using the Observer and Observable classes to implement the MVC pattern suggested by T.PD.(...)
    Two issues (not fatal, but annoying) with this are:
    - Observable is a class, not an interface; since most of my Observers already extend JPanel (or some such), I have had to create inner classes.
    - If an Observer is observing multiple Observables, it will have to determine which Observable called its update() method (by using reference equality or class comparison or whatever). Again, a very minor issue, but something to keep in mind.
    I don't deem those issues minor. The second one in particular is rather annoying in terms of maintenance ("Err, remind me, which widget is calling this update() method?").
    In addition to that, Observable/Observer are legacy non-generified classes that incur a loosely-typed approach: the subject and context arguments to the update(Observable subject, Object context) method give hardly any info in themselves, and they generally have to be cast to provide app-specific information.
    Note that the "notification model" of AWT and Swing widgets is not Observer/Observable, but merely EventListener. Although we can only guess what reasons made them develop a specific notification model, I deem it essentially stems from the issues above.
    The contrasting approaches are discussed in this article from Bill Venners: The Event Generator Idiom (http://www.artima.com/designtechniques/eventgenP.html).
    N.B.: this article is from a previous-millennium series of "Design Techniques" articles that I found very useful when I learned OO design (GUI or not).
    One last nail against the Observer/Observable model: these are general classes that can be used regardless of the context (GUI/non-GUI code), so this makes it easier to forget about Swing threading rules when using them (essentially: is the update method called in the EDT or not).
    As for "If anybody has any information on the performance or efficiency of using Observable/Observer": I would be very surprised if this had any performance impact. If it had, that would mean that you have either:
    - a lot of widgets that are listening to one another (and then the Mediator pattern is almost a must to structure such entangled dependencies). And even then I don't think there could be any impact below a few thousand widgets.
    - expensive or long-running computation in the update methods. That's unrelated to the notification model itself.
    - a lot of non-GUI components that use Observer/Observable to communicate among themselves; all the more risk, then, of having a GUI update() called outside the EDT (see the remark above).
    As for "(or whether there are inbuilt equivalents for Swing components)": see the discussion above.
    As far as your remark 2 goes (if one observer observes more than one subject, the update() method contains branching logic): this also occurs with the event delegation model; for example, it is quite common for people to complain that their actionPerformed() method becomes unwieldy when the same class listens to several JButtons.
    The usual advice for this is: use anonymous listeners, each of which handles the event from only one source (and generally sits very close in code to the definition of that source), and simply translates the "generic" event notification method into a specific method call on a Controller or Mediator.
    Best regards.
    J.
    Edited by: jduprez on May 9, 2011 10:10 AM
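    To make the "anonymous listener per source" advice concrete, here is a minimal sketch; the class names (CanvasPanel, ControlPanel) and the clear() method are invented for illustration, not taken from this thread.

        import javax.swing.JButton;
        import javax.swing.JPanel;

        // Hypothetical panel that other components need to refresh.
        class CanvasPanel extends JPanel {
            void clear() {
                // App-specific reset would go here, then schedule a repaint.
                repaint();
            }
        }

        // Each button gets its own listener that delegates straight to the
        // collaborator, so no branching on the event source is needed and
        // the handler sits right next to the button it serves.
        class ControlPanel extends JPanel {
            ControlPanel(CanvasPanel canvas) {
                JButton redraw = new JButton("Redraw");
                redraw.addActionListener(e -> canvas.repaint());

                JButton clear = new JButton("Clear");
                clear.addActionListener(e -> canvas.clear());

                add(redraw);
                add(clear);
            }
        }

    Here the ControlPanel receives its collaborator through the constructor, which is the "pass a reference" option from the question; swapping the concrete panel for a small Mediator interface would loosen the coupling further.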

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on what the best practice is for a production environment. The requirements are as follows:
    1) If error occurs with tasks, email is sent to admin
    2) If error occurs with tasks, we have log in flat file or DB
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we desire. So we've developed a framework to add the necessary functionality inside event handlers and log the required data, in the required format, to a set of tables that we maintain internally.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC Dev, QA, and Prod, or
    2. Connect IDM prod to ECC Prod only, and connect IDM dev to ECC Dev and QA?
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (Prod and non-prod on separate instances)
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record, and details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In dev/test/uat we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needed to be separate from production. It was also ideal to use this second environment for dev/test/uat, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, and we prefer to keep this out of a production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the dev/test/uat instance because we were becoming reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.
