Best practice for read-only functionality

Hi,
I'm part of the development team of a system with about 100 screens. The customer would like us to add some read-only functionality to the system, so that certain users are able to access the screens but not change any of the data on them. We already have policies in place at the database level that keep read-only users from saving data, but it's not very user-friendly to let users change data on a screen, only to tell them that they're not allowed to save those changes once they try to do so. It would clearly be better if all components were rendered as read-only components for read-only users, making them unable to make any data changes in the first place.
User privileges in the system are controlled by roles defined and set in the system (not ADF roles or WebLogic roles). At any given time and place, it's possible to check whether the current user has a certain role. We already use this in a number of places to control which user has access to which screens. In a few places we even control which functionality should be enabled for the current user within a screen, but mostly the access control is currently at the screen level. With read-only users getting access to all screens, it seems we will need a lot of extra in-screen access control to keep these users from changing anything.
But what's the best practice here? One way to go would be to add some logic to every single active component on every single screen, to determine whether it should be rendered as active or disabled/read-only. But that would require a lot of extra coding.
So my question is: Is there a smarter way to do this? Maybe something done through skinning? Or something else?
(I'm not sure how relevant this is for this sort of question, but we're currently using JDev 11.1.1.4.0, and expect to upgrade to 11.1.1.6.0 within the next 6 months)
Best regards,
Andreas

Hi Guna, Puthanampatti and Don,
Thanks a lot for your replies. I'm currently looking into implementing something along the lines of what Guna has suggested:
Our application consists of a number of individual work spaces that are deployed as adflibs, which have all been added to a "master application work space"; the master application is deployed as an .ear file. Most of the individual work spaces are for the different functional areas of the application, with their own task flows, page fragments etc. The rest are work spaces with common functionality, like the data model (entity definitions), utility methods, page templates, and framework extensions. In the latter, we have defined custom classes for all the base classes (somewhat similar to what Don describes, I believe).
In our custom class for ViewRowImpl, I have added an isAttributeUpdateable method, and in our custom class for ApplicationModuleImpl I have added an isReadOnlyUser method. The isAttributeUpdateable method uses isReadOnlyUser to determine whether the current user is a read-only user; if so, isAttributeUpdateable returns false, otherwise true. The isReadOnlyUser method in our base class is just a dummy method that always returns true. But in the ApplicationModuleImpl classes of our individual work spaces, I've written an override for isReadOnlyUser, giving the answer that is relevant for the work space in question (for instance, whether or not the current user has the role "User Administrator").
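In outline, the two base classes look something like this (class names here are simplified, and the role-lookup helper mentioned at the end is just a placeholder for our own role check):

    // MyBaseApplicationModuleImpl.java
    import oracle.jbo.server.ApplicationModuleImpl;

    public class MyBaseApplicationModuleImpl extends ApplicationModuleImpl {
        // Dummy default: treat everyone as read-only. The ApplicationModuleImpl
        // subclasses in the functional work spaces override this with the role
        // check that is relevant for that work space.
        public boolean isReadOnlyUser() {
            return true;
        }
    }

    // MyBaseViewRowImpl.java
    import oracle.jbo.server.ViewObjectImpl;
    import oracle.jbo.server.ViewRowImpl;

    public class MyBaseViewRowImpl extends ViewRowImpl {
        @Override
        public boolean isAttributeUpdateable(int index) {
            MyBaseApplicationModuleImpl am = (MyBaseApplicationModuleImpl)
                    ((ViewObjectImpl) getViewObject()).getApplicationModule();
            // Read-only users cannot update any attribute; everyone else falls
            // through to the normal updateability rules.
            return !am.isReadOnlyUser() && super.isAttributeUpdateable(index);
        }
    }

A work space's own ApplicationModuleImpl subclass then overrides isReadOnlyUser with the check that matters there, e.g. return !currentUserHasRole("User Administrator"); where currentUserHasRole stands in for whatever role-lookup utility the application already provides.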
That pretty much takes care of all input fields in tables and forms, which is a big step in the right direction. This still leaves some work to be done for components that are not directly linked to view object attributes (like buttons), but I guess that can't be helped. Also, a few of the work spaces contain pages that are tied to different user privileges (as in: page 1 requires privilege A, and page 2 requires privilege B); in these cases I will have to do something other than just writing an override in the "local" ApplicationModuleImpl class.
@Don: What you describe seems to be pretty close in functionality to what we already have, though your implementation is different from ours. You have used your custom base ApplicationModuleImpl class to keep read-only users from committing changes. We use Virtual Private Database and database policies to the same end: if a user without the required full-access role tries to commit data, it causes a database error, which we then handle in the application (so the user gets a message like "You don't have the required privileges to change this data" rather than an ORA message). Unfortunately, our customers are not content with this. They want a solution where all input fields and most of the buttons etc. are disabled for read-only users, and that's why I'm looking into the best/smartest way to do this.
@Puthanampatti: We already use something similar to what you're suggesting. The challenge I'm currently facing is how best to disable/enable components based on the current user's roles, not how to determine and store those roles.
Best regards,
Andreas

Similar Messages

  • What is the best practice for uninstalling only some CS4 programs (Win 7 PC)

    I recently upgraded from CS4 to CS5.5 and wanted to free up some hard drive space on my Windows 7 PC. I wanted to uninstall only a few programs though from CS4, such as Photoshop, Illustrator, Flash and Bridge. What is the best way to do this keeping in mind licensing, deactivating and properly removing components? I have the original installation disk for CS4 if needed. Thanks for any help!

    Best practice: Uninstall everything (including your CS5.5), run the Creative Suite Cleaner Tool, then reinstall the components you need from both editions. CS4 may have a repair/change configuration mode, but I'd strongly advise against using it, as it will do more damage than good, so take the long way round. It's also the only way not to bust up file associations with an uninstall of CS4...
    Mylenium

  • Best practice for reading GUI values?

    I am using a Matisse GUI. This GUI has a number of variables and buttons that will invoke methods from other classes. How do I pass the values of the GUI components (jTextField, jLabel, ...) to the receiving function without passing each variable individually?

    You should define objects for each Panel.
    For example, your application will consist of a lot of Panels.
    Each panel will have two methods. One method will accept
    an object and set all the GUI fields. The other method will
    return a new object based on the values in the GUI fields.
    E.g.
    public class CirclePanel extends JPanel {
        JLabel radiusLabel = new JLabel("Circle Radius");
        JTextField radiusTF = new JTextField();
        public CirclePanel() {
            // .. layout GUI components
        }
        public void setFields(Circle c) {
            radiusTF.setText(String.valueOf(c.radius)); // setText expects a String
        }
        public Circle getCircle() {
            return new Circle(Double.parseDouble(radiusTF.getText()));
        }
    }
    Now, let's say you want to have this Panel in a dialog.
    You should put your ok and cancel button in the dialog.
    Don't put the buttons in the panel because you may want
    to use that panel somewhere else that doesn't need the
    buttons.
    public class CircleDialog extends JDialog implements ActionListener {
        JButton okButton = new JButton("OK");
        JButton cancelButton = new JButton("Cancel");
        CirclePanel panel = new CirclePanel();
        ActionListener a = null;
        Object src = null;
        public CircleDialog() {
            // ... add circle panel
            // ... add buttons
            okButton.addActionListener(this);
            cancelButton.addActionListener(this);
        }
        public void actionPerformed(ActionEvent evt) {
            a.actionPerformed(evt);
        }
        public Object getSource() {
            return src;
        }
        public void show(Object src, ActionListener a) {
            this.a = a;
            this.src = src;
            super.show();
        }
    }
    Suppose you have a menubar that launches the dialog:
    public class CircleMenuBar extends JMenuBar implements ActionListener {
        JMenuItem circleItem = new JMenuItem("Make Circle");
        CircleDialog circleDialog = new CircleDialog();
        public void actionPerformed(ActionEvent evt) {
            Object src = evt.getSource();
            if (src == circleItem) {
                circleDialog.show(src, this);
            } else if (src == circleDialog.okButton) {
                Circle c = circleDialog.panel.getCircle();
                // do something with it
                // like call CircleAPI.add(c); // adds circle to database
                circleDialog.dispose(); // JDialog has no close(); use dispose()
            } else if (src == circleDialog.cancelButton) {
                circleDialog.dispose();
            }
        }
    }
    This architecture basically allows your dialogs to be non-modal.
    Typically for a GUI application, I create a Dialogs class that has
    all the dialogs the GUI uses. The dialogs are created only if the
    user asks for the dialog. Basically the singleton pattern
    for each dialog.

  • OSB: What is best practice for reading configuration information

    I'm trying to establish the best way to utilise configuration information within OSB. At present I have a standard Java-style properties file and a Java callout to get certain information, and I have also been experimenting with dynamic routing using an XML-based XQuery resource.
    The latter appears to me to be the best approach, and is described on page 3-40 of the OSB User Guide, but it appears to be configurable only from the OSB console and not as a resource from within Workshop. Ideally I'd like to configure an XML resource file that is loaded from disk at OSB startup and then be able to use it via an XQuery.
    How have other people addressed the issue of getting access to parameter/control values at run time?

    > Can you please elaborate a little on what you mean by making it available "via a OSB Server URL"?
    What I have done:
    1) Created simple web application in domain directory.
    Sample web.xml
    <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5">
    <display-name>Sample</display-name>
    <welcome-file-list>
    <welcome-file>properties.xml</welcome-file>
    </welcome-file-list>
    </web-app>
    2) Deploy this web application in exploded format using the WebLogic console (http://localhost:7001/console) in the OSB domain instance.
    3) properties.xml is available via http://localhost:7001/Sample
    4) Use the above URL in your service callout.
    > and how this avoids File IO and how it is cached?
    I'm not sure what you mean here. If we are using a Java callout, we have to open and close the file handle every time; there is this overhead in creating/reading the file in Java every time the Java callout is used. This overhead can be avoided if the XML is available over HTTP. I'm quite sure that WebLogic Server will cache this XML file by default (what use is a web server if it cannot cache static content?), as this is not dynamic content. The WebLogic Server forums are the right place to get confirmation on how to configure caching for the web application.
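    For illustration only, here is a minimal Java sketch of that idea (the URL is just the one from the example above, and the lazy cache is one possible approach, not a prescription): fetch the XML over HTTP once and keep the parsed document, instead of opening and closing a file handle on every callout.
        import java.io.InputStream;
        import java.net.URL;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public final class ConfigCache {
            private static volatile Document cached;

            // Parse the configuration once and reuse it on later callouts.
            public static Document getConfig() throws Exception {
                if (cached == null) {
                    synchronized (ConfigCache.class) {
                        if (cached == null) {
                            // URL of the properties.xml served by the Sample web app above
                            try (InputStream in = new URL(
                                    "http://localhost:7001/Sample/properties.xml").openStream()) {
                                cached = DocumentBuilderFactory.newInstance()
                                        .newDocumentBuilder().parse(in);
                            }
                        }
                    }
                }
                return cached;
            }
        }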
    Thanks
    Manoj
    Edited by: mneelapu on Jul 1, 2009 8:46 AM

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading XML data as a .xq file (creating a .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following.
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project will move from one env to another the property-value will change according to the Environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage :
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration files specific to the Dev/Stage/Prod instances:
    OSB Project Folder
    |
    |---Dev
    |    |
    |    |--Dev_Config_file.xml
    |
    |---Stage
    |    |
    |    |--Stage_Config_file.xml
    |
    |---Prod
    |    |
    |    |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model the value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running - say, some construct which acts as a global configuration and can be accessed inside the OSB message flow. If I get the value "Dev" for the global variable, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread:
    Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over HTTP and getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I have read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    I hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage. Then $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
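    Purely as an illustration of the extraction step (in OSB itself you would do this with XQuery/XPath functions on $inbound, as described above), the string handling amounts to this:
        public class EnvFromUri {
            public static void main(String[] args) {
                // Sample value of $inbound/ctx:transport/ctx:uri from the reply above
                String uri = "/service/stage/service1";
                // Splitting on "/" yields ["", "service", "stage", "service1"],
                // so the environment name is the element at index 2.
                String environment = uri.split("/")[2];
                System.out.println(environment); // prints "stage"
            }
        }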

  • SAP Best Practices for Data Migration :repositories only on MS SQL Server ?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition. The installation guide contains detailed steps to do this. All repositories for Data Services should be running on SQL Server, according to the installation guide.
    The customer I'm working for now does not want to use SQL Server, but DB2, as company standard.
    So I use DB2 for the local and profiler repositories.
    I notice however that the web application http://localhost:8080/MigrationServices does not support DB2. The only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or is it by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server. The intent when developing the DM content was to quickly set up a temporary, standardized data migration environment, using tools that are available to everyone. SQL Server Express was chosen to host the repositories because it is easy to set up and can be downloaded for free. Some users have successfully deployed the content on Oracle XE, but as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • What is the best practice for changing view states?

    I have a component with two Pie Charts that display
    percentages at two specific dates (think start and end values).
    But, I have three views: Start Value only, End Value only, or show
    Both. I am using a ToggleButtonBar to control the display. What is
    the best practice for changing this kind of view state? Right now
    (since this code was inherited), the view states are changed in an
    ActionScript function which sets the visible and includeInLayout
    properties on each Pie Chart based on the selectedIndex of the
    ToggleButtonBar, but, this just doesn't seem like the best way to
    do this - not very dynamic. I'd like to be able to change the state
    based on the name of the selectedItem, in case the order of the
    ToggleButtons changes, and since I am storing the name of the
    selectedItem for future reference.
    Would using States be better? If so, what would be the best
    way to implement this?
    Thanks.

    I would stick with non-states, as I have always heard that
    states are more for smaller components that need to change under
    certain conditions, like a login screen that changes if the user
    needs to register.
    That said, if the UI of what you are dealing with is not
    overly complex, and if it will not become overly complex, maybe
    states is the way to go.
    Looking at your code, I don't think you'll save much in terms
    of lines of code.

  • Best Practice for Extracting a Single Value from Oracle Table

    I'm using Oracle Database 11g Release 11.2.0.3.0.
    I'd like to know the best practice for doing something like this in a PL/SQL block:
    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;
    Of course, the problem here is that when there is no hit, the NO_DATA_FOUND exception is raised, which halts execution.  So what if I want to continue in spite of the exception?
    Yes, I could create a nested block with EXCEPTION section, etc., but that seems clunky for what seems to be a very simple task.
    I've also seen this handled like this:
    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;
    But this still seems like killing an ant with a sledgehammer.
    What's the best way?
    Thanks for any help you can give.
    Wayne

    Do not design in order to avoid exceptions. Do not code in order to avoid exceptions.
    Exceptions are good. Damn good, as they allow you to catch an unexpected process branch, where execution did not go as planned and coded.
    Trying to avoid exceptions is just plain bloody stupid.
    As for your specific problem: when the SQL fails to find a row and a value to return, what then? This is unexpected - if you did not want a value, you would not have coded the SQL to find a value. So the SQL not finding a value is an exception to what you intend with your code. And you need to decide what to do with that exception.
    How to implement it. The #1 rule in software engineering - modularisation.
    E.g.
    create or replace function FindSomething( name varchar2 ) return foo.col1%type is
      id foo.col1%type;
    begin
      select col1 into id from foo where col2 = upper(name);
      return( id );
    exception when NO_DATA_FOUND then  -- the predefined exception is NO_DATA_FOUND, not NOT_FOUND
      return( null );
    end;
    And that is your problem. Modularisation. You are not considering it.
    And not the only problem mind you. Seems like your keyboard has a stuck capslock key. Writing code in all uppercase is just as bloody silly as trying to avoid exceptions.

  • Query Best Practice for Reports

    I am new to Apex and I am wondering what the best practice is for storing your SQL queries for reports. I am a believer in storing all SQL behind package functions or procedures. And it looks like the only options for report pages are to use a direct SQL query, or a function that returns a query as a string. Yes, the function method counts as putting the code in Oracle, but not really: it still gets compiled and parsed on the Apex side. It would be nice if Apex could handle a cursor, but I have read that it doesn't directly. You have to have a function that returns a cursor and then create a pipelined function that calls the cursor function. That is kind of silly. Is there some other way to do this?
    Apex 4.2
    Oracle 11.2.0.2
    Thanks for any input.
    Jeff

    Hi Jeff,
    I'm not necessarily a believer in packaging queries. I'm a little more pragmatic, in that I believe it may make sense in environments where you have a client that just expects a result set which is then manipulated by the client for the purposes of presentation, pagination etc. Apex has a different architecture, in that the client is purely an HTML presentation layer (browser) and the presentation, pagination etc. are formulated in the database along with the data, using the Oracle web toolkit, which is a set of internal packages that produce HTML. Note that handling and manipulating ref cursors inside PL/SQL is not a joy; they were mainly designed to be passed out to external clients. (Often to shield programmers who don't, or won't even try to, understand relational concepts.)
    This means that when you create a report based on a query, the Apex engine will manipulate that base query, depending on the display requirements and pagination requirements of your report, before it submits that query to the database for execution. To get an idea of how this manipulation occurs, you can run your report in debug mode and check the actual query that is submitted to the database. If the query is presented as an already executed ref cursor, then the Apex engine can't execute in the way that it does. As you have already found out, the only way of using packaged queries returning ref cursors is by the use of a pipelined function, so that the Apex engine can treat the result as a normal query.
    This is the architecture of Apex, and I suspect that re-engineering the Apex engine to handle ref cursors natively, as opposed to using a pipelining trick, would be a considerable change. I hope this at least helps to explain why ref cursors and Apex don't mix. I personally don't see the purpose of having an abstraction layer of packaged queries below an abstraction layer of an API such as Apex. SQL is a perfectly good API.
    Regards
    Andre

  • Best Practice for Servlets

    I guess I'm asking for most peoples' input on what I'm planning to do here ....
    Here's what I want to do, and know a bit about.
    o I want to make a servlet that serves only XML.
    o After that, I want to transform the XML into web pages, RSS feeds etc.. using XSLT.
    Here's what I'm not so sure about...
    o How should I implement the interface to the web-based aspects? Should the servlet be coded to display HTML pages on "GET" requests? Or should I use a pile of HTML files to make forms?
    o What do I use to perform the XSLT transformations? Where should the set of solutions be placed relative to my servlet? Would a user then access this solution rather than the servlet itself?
    o How do I code the servlet on one machine, and then test it on another without breaking the libraries? How do I set up any libraries I might have to use (like for XSLT transformations) on the server?
    Any other advice here? I'm sure this is done often, but I can't find a resource that explains the best practices for it all.
    I know this sounds like a lot of stupid questions. I've had lots of programs working with Java before, but I'm at a loss as to how I'm supposed to package libraries I use in my programs - more so with a servlet. To make matters worse, I plan on using MySQL as the database.
    If there's some wizard on the forums here who's willing to say more than just "RTFM" (of which there is none that answers my questions together as one), I'd be very very happy ":^)

    > Let me re-pose my question so as to be specific enough to not be picked apart in my answer.
    > I want to FIRST AND FOREMOST, create a servlet that serves up XML based on parameters given to it (how? who cares.).
    What does "serves up XML" mean? Let's be precise. Do you intend the servlet to send the XML back to the client? Or is the XML an intermediate step in your processing? (Yes, it matters.)
    > Then, I want to create interfaces (HTML, RSS, boogledeedoo) to this XML data by having either JSP, another servlet or [insert something else here] transform the XML into whatever the desired format is.
    "interface" is a loaded term in Java. What do you mean by it?
    > My assumption is that I'll make the servlet that is capable of outputting my desired XML data and then create another servlet that will poke it for data as needed to transform the XML into HTML. This servlet would also likely serve as the web site itself and would manage user logins etc... (persistence yadda yadda)
    You're not thinking about this properly. "yadda yadda" == muddled thinking.
    > My other assumption is that I'll make another servlet that will poke the XML servlet and transform that into RSS or anything else I can dream up.
    How does "poke the XML servlet" fit into the request/response protocol that is HTTP? Please explain.
    > -=-!REASONING!-=-
    > Previously, when I was working with PHP, I liked to make scripts that would display interfaces and post to themselves.
    OK, now I see. "interface" == GUI in a browser to you. Very good.
    You can create a JSP that is an interface. You can have that JSP submit the HTTP POST or GET request to itself. No problem there, as long as "itself" knows what to do with the request.
    > It was a nice way of creating a complete little package. Everything for one function was encapsulated nicely under one roof. No excessive HTML files all over the place to nurture.
    A simple problem, a simple solution. You can do that with a JSP.
    > Look. Part of my inability to describe this well is because I DO feel like I'm going in a lot of directions at once.
    Or you don't understand the technology very well.
    > But I have to be in order to pull together some sort of plan for myself. I understand many concepts and have just finished studying object oriented design etc...
    "Just finished"? How long did it take?
    > I know things about how Tomcat does connection pooling for SQL connections.
    Great. Not much to understand there. It's harder to figure out how to do n-tier apps with more than one page well.
    > I do know how to use Google, probably a lot better than most. But rest assured, I've yet to find a little guide as complete as any of the "LAMP" books there are out there. Which by the way, I have never purchased.
    That's because Java Enterprise Edition isn't intended for little problems. LAMP is. Maybe the limitation is that you are used to "little" problems and not bigger ones.
    If JEE seems scattered and complex, it's because it is. It encompasses more than LAMP.
    > I'm confident in good guidance, and not a heartfelt smackdown. I'm still waiting for some clear suggestions.
    I gave you one, you just didn't know it: go read about Spring.
    http://www.springframework.org
    It'll help you structure complex apps from the user interface to the database in the back.
    You're welcome.
    %
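    For what it's worth, here is a minimal sketch of the servlet-plus-XSLT part of the original question, using the standard JAXP javax.xml.transform API (the stylesheet and XML resource names are placeholders):
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerException;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class XmlViewServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/html");
                try {
                    // Compile the stylesheet; a different stylesheet (RSS, plain
                    // HTML, etc.) could be chosen per request for the same XML.
                    Transformer t = TransformerFactory.newInstance().newTransformer(
                            new StreamSource(getServletContext()
                                    .getResourceAsStream("/WEB-INF/page.xsl")));
                    // Transform the XML and write the result to the response.
                    t.transform(
                            new StreamSource(getServletContext()
                                    .getResourceAsStream("/WEB-INF/data.xml")),
                            new StreamResult(resp.getWriter()));
                } catch (TransformerException e) {
                    throw new ServletException(e);
                }
            }
        }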

  • Best Practice for 0InfoObjects

    Hi,
    I need assistance on BI best practices for using 0InfoObjects. Currently, we have many 0InfoObjects such as 0CostCenter, 0ProfitCenter, 0SalesOrg etc., which are used in ZInfoObjects like Zcustomer, Zmaterial etc., in addition to Z cubes and Z ODS objects.
    We are starting an upgrade project, which presents a chance to convert some of these 0InfoObjects to ZInfoObjects so that they can be customized later as needed.
    What is the best practice for situations like this? Do we take on the extra effort of changing 0 to Z (I read somewhere this is advisable), or should we target only the InfoObjects which historically change the most?
    Thanks,
    Kartik

    Hi:
    Use the 0InfoObjects as they are.
    The Business Content given by SAP is pretty good for some modules (e.g., Finance).
    Clients pay a lot of money for SAP so that they can use the delivered Business Content as much as possible and then, if needed, create new custom objects.
    When it comes to InfoProviders, it's a good idea to copy them to Z-InfoProviders and make new changes accordingly. But InfoObjects like Cost Center, Controlling Area, Functional Location, Material, etc., just use them as they are. Good luck if you want to create Z-InfoObjects for the above critical InfoObjects.
    Do you know how difficult it is to get R/3-based hierarchies for Z-InfoObjects (again, Cost Center, Functional Location, Cost Element)?
    Also, you have to use the SAP-delivered Time Characteristics.
    Since you need to use some delivered InfoObjects anyway, it dictates that you have to be careful installing new Business Content.
    There you go.
    U.P.Ram Chamarthy

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" type alarm. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running "ncs stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    > 1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc. and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should rather look to the functional requirements instead of costs. If the bean needs to be session scoped (e.g. maintaining the logged-in user), then do so. If it just needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    > 2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will my server resources cope? Right now I am using the Sun GlassFish Server.
    It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect a web server running on a Pentium 500MHz with 256MB of memory to flawlessly serve 100 simultaneous users in the same second. But you may expect it to serve 100 users per 24 hours.
    > 3.) Can you suggest best practices for memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should rather let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    > 4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish Server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software, it is not server hardware; your concerns are rather hardware related.

  • Best practice for intervlan routing?

    Are there some best practices for intervlan routing?
    I've been reading a lot and I have seen these scenarios:
    router on a stick
    intervlan routing at the core layer
    intervlan routing at the distribution layer
    Or is intervlan routing needed at all if the switches will do the routing?
    I've done all of the above, but I just want to know what's current.

    The simple answer is it depends, because there is no one right solution for everyone.
    So there are no specific best practices. For example, in a small setup where you may only need a couple of vlans, you could use a L2 switch connected to a router or firewall using subinterfaces to route between the vlans.
    But that is not a scalable solution. The most common approach in any network where there are multiple vlans is to use L3 switches to do this. This could be a pair of switches interconnected and using HSRP/GLBP/VRRP for the vlans, or it could be stacked switches/VSS etc. You would then dual-connect your access layer switches to them.
    In terms of core/distribution/access layers, in general if you have separate switches performing each function, you would have the inter-vlan routing done on the distribution switches for all the vlans on the access layer switches. The core switches would be used to route between the distribution switches and other devices, e.g. WAN routers, firewalls, maybe other distribution switch pairs.
    Again, generally speaking, you may well not need vlans on the core switches at all, i.e. you can simply use routed links between the core switches and everything else.
    The above is quite a common setup, but there are variations, e.g. -
    1) a collapsed core design, where the core and distribution switches are the same pair. For a single building with maybe a WAN connection plus internet, this is quite a common design, because having a completely separate core is usually quite hard to justify in terms of cost etc.
    2) a routed access layer. Here the access layer switches are L3 and the vlans are routed at the access layer. In this instance you may not even need vlans on the distribution switches, although to save cost servers are often deployed onto those switches, so you may.
    All of the above is really concerned with non DC environments.
    In the DC the traditional core/distro or aggregation/access layer was also used and still is widely deployed but in relatively recent times new designs and technologies are changing the environment which could have a big impact on vlans.
    It's mainly to do with network virtualisation, where the vlans are defined and where they are not only routed but where the network services such as firewalling, load balancing etc. are performed.
    It's quite a big subject, so I didn't want to confuse the general answer by going into it, but feel free to ask if you want more details.
    Jon

  • Tips and best practices for translating C into LabVIEW? SERIOUS newbie...

    I need to translate a C function into LabVIEW.  This will be my *first* LabVIEW project.  I've been reading some tutorials, and I'm still struggling to get my brain out of "C/C++ mode" and learn the LabVIEW paradigms.
    Structurally, the function that I need to translate gets called from a while-loop and performs a bunch of mathematical calculations. 
    The basic layout is something like this (this obviously isn't the actual code, it just illustrates the general flow control and techniques that it uses).
    #include <math.h>

    struct Params {
        // About 20 int and float parameters, e.g.:
        int someParam;
        float someOtherParam;
    };

    int CalculateMetrics(struct Params *pParams, float input1, float input2 /* etc. */)
    {
        int errorCode = 0;
        float metric1;
        float metric2;
        float metric3;

        // Do some math like:
        metric1 = input1 * (pParams->someParam - 5);
        metric2 = metric1 + (input2 / pParams->someOtherParam);
        // Tons more simple math
        // A couple of for-loops

        if (metric1 < metric2) {
            // manipulate metric1 somehow
        } else {
            // set some kind of error code
            errorCode = ...;
        }

        if (!errorCode) {
            metric3 = metric1 + pow(metric2, 3);
            // More math...
        }
        // etc...

        // update some external global metrics variables
        return errorCode;
    }
    I'm still too green to understand whether or not a function like this can translate cleanly from C to LabVIEW, or whether the LabVIEW version will have significant structural differences. 
    Are there any general tips or "best practices" for this kind of task?
    Here are some more specific questions:
    Most of the LabVIEW examples that I've seen (at least at the beginner level) seem to rely heavily on using front panel controls to provide inputs to functions. How do I build a VI where the input arguments (input1, input2, etc.) come in as numbers and aren't tied to dials or buttons on the front panel?
    The structure of the C function seems to rely heavily on the use of stack variables like metric1 and metric2 in order to perform calculations.  It seems like creating temporary "stack" variables in LabVIEW is possible, but frowned upon.  Is it possible to keep this general structure in the LabVIEW VI without making the code a mess?
    Thanks guys!

    There's already a couple of good answers, but to add to #1:
    You're clearly looking for a typical C-function. Any VI that doesn't require front panel opening (user interaction) can be such a function.
    If the front panel is never opened the controls are merely used to send data to the VI, much like (identical to) the declaration of a C-function. The indicators can/will be return values.
    Defining which controls and indicators send data in and out of a VI is almost too easy: click the icon of the front panel (top right), show the connector, and click which control/indicator goes where. Done. That's your function's declaration.
    Basically, one function is one VI, although you might want to split it even further; don't create 3k*3k-pixel diagrams.
    Depending on the amount of calculations done in your If-Thens they might be sub vi's of their own.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Best Practices for new iMac

    I posted a few days ago re a failing HDD on a mid-2007 iMac. Long story short, I took it into the Apple store, where a Genius worked on it for 45 minutes before decreeing it in need of a new HDD. After considering the expense of adding memory, a new drive, hardware and installation costs, I got a brand new entry-level iMac (21.5" screen,
    2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1TB HDD, running Mavericks). I also got a SuperDrive. I do not need to migrate anything from the old iMac.
    I was surprised that a physical disc for the OS was not included. So I am looking for any best practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a full Time Machine backup (using an external G-Drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very much appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.
