Validation best practices?

I'm still working on learning Flex, having previously worked
in VB, ASP and PHP. Today I was working on a small form that needed
to be validated. My specs are that all fields are required, and the
submit button is to remain disabled until all fields are valid.
In the past I have used a simple summing function to see how
many fields in a form were valid; if the count was less than the
total field count, validation failed.
Here is what I wrote in my Flex project. Is there a more
efficient method that I should be using? Execution seems to be
instant, but if this does not conform to "best practices" I'd like
to know the alternative. Thank you for your input.
Paul

To me, you might be able to use binding to have the button's
enabled property driven by a Boolean variable.
Also have an ArrayCollection named validAC, and one handler for
every validator's valid and invalid events:
private function handleValidation(event:ValidationResultEvent):void {
    if (event.type == ValidationResultEvent.VALID) {
        if (!validAC.contains(event.target)) {
            validAC.addItem(event.target);   // count this validator as valid
        }
    } else if (validAC.contains(event.target)) {
        // the field went invalid again, so stop counting it
        validAC.removeItemAt(validAC.getItemIndex(event.target));
    }
    // enable submit only once every control has validated
    submitBtn.enabled = (validAC.length == numControls);
}
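For completeness, here is a rough sketch of how each validator might be wired to that handler in MXML (assuming mx validators; the field and validator names are illustrative). Each validator dispatches both valid and invalid ValidationResultEvents, so one handler can service the whole form:

<mx:StringValidator id="nameValidator"
    source="{nameInput}" property="text" required="true"
    valid="handleValidation(event)"
    invalid="handleValidation(event)"/>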

Similar Messages

  • Validation Best Practice for a JavaFX project

    Hello,
    I am investigating how to validate a JavaFX project and specifically where to display the validation results with appropriate icons and text. Do you have any thoughts on best practice for this? Are there any features in JavaFX to do this? Are there any third-party library that does this?
    Thanks

    See the thread "Forms and validations - here's some of my ideas, what are yours?",
    and the messages in this mailing list thread ("Validate Me"):
    http://mail.openjdk.java.net/pipermail/openjfx-dev/2012-February/000717.html
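    To give a rough idea of what is possible without any third-party library, here is a minimal sketch using plain JavaFX 8 controls (the class, control names and the required-field rule are illustrative; JavaFX itself ships no validation framework): an inline error Label beside the field, and a submit Button that stays disabled while the field is invalid.

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.control.Label;
    import javafx.scene.control.TextField;
    import javafx.scene.layout.VBox;
    import javafx.stage.Stage;

    public class ValidationDemo extends Application {
        @Override
        public void start(Stage stage) {
            TextField nameField = new TextField();
            Label nameError = new Label("Name is required");  // inline error text beside the field
            Button submit = new Button("Submit");
            submit.setDisable(true);  // the field starts empty, so start disabled

            // re-validate on every keystroke
            nameField.textProperty().addListener((obs, oldVal, newVal) -> {
                boolean valid = !newVal.trim().isEmpty();
                nameError.setVisible(!valid);
                submit.setDisable(!valid);
            });

            stage.setScene(new Scene(new VBox(5, nameField, nameError, submit)));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }

    Rolling a small listener like this, or using a third-party library such as ControlsFX, is the usual route.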

  • Backup validation best practice, 11gR2 on Windows

    Hi all
    I am just reading through some guides on checking for various types of corruption in my database. It seems that having DB_BLOCK_CHECKSUM set to TYPICAL takes care of much of the physical corruption and will alert you to the fact that any has occurred. Furthermore, RMAN by default does its own physical block checking. Logical corruption, on the other hand, does not seem to be checked automatically unless CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that could be run on various objects.
    My question is really: what is best practice for checking for block corruption? Do people even bother regularly checking this and just allow Oracle to manage itself? Is it best practice to have the CHECK LOGICAL clause in RMAN (even though it's not added by default when configuring backup jobs through OEM), or do people schedule jobs and output reports from a VALIDATE command on a regular basis?
    Many thanks

    Using the CHECK LOGICAL clause is considered best practice, at least by Oracle Support, according to
    NOTE:388422.1 "Top 10 Backup and Recovery best practices"
    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
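    For reference, the commands involved look like this (a sketch; verify against your own backup scripts before adopting). Any corruption found is recorded in V$DATABASE_BLOCK_CORRUPTION:

    RMAN> BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;  # real backup, with logical checks added
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;         # reads and checks every block, writes no backup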

  • Best Practice for validating radio buttons

    Hi,
    I'm building a form which has quite a lot of questions that a
    user answers by selecting a radio button. All the questions are
    required. Could someone please advise me what the best practice is
    for validating that a radio button in a radio button group is
    checked? There doesn't seem to be much information out there on
    this one.
    Thanks...

    Have you found a solution? If yes, can you please give me a
    short description?
    Thanks.
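    If this is a Flex form (as elsewhere on this page), one simple approach is to skip validators entirely and bind the submit button's enabled state to each group's selection; a sketch with illustrative ids, relying on mx RadioButtonGroup's bindable selection property, which is null until a button is checked:

    <mx:RadioButtonGroup id="q1Group"/>
    <mx:RadioButton groupName="q1Group" label="Yes" value="yes"/>
    <mx:RadioButton groupName="q1Group" label="No" value="no"/>
    <mx:Button label="Submit" enabled="{q1Group.selection != null}"/>

    For several groups, the binding expression becomes q1Group.selection != null && q2Group.selection != null, and so on.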

  • Creating Validations, what's the best practice?

    Good Day Folks
    I need help in creating validations.
    I know that there's GGB0, which contains OKC7 for CO validations and OB28 for FI validations.
    My concern is: what are the detailed steps for creating validations?
    Is there a best practice?
    Is there any way to link the validations created in CO to those in FI, or do they need to be created in each?
    Regards
    Just
    Edited by: Just J on Jan 25, 2012 2:24 PM
    Moderator: Please, search before posting

    Hi
    Could you tell us the business requirement?
    So that we can help you
    Thanks
    Vishnu

  • Best Practice/Validation for deploying a Package to Azure

    Before deploying a package to Azure, what kind of best-practice validation can be done to check the package's compatibility with the Azure environment?

    What do you mean by the compatibility of the Azure package with the Azure environment? What do you want to validate? It would be great if you could provide a bit of background for your question.
    As far as deployment best practice is concerned, the usual way is to upload your Azure cloud service deployment package and configuration files (*.cspkg and *.cscfg) to a blob container first, and then deploy to the cloud service by referring to the uploaded package in that container. This not only gives you the flexibility to keep different versions of your deployments, which you can use to roll back the entire service, but the deployment will also be comparatively faster than deploying from Visual Studio or uploading manually from the file system.
    You can refer link - http://azure.microsoft.com/en-in/documentation/articles/cloud-services-how-to-create-deploy/#deploy
    Bhushan | Blog |
    LinkedIn | Twitter

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
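    To make the request concrete, the kind of script the DBAs are asking for might look like this sketch (the table and seed data are purely illustrative): DDL first, seed data next, an explicit commit, and the rollback steps recorded in the same document.

    -- 1. DDL, in dependency order
    CREATE TABLE lookup_region (
        region_id   NUMBER       PRIMARY KEY,
        region_name VARCHAR2(50) NOT NULL
    );

    -- 2. Seed data, reviewed and signed off by the business
    INSERT INTO lookup_region (region_id, region_name) VALUES (1, 'North');
    INSERT INTO lookup_region (region_id, region_name) VALUES (2, 'South');
    COMMIT;

    -- 3. Rollback section, for the recovery steps in the deployment document
    -- DROP TABLE lookup_region PURGE;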

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. It will be critical, in order for Cisco to be considered as a purchase option, that the UC-5x0 integrates well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
    Some of the challenges include:
    1. Where should the UC-540 be placed: as the gateway for QoS, or behind a validated UC-5x0 router/security appliance combination?
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
    3. Which device should handle DNS?
    My documentation (and I recommend that any Cisco lab/best practice guidance include it as well) will assume the following real-world scenario, the same one that applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High Speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional line-of-business application or terminal server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones, because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts appreciated.

    Progress Report:
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following:
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, back up your current configuration via the CCA.
    Next, telnet into the UC540, log in, and cut and paste the following to 1:1 NAT the 2 additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy your existing rules, as I have done below, and paste them into a notepad.
    Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the new rules for your SBS and LOB server, as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory:
    wr mem
    One final note: if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you are getting Error 721 and/or Error 800, you will need to use the following commands to add to ACL 104:
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info when and if I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.

  • What are the best practices for using the enhancement framework?

    Hello enhancement framework experts,
    Recently, my company upgraded to SAP NW 7.1 EhP6.  This presents us with the capability to use the enhancement framework.
    A couple of senior programmers were asked to deliver a guideline for use of the framework.  They published the following statement:
    "SAP does not guarantee the validity of the enhancement points in future releases/versions. As a result, any implemented enhancement points may require significant work during upgrades. So, enhancement points should essentially be used as an alternative to core modifications, which is a rare scenario.".
    I am looking for confirmation or contradiction to the statement  "SAP does not guarantee the validity of enhancement points in future releases/versions..." .  Is this a true statement for both implicit and explicit enhancement points?
    Is the impact of activated explicit and implicit enhancements much greater to an SAP upgrade than BAdi's and user exits?
    Is there any SAP published guidelines/best practices for use of the enhancement framework?
    Thank you,
    Kimberly
    Edited by: Kimberly Carmack on Aug 11, 2011 5:31 PM

    Found an article that answers this question quite well:
    [How to Get the Most From the Enhancement and Switch Framework as a Customer or Partner - Tips from the Experts|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0f0373e-a915-2e10-6e88-d4de0c725ab3]
    Thank you Thomas Weiss!

  • OS X Server 3.0 new setup -- best practices?

    Alright, here's what I'm after.
    I'm setting up a completely new OS X Server 3.0 environment.  It's on a fairly new (1.5 year old) Mac Mini, plenty of RAM and disk space, etc.  This server will ONLY be used internally.  It will have a private IP address such as 192.168.1.205, which will be outside of my DHCP server's range (192.168.1.10 to .199) to prevent any IP conflicts.
    I am using Apple's Thunderbolt-to-Ethernet dongle for the primary network connection.  The built-in NIC will be used strictly for a direct iSCSI connection to a brand new Drobo b800i storage device.
    This machine will provide the following services, roughly in order of importance:
    1.  A Time Machine backup server for about 50 Macs running Mavericks.
    1a.  Those networked Macs will authenticate individually to this computer for the Time Machine service.
    1b.  This server will get its directory information from my primary server via LDAP/Open Directory.
    2.  Caching server for the same network of computers
    3.  Serve a NetInstall image which is used to set up new computers when a new employee arrives
    4.  Maybe calendaring and contacts service, still considering that as a possibility
    Can anyone tell me the recommended "best practices" for setting this up from scratch?  I've done it twice so far and have faced problems each time.  My most frequent problem, once it's set up and running, is with Time Machine Server.  With nearly 100 percent consistency, when I get Time Machine Server set up and running, I can't administer it.  After a few days, I'll try to look at it via the Server app.  About half the time, there'll be the expected green dot by "Time Machine" indicating it is running and other times it won't be there.  Regardless, when I click on Time Machine, I almost always get a blank screen simply saying "Loading."  On rare occasion I'll get this:
    Error Reading Settings
    Service functionality and administration may be affected.
    Click Continue to administer this service.
    Code: 0
    Either way, sometimes if I wait long enough, I'll be able to see the Time Machine server setup, but not every time.  When I am able to see it, I'll have usability for a few minutes and then it kicks back to "Loading."
    I do see this apparently relevant entry in the logs as seen by Console.app (happens every time I see the Loading screen):
    servermgrd:  [71811] error in getAndLockContext: flock(servermgr_timemachine) FATAL time out
    servermgrd:  [71811] process will force-quit to avoid deadlock
    com.apple.launchd: (com.apple.servermgrd[72081]) Exited with code: 1
    If I fire up Terminal and run "sudo serveradmin fullstatus timemachine" it'll take as long as a minute or more and finally come back with:
    timemachine:command = "getState"
    timemachine:state = "RUNNING"
    I've tried to do some digging on these issues and have been greeted with almost nothing to go on.  I've seen some rumblings about DNS settings, and here's what that looks like:
    sudo changeip -checkhostname
    Primary address = 192.168.1.205
    Current HostName = Time-Machine-Server.local
    The DNS hostname is not available, please repair DNS and re-run this tool.
    dirserv:success = "success"
    If DNS is a problem, I'm at a loss how to fix it.  I'm not going to have a hostname because this isn't on a public network.
    I have similar issues with Caching, NetInstall, etc.
    So clearly I'm doing something wrong.  I'm not upgrading; again, this is an entirely clean install.  I'm about ready to blow it away and start fresh again, but before I do, I'd greatly appreciate any insight from others on some "best practices" or an ordered list on the best way to get this thing up and running smoothly and reliably.

    Everything in OS X is dependent on proper DNS.  You probably should start there.  It is the first service you should be configuring and it is the most important to keep right.  Don't configure any services until you have DNS straight.  In OS X, DNS really stands for Do Not Skip.
    This may be your toughest decision.  Decide what name you want the machine to be.  You have two choices.
    1: Buy a valid domain name and use it on your LAN devices.  You may not have a need now for external use, but in the future when you use VPN, Profile Manager, or web services, at least you are prepared.  This method is called split-horizon DNS.  An example would be apple.com.  Internally you may name the server tm.apple.com.  Then you may alias vpn.apple.com to it.  Externally, users can access the service via vpn.apple.com, but tm.apple.com remains a private address only.
    2: Create an invalid private domain name.  This will never route on the web, so if you decide to host content for internal/external use you may run into trouble, especially with services that require SSL certificates.  Examples might be ringsmuth.int or andy.priv.  These types of domains are non-routable and can result in issues of trust when communicating with other servers, but it is possible.
    Once you have the name sorted out, you need to configure DNS.  If you are on a network with other servers, just have the DNS admin create an A and PTR record for you.  If this is your only server, then you need to configure and start the DNS service on Mavericks.  The DNS service is the best Apple has ever created.  A ton of power in a compact tool.  For your needs, you likely need to just hit the + button and fill out the New Device record.  Use a fully qualified host name in the first field and the IP address of your server (LAN address).  You did use a fixed IP address and disabled the wireless card, right?
    Once you have DNS working, then you can start configuring your other services.  Time Machine should be pretty simple.  A share point will be created automatically for you.  But before you get here, I would encourage starting Open Directory.  Don't do that until DNS is right and you pass the sudo changeip -checkhostname test.
    R-
    Apple Consultants Network
    Apple Professional Services
    Author, "Mavericks Server – Foundation Services" :: Exclusively in the iBooks Store

  • Best Practice for External Libraries Shared Libraries and Web Dynrpo

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would
    like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested merely in something that works, but in
    what the best practice for this really is. What is the best way to limit the number of jars that need to be kept in a shared library/external library? When is sharing a ref service etc. a valid approach, versus hunting down the jars in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than 100% secure, "We have unknown unknowns". The component needs to talk to SQL Server. You could continue to use http to talk to SQL Server, perhaps even get SOAP Transactions working but personally
    I'd have more worries about using such a 'less trodden' path since that is exactly the areas where more security problems are discovered. I don't know about your specific design issues so there might be even more ways to mitigate the risk but in general you're
    using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework ( about 6 years ago now ), so bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped?
    2) In relation to session-scoped managed beans, when does a JSF implementation start a new session? That is, when does an implementation such as Mojarra call ExternalContext.getSession( true ), and when does it simply use an existing session ( calling ExternalContext.getSession( false ) )?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session? That is, when does the implementation call ExternalContext.invalidateSession()?
    4) Do ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ( javax.faces.STATE_SAVING_METHOD = client )? Will the JSF implementation ever call these methods if the state-saving mechanism is client?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form ( when doing a POST or a GET or a post-back to the same page ).
    Is this best practice? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
    It's better because it ensures the bean gets instantiated, stuck in the session, which gets instantiated itself, the bean gets initialised, resource-injected, etc etc etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    Thanks.
    1) But if I only want to check that the bean is in the session ... and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalApplication.getSessionMap.get(<beanName>).
    I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated however is far more reasonable to me.
    In my case, there is an external application ( e.g. a workflow system from a vendor ) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was thinking of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all ( e.g. It was triggered by something else other than the external workflow system that I was expecting ) then I would prefer that it would avoid all the JSF lifecycle for the current request and immediately do a redirect to a different page ( be it a static HTML, or another JSF page ).
    4) If the bean exist, then proceed with the normal JSF lifecycle.
    I could also, between [1] and [2], do a quick check to verify that the username and password is indeed valid on the external system ( they have a Java API to do that ), and if the credentials are not valid, I would also avoid all the JSF lifecycle for the current request and redirect to a different page.
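    For what it's worth, a minimal sketch of that PhaseListener idea (the class name, parameter names and redirect target are illustrative, not from the original posts):

    import java.io.IOException;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.PhaseListener;

    public class WorkflowEntryPhaseListener implements PhaseListener {

        @Override
        public void beforePhase(PhaseEvent event) {
            FacesContext ctx = event.getFacesContext();
            ExternalContext ext = ctx.getExternalContext();

            // [1] The external workflow system sent credentials: store them in the session.
            String user = ext.getRequestParameterMap().get("username");
            if (user != null) {
                ext.getSessionMap().put("workflowContext", user);
                return;
            }
            // [2] No request parameter: accept the request if the session bean already exists.
            if (ext.getSessionMap().containsKey("workflowContext")) {
                return;
            }
            // [3] Unknown origin: redirect and skip the rest of the JSF lifecycle.
            try {
                ext.redirect(ext.getRequestContextPath() + "/unauthorized.html");
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            ctx.responseComplete();
        }

        @Override
        public void afterPhase(PhaseEvent event) {
        }

        @Override
        public PhaseId getPhaseId() {
            return PhaseId.RESTORE_VIEW;  // run before the rest of the lifecycle
        }
    }

    The listener is registered in faces-config.xml under <lifecycle><phase-listener>.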

  • Best practice in dreamweaver based web design

    Hi DW Gurus
    I have been learning Dreamweaver at work and at home for a while now and I am very interested in getting some details about best practices.
    I am aware that my designs are still quite simple and don't push Dreamweaver anywhere near its abilities; however, as a novice I am sticking with DW CS4 and trying to develop foundation skills before moving on to greater adventures in web design.  My dream is to eventually leave my employer and start my own freelance design business that I can manage while I am travelling in SE Asia (yeah I know - dream on .....). I am saving for my CS4 master suite license - my current one belongs to my employer.
    In particular I have just started to do some free design work for a community organisation - I am trying to develop a portfolio of sites so that I can show new clients (with money) what they could expect for their investments.
    I am interested in getting some ideas about workflows - design standards - whether designers use the Dreamweaver layouts - whether to code or to use the WYSIWYG.
    Anyone care to share how they plan and go about their design work?
    In my past life I was a SQL developer and a C#.net programmer - in those fields there are standards for code and applications. Are there industry standards that would point to good or better designing?
    Thanks for your time and interest in my topic,
    Respect,
    Doug

    The first thing you need to learn is HTML and CSS; without knowing the underlying languages, DW will only be a hard tool to use.  You need to learn to code to W3C standards.
    FREE HTML & CSS Tutorials  - http://w3schools.com/
    http://reference.sitepoint.com/css
    http://reference.sitepoint.com/html
    Validation tools that you will need to bookmark.
    HTML Validator - http://validator.w3.org
    CSS Validator - http://jigsaw.w3.org/css-validator/
    Hopefully you use Firefox, and if you do, the web developer toolbar is essential:
    http://chrispederick.com/work/web-developer/
    Yes, you can rely somewhat on DW's layout mode (or what features are left of it), but this will only lead to problems when it comes time to troubleshoot any problem pages that do not render correctly across the various browsers.
    A paying client will expect a well designed, fully functional website and you need to have the programming skills to accomplish this for them.
    Not sure how far advanced you are but the Adobe site has quite a few CSS based tutorials - and I would strongly urge you to go through the series listed below - to get up to speed on the correct way to lay out a page.
    Creating your first website (series)
    http://help.adobe.com/en_US/Dreamweaver/10.0_Using/WS42d4a1c0291fbe4e59147ede1232ff9686c-8000.html
    LIST OF CSS TUTORIALS ON ADOBE SITE:
    http://www.adobe.com/devnet/dreamweaver/css.html
    If you use Fireworks as a tool to create your layouts, then the following tutorial may be of benefit:
    TAKING FIREWORKS COMP TO DREAMWEAVER:
    http://www.adobe.com/devnet/dreamweaver/articles/dw_fw_css_pt1.html
    Good luck in your future endeavours 
    Nadia
    Adobe Community Expert : Dreamweaver
    Unique CSS Templates | Tutorials | SEO Articles
    http://www.DreamweaverResources.com
    Web Design & Development
    http://www.perrelink.com.au
    http://twitter.com/nadiap

  • What is best practice for conditional rendering?

    I have a set of radio buttons that conditionally render another set, which in turn conditionally renders a 3rd set.
    I am doing this by using a valueChangeListener on an h:selectOneRadio.
    When I click yes on the 1st set, it renders the 2nd set.
    When I click yes on the 2nd set, it renders the 3rd set.
    When I click no on the 1st set, it correctly removes the 2nd & 3rd sets, and the method also nulls the values.
    When I click yes on the 1st set again, it renders the 2nd set with the old values even though they were nulled, and I can see in the debugger they are still null.
    So there seems to be an issue updating the model, but I don't know what that issue is. I'm sure it just has something to do with the way I am trying this conditional render. Please see below for my first field and my valueChangeListener method.
    Any help with what is a best practice for this scenario would be greatly appreciated.
    <h:panelGrid id="grid20a" columns="1">
        <h:outputText value="Does this consult pertain to a specific "/>
        <h:outputText value="planned or ongoing research project?: *"/>
    </h:panelGrid>
    <h:selectOneRadio id="specificResearch1" value="#{ethicsConsultBacking.bean.specificResearch}"
        layout="lineDirection" required="true"
        valueChangeListener="#{ethicsConsultBacking.processValueChangeSpecificResearch}">
        <f:selectItems value="#{ethicsConsultBacking.yesNoMap}" />
        <a4j:support event="onclick" reRender="form"/>
    </h:selectOneRadio>
    <h:message for="specificResearch1" styleClass="redText" />
    public void processValueChangeSpecificResearch(ValueChangeEvent e) throws AbortProcessingException {
        String newValue = e.getNewValue().toString();
        // reinitialize
        this.setRenderHumanSubjectResearch(false);
        this.setRenderIrbSection(false);
        this.setRenderIrbProtocolNumber(false);
        this.getBean().setHumanSubjectResearch(null);
        this.getBean().setPrimaryIrb(null);
        this.getBean().setIrbStatus(null);
        this.getBean().setIrbProtocolNumber(null);
        // check condition
        if (newValue.equalsIgnoreCase(this.YES)) {
            this.setRenderHumanSubjectResearch(true);
        }
        /*
         * Clearing validation messages.
         * This will get around the issue with having a field
         * on the form that is both required & immediate.
         */
        Iterator it = this.getFacesContext().getMessages();
        while (it.hasNext()) {
            it.next();
            it.remove();
        }
        this.getFacesContext().renderResponse();
    }
    I also tried doing this another way by just using an action method attached to the a4j:support tag, but that introduced a different set of issues, so I'll leave that out for now unless that is the direction someone would like to take my issue in.
    Thanks in advance

    A similar issue is covered and explained here: [http://balusc.blogspot.com/2007/10/populate-child-menus.html]. Not sure if it solves your problem, as you're using ajax4jsf whereas I'm not, but it might give new insights. To the point: you might need to bind the component(s) and use setValue(null) or setSubmittedValue(null).
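    Along the lines of that last suggestion, a sketch of resetting a bound child input so stale values are not redisplayed (the property name is illustrative):

    // In the backing bean; bound from the page via binding="#{ethicsConsultBacking.humanSubjectInput}"
    private javax.faces.component.UIInput humanSubjectInput;

    public javax.faces.component.UIInput getHumanSubjectInput() { return humanSubjectInput; }
    public void setHumanSubjectInput(javax.faces.component.UIInput input) { this.humanSubjectInput = input; }

    private void resetHumanSubjectInput() {
        humanSubjectInput.setSubmittedValue(null);  // drop the stale submitted value
        humanSubjectInput.setValue(null);           // clear the local value
        humanSubjectInput.setLocalValueSet(false);  // fall back to the model value on render
    }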

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation, and core.
    Staging is just to load data from the source systems;
    validation is to validate the data (every city has to have a country, ...);
    core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load the data from the source systems to staging, but only those rows whose CRC checksums differ from the previously downloaded ones, so only changed or new data goes to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is "it depends"... Without kidding: are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre
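    If it is the simple flat-star case, the load might look like this sketch (table and column names are purely illustrative):

    INSERT INTO geo_dim (city_id, city_name, region_name, country_name)
    SELECT ci.city_id,
           ci.city_name,
           r.region_name,
           co.country_name
    FROM   cities    ci
    JOIN   regions   r   ON r.region_id   = ci.region_id
    JOIN   countries co  ON co.country_id = r.country_id;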
