Best-Practice Best-In-Class Examples

I am new to Flash and have found a few articles on best practices that I find useful. I would like to find a repository or site that references outstanding Flash applications, sites, or objects. I want to learn from the best. I am not specifically interested in getting the source code behind the Flash content, although that would obviously be helpful. I would like to see what people are doing out there so that I can try to apply it to what I'd like to do.
Thanks for the help
Rich Rainbolt

Hi,
Warnings may not be harmful to your code, but clean code may give you the best performance. Defining keys is the best practice because it will increase query performance. Maybe you didn't define keys on unused objects, so the unused stuff in the RPD may degrade performance. Anyway, we don't always follow the best practices, but we do what is possible.
mark if helpful/correct...
thanks,
prassu

Similar Messages

  • Best Practice for initialising class variables, should they be null?

    class Person {
        private String name, address;

        public String getName() {
            return name;
        }

        public static void main(String[] args) {
            Person me = new Person();
            // ..........loads of code........
            if (me.getName() == null) {
                // do something
            } else {
                // do something else
            }
        }
    }
    So my question is: what's the best behaviour when declaring variables? In this case should I have initialized the Strings to the empty string?
    I think I need to make a decision because I'm constantly unsure whether the method should return null or the empty string. So I find myself doing this occasionally:
    if (person.getName() == null || person.getName().equalsIgnoreCase(""))
    Thanks
    Chris

    I believe that when you create an object it should be 100% ready for use. That means all private member variables set to a non-null, sensible value.
    You shouldn't force clients to know what's safe and what's not:
    public class Person {
        private String name;
        private Date birthDate;

        public static void main(String[] args) {
            Person p = new Person();
            System.out.println("age: " + p.getAge());
        }

        public Person() {
            // do nothing; name and birthDate stay null
        }

        public String getName() { return name; }
        public void setName(String newName) { name = newName; }
        public Date getBirthDate() { return birthDate; }
        public void setBirthDate(Date newBirthDate) { birthDate = new Date(newBirthDate.getTime()); }

        public int getAge() {
            int age = 0;
            // calculating age with a null birth date will be a problem.
            return age;
        }
    }
    You might argue that it's perfectly reasonable to expect a user to call setters to initialize an object after it's created, but I don't like that idiom. There's no guarantee that it'll be done properly.
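    To make that concrete, here is a minimal sketch (the field handling is just an illustration, not code from this thread) of the approach recommended above: every field gets a sensible non-null value at construction time, so callers never need the null/empty-string check from the original question.

    import java.util.Date;

    public class Person {
        // fields are never null after construction
        private String name;
        private Date birthDate;

        public Person(String name, Date birthDate) {
            // normalise null arguments to safe defaults instead of storing them
            this.name = (name == null) ? "" : name;
            this.birthDate = (birthDate == null) ? new Date() : new Date(birthDate.getTime());
        }

        public String getName() { return name; }

        public boolean hasName() {
            // callers ask a question instead of comparing against null or ""
            return !name.isEmpty();
        }

        public static void main(String[] args) {
            Person me = new Person(null, null);
            System.out.println(me.hasName()); // prints false, no NullPointerException
        }
    }

    With this style the getName()/equalsIgnoreCase("") check from the question disappears, because the class itself guarantees a usable value.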

  • 'Best practice' - separating packages, classes, methods.

    I'm writing a few tools for handling classical propositional logic sentences.
    I currently have a parser (using JavaCC)
    Various reasoning methods (e.g. finding whether sentences are satisfiable or a contradiction, whether two sentences are equivalent, converting to conjunctive/disjunctive normal form etc.)
    I can export/import sentences to/from XML
    I'm now coming to a tidying up stage. I'm using Eclipse 3.3 and within the project have 4 packages: logic, parser, xml, testing.
    The tree data structures are created using the JavaCC parser - because I want to leave the JavaCC generated code alone as much as possible I've created a 'LogicOperations' class containing the methods.
    I also have an 'Interpretation' class that holds a TRUE/FALSE value for each atom, e.g. if storing an example where a sentence is satisfiable (= evaluates to true).
    Most of the methods that take an Interpretation as an argument are held inside the 'LogicOperations' class too currently.
    So in a test class I might have
    LogicParser parser = new LogicParser(new StringReader("a | (b & c) => !d"));
    SimpleNode root = parser.query();
    //displays the above String in its tree form
    root.dump();
    // List atoms in sentence
    System.out.println(Arrays.toString(LogicOperations.identifyAtoms(root)));
    // Show models (interpretations where sentence is TRUE)
    LogicOperations.printInterpretations(LogicOperations.models(root));
    // is the sentence satisfiable?
    System.out.println("Satisfiable: " + LogicOperations.isSatisfiable(root));
    // is the sentence a contradiction?
    System.out.println("Contradiction: " + LogicOperations.isContradiction(root));
    // Is the sentence consistent (a tautology)
    System.out.println("Consistent: " + LogicOperations.isConsistent(root));
    System.out.println("CNF: " + LogicOperations.printCNF(root));
    System.out.println("DNF: " + LogicOperations.printDNF(root));Question is - does separating out methods, classes and packages in this way seem the 'correct' way of doing things? It works, which is nice... but is it sloppy, lazy, bad practice, you name it...
    If any example code is worth seeing let me know, but I'm mainly after advice before I go about tidying the code up.
    Thanks in advance
    Duncan

    eknight wrote:
    My intuition would be to be able to call:
    root.isSatisfiable()
    instead of having to go through a helper method. The functionality could still remain in LogicOperations, but I would encapsulate it all in the SimpleNode class.
    - Just MHO
    The SimpleNode class is one of a number of classes generated by JavaCC. My thinking was that if there are any modifications to the grammar then SimpleNode would be regenerated and the additional methods would need to be added back in, whereas if they're in a separate class it would mean I wouldn't have to worry about it. In the same vein, I have the eval() method separate from SimpleNode, and most other methods rely on it for their output.
    public static Boolean eval(SimpleNode root, Interpretation inter) {
         Boolean val = null;
         SimpleNode left = null;
         SimpleNode right = null;
         switch (root.jjtGetNumChildren()) {
         case 1:
              left = root.jjtGetChild(0);
              right = null;
              break;
         case 2:
              left = root.jjtGetChild(0);
              right = root.jjtGetChild(1);
              break;
         default:
          break;
     }
     switch (root.getID()) {
         case LogicParserTreeConstants.JJTROOT:
              val = eval(left, inter);
              break;
         case LogicParserTreeConstants.JJTVOID:
              val = null;
              break;
         case LogicParserTreeConstants.JJTCOIMP:
              val = (eval(left, inter) == eval(right, inter));
              break;
         case LogicParserTreeConstants.JJTIMP:
              val = (!eval(left, inter) || eval(right, inter));
              break;
         case LogicParserTreeConstants.JJTOR:
              val = (eval(left, inter) || eval(right, inter));
              break;
         case LogicParserTreeConstants.JJTAND:
              val = (eval(left, inter) && eval(right, inter));
              break;
         case LogicParserTreeConstants.JJTNOT:
              val = !eval(left, inter);
              break;
         case LogicParserTreeConstants.JJTATOM:
              val = inter.get(root.getLabel());
              break;
         case LogicParserTreeConstants.JJTCONSTFALSE:
              val = false;
              break;
         case LogicParserTreeConstants.JJTCONSTTRUE:
              val = true;
              break;
         default:
              val = null;
          break;
     }
     return val;
    }
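    One way to get the root.isSatisfiable() style of call without touching the JavaCC-generated classes is a thin wrapper that owns the parse tree and delegates to the static helpers. This is only a sketch of that idea; the Sentence name and the chosen methods are illustrative, not part of the code above.

    // Hypothetical wrapper: the generated SimpleNode stays untouched, but callers get an object-style API.
    public class Sentence {
        private final SimpleNode root;

        public Sentence(SimpleNode root) {
            this.root = root;
        }

        public boolean isSatisfiable() {
            return LogicOperations.isSatisfiable(root);
        }

        public boolean isContradiction() {
            return LogicOperations.isContradiction(root);
        }

        public String toCNF() {
            return LogicOperations.printCNF(root);
        }
    }

    If the grammar changes and SimpleNode is regenerated, only LogicOperations (and this small wrapper) need to keep compiling; nothing has to be re-added to the generated files.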

  • Best practice Eloqua Landing Page examples

    Hi
    I was just wondering if anybody had any examples they have found of Landing Pages created in Eloqua that they think have been done excellently.
    I feel like I have improved mine significantly since I started working at my current company, however I am always on the look out for some new inspiration to open my mind up a bit!
    Thanks for the help
    Scott

    Looks like I found what I wanted. Cheers,
    http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAA/Help/LandingPages/CodeRequirementsForLandingPages.htm
    https://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAA/Help/Emails/CodeRequirementsForHTMLEmailUploads.htm

  • Raid configuration -- the elusive best practice/best value framework

    After much research on RAID, both on and off the Apple site, I am still looking for answers. I searched RAID threads for thoughts from some of the top users in the forum, so I apologize if some of this seems old hat to you old pros.
    I just bought a Mac Pro. I haven't even fired it up yet because I want to get the storage issue (RAID 0 or 1, or 0+1) settled before I transfer my jpg and video files. I have pretty much decided against the expense of the hardware RAID card since its performance seems somewhat less than rock solid (perhaps a myth). Anyway, with a 4-core Pro I assume (correct me if I am wrong) that the CPU hit to do software RAID is reasonable.
    Other Questions:
    1) The single 640 Gb hard drive as retailed. Can I separate "Users and their jpg/video files" to a drive separate from the boot, OSX, and Applications? Is it advisable?
    2) If #1 is recommended, should I then mirror the boot drive or simply Time Machine the backup of the single drive?
    3) Somewhere someone suggested better performance by putting together drives 1 and 3, and drives 2 and 4. I don't recall if that approach was RAID 1 for drives 1 & 3 (mirroring the boot), and striped (RAID 0) for drives 2 and 4. Your thoughts on this -- assuming it's even relevant based on your recommendation for the boot drive in questions #1 and #2?
    4) So at this point we're down to how we should use either the remaining two or three drive bays depending the choice taken with the boot drive. What is your recommendation for optimal value -- e.g. maximizing storage, data protection, and costs. Note: I am willing to purchase external drive(s) for Time Machine backup.
    5) Name your top two or three internal and external drive picks for this arrangement.
    Thanks much for your help on this. Feel free to suggest anything else, or ask for clarification.
    cougar90

    Well, if you're going to be using the system for dual purposes, right off the bat maybe software raid is not for you (especially a 0+1). That's a lot of overhead to be dealing with: video editing plus users accessing files on the same comp.
    If you have users needing to access files, I would keep the "video editing" system and "server" system separate.
    Hatter's idea of a PC/Mac compatible NAS sounds good; very easy and affordable to implement. I only wonder about the speed, if you will be transferring large video files, and have multiple users connected at once (although if hatter recommended it, I'm sure it's fine). If you do go the NAS route though, make sure you have a gigabit network running. If you have any old computer system laying around (pc or mac), you can also configure that very easily as a server. Add hard drives or external enclosures for space. If it's a spare mac, and you have tiger 10.4, the app sharepoints works very well.
    The PVR can be done on the Mac pro; keep it with the "video editing" system.
    In regards to Raid card stability, I believe you were looking at the "Apple Raid Card" for the mac pro. Yes, there have been many problems with it regarding the battery. However this is pertaining to only the Apple raid card, NOT hardware raid in general.
    There are other companies that put out very solid RAID cards. Check the earlier link to www.amug.org; they are a great resource for RAID info. On my setup, I use an ATTO card connected to a D800RAID from Sonnet (mine is the previous model).
    http://www.sonnettech.com/product/fusiondx800raid.html
    http://eshop.macsales.com/item/ATTO/ESASR380000/
    Also, before even attempting raid, get a good grasp on it.
    http://www.acnc.com/040100.html
    http://en.wikipedia.org/wiki/RAID
    Note RAID 5 needs Hardware Raid. Also if hardware raid is too expensive, you can also go with esata enclosures. This sonnet enclosure with esata card for example:
    http://www.sonnettech.com/product/fusiond500p.html
    http://www.sonnettech.com/product/temposatae4p.html
    More affordable, with great performance. Use for Raid 0 scratch, temp files.

  • Best Practices for UI Elements in a Class

    Hi,
    I am quite new to Flash. I am making a class called "Slice" in the file "Slice.as" which is bound to a symbol called "Slice" which is defined in my main .fla file. I have used the visual tools in Adobe Flash CS4 to design the symbol. The "Slice" symbol contains a dynamic text area which I called "m_TextArea". It looks like I can modify the properties of the text area in "Slice.as", but it is not defined anywhere in that file.
    What are the best practices regarding symbol/class binding? Should visual elements be defined in code in the .as file? It seems like it will be much harder to do all the nice animation and stuff if I do this... Is there a way to define the symbol with the class in the .as file, so that all the logical pieces are together?
    It just seems a bit messy to me right now to have a separate .as file which is referring to entities that are defined in the .fla file. I would like all the pieces to be packaged together nicely.
    Thanks!

    I know that I am new to this board (and this whole development environment) and I don't mean to be rude, but it is very unhelpful when someone asks a question and then someone else responds by saying "Why would you want to do that?" Would I really be asking if I didn't have a reason? But to answer your question:
    There is a clear reason why someone might want to bind a class to a symbol: so that he/she can use the Flash IDE's nice graphical tools to animate the class's UI. The same reasoning is behind codebehind files in other programming paradigms such as Microsoft's WPF or Apple's iPhone development environment. They allow graphical IDEs to modify the visuals while modularizing the logic out into a codebehind file. It is clearly something that Adobe has in mind also, since Flash CS4 presents you with the option "Bind class to library symbol" right there in the "Create Class" dialog, and when you do it, the class and symbol interact seamlessly.
    The thing that I do not like about the Adobe way of doing it, is that the class definition is in a nice file all by itself, while the symbol definition is packed into the .fla file and cannot be readily reused in another project. In Microsoft's WPF, the class definition is in a .cs file, while the equivalent of the symbol is in a file with the same name as the .cs file, with the extension .xaml. So if you want to reuse the object, you just copy the .cs file and the .xaml file over to a new project and have at it. It is a nice, simple solution.
    Is there some way to export the symbol definition to a file outside of the .fla file? Or does the symbol really need to remain in the .fla file?
    Thanks.

  • Connect JavaFx(Applets) to J2EE - best practice & browser session

    Hi there,
    I’m new to JavaFX and Applet programming but highly interested.
    What I don’t get at the moment is how you connect the locally executed code of the applet to your system running on a server (J2EE).
    Of course there seem to be different ways but I would like to avoid using RMI or things like that because of the problem with firewalls and proxies.
    So I would like to prefer using HTTP(s) connection.
    And here my questions:
    1.) Is there any best practice around? For example: using HTTP because of the problems I mentioned above. Sample code for offering java method via HTTP?
    2.) Is there a possibility to use the browser session? My J2EE applications are normally secured. If the user opens pages he has to log in first and then has a valid session.
    Can I use the applet in one of those pages and use the browser environment to connect? I don’t want the user to input his credentials on every applet I provide. I would like to use the existing session.
    Thanks in advance
    Tom

    1) Yes. If you look at least at the numerous JavaFX official samples, you will find a number of them using HttpRequest to get data from various servers (Flickr, Amazon, Yahoo!, etc.). Actually, using HTTP largely insulates you from the kind of server: it doesn't matter if it runs servlets or other Java EE stuff, PHP, Python or something else. The applet only knows the HTTP API (GET and POST methods, perhaps some other REST stuff).
    2) It has been too long since I last did Java EE (it was still J2EE...), so I can't help much; perhaps somebody will shed more light on the topic. If the Web page can use JavaScript to access this browser session, it can provide this information to the JavaFX applet (JS <-> JavaFX communication works as well as with Java applets).
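    As a rough illustration of point 1 (and of reusing the browser session from point 2), here is a plain-Java sketch. The endpoint URL and the idea of handing the JSESSIONID value from page JavaScript to the applet are assumptions made for the example, not something taken from this thread.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ServerCall {
        // sessionId would be obtained from the hosting page, e.g. via JavaScript -> applet communication
        public static String fetch(String urlString, String sessionId) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
            conn.setRequestMethod("GET");
            // reuse the browser's authenticated session instead of asking for credentials again
            conn.setRequestProperty("Cookie", "JSESSIONID=" + sessionId);

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line).append('\n');
                }
            }
            return body.toString();
        }

        public static void main(String[] args) throws Exception {
            // hypothetical endpoint; replace with your own servlet or REST resource
            System.out.println(fetch("https://example.com/app/data", "abc123"));
        }
    }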

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    Application Team wanted to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know if it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    Application Team wanted to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know if it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683

  • Logical architecture+best practice

    Hi,
    what do these mean to you regarding Oracle Applications:
    1-logical architecture
    2-best practice
    Regards.

    1-logical architecture -> I assume the technical architecture for deployment of Oracle Applications (Single node, multi node, HA, DMZ configuration etc)
    2-best practice -> Best practices in each function within maintaining and implementing Oracle Applications, like best practices for Upgrades, Cloning, Patching, etc.
    Sam
    http://www.appsdbablog.com

  • Best practice regarding package-private or public classes

    Hello,
    If I was, for example, developing a library that client code would use and rely on, then I can see how I would design the library as a "module" contained in its own package,
    and I would certainly want to think carefully about what classes to expose to outside packages (using "public" as the class access modifier), as such classes would represent the
    exposed API. Any classes that are not part of the API would be made package-private (no access modifier). The package in which my library resides would thereby create an
    additional layer of encapsulation.
    However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've
    written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    Thanks in advance!

    Jujubi wrote:
    ...However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    I've always gone by this rule of thumb: Do I want others to use my methods, and is it appropriate for them to do so? Are my methods "pure" and free of package-specific coding? Can I guarantee that everything will be initialized correctly if the package is included in other projects?
    Basically--If I can be sure that the code will do what it is supposed to do and I've not "corrupted" the obvious meaning of the method, then I usually make it public--otherwise, the outside world, other packages, does not need to see it.
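    For what it's worth, here is a tiny sketch of the kind of split being described; the package and class names are made up for the example. Only the class that forms the intended API is public, and the helper stays package-private so other packages never see it.

    // File: mylib/TextFormatter.java -- the exposed API
    package mylib;

    public class TextFormatter {
        public String headline(String s) {
            return Capitalizer.capitalize(s) + "!";
        }
    }

    // File: mylib/Capitalizer.java -- implementation detail, package-private
    package mylib;

    class Capitalizer {
        static String capitalize(String s) {
            if (s == null || s.isEmpty()) {
                return "";
            }
            return Character.toUpperCase(s.charAt(0)) + s.substring(1);
        }
    }

    For a small application that lives in a single package the distinction changes nothing at runtime, but keeping the same habit costs nothing and makes the code easier to lift into a shared library later.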

  • Best practice for encoding and decoding DUT/UUT registers? Class? Cluster? Strings? Bitbanging?

    I am architecting a LabVIEW system to characterize silicon devices. I am trying to decide the best way to enqueue, encode, and decode device commands executed by my test system.
    For example, an ADC or DAC device might come in both I2C and SPI flavors (same part, different interface) and have a large register map which can be represented as register names or the actual binary value of its address.
    I would like my data structure to
    *) be protocol agnostic
    *) have the flexibility to program using either the mnemonics or hard-coded addresses
    *) be agnostic to the hardware which executes the command
    *) allow enqueuing multiple commands in a row
    I am thinking a detailed class is my best bet, but are there examples or best practices already established?

    I agree on the detailed class inherited from a general DUT class. Especially if you want to mix interfaces you need to keep those as far away from your top-level VI as possible.
    As to the 4th point, I'd implement command VIs as enqueue operations and have an in-class command queue (or possibly just an array of commands).
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV
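    Since LabVIEW classes are graphical, here is the same structure sketched in Java purely to show the idea being suggested above (an abstract DUT class, protocol-specific subclasses, and an in-class command queue); all names are illustrative, not from the original post.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Abstract DUT: register-level commands are protocol agnostic.
    abstract class Dut {
        private final Queue<int[]> pending = new ArrayDeque<>(); // {address, value} pairs

        // enqueue by register address (a mnemonic lookup could map register names to addresses here)
        public void enqueueWrite(int address, int value) {
            pending.add(new int[] {address, value});
        }

        // flush the queue through whatever bus the concrete class implements
        public void execute() {
            while (!pending.isEmpty()) {
                int[] cmd = pending.remove();
                writeRegister(cmd[0], cmd[1]);
            }
        }

        protected abstract void writeRegister(int address, int value);
    }

    class I2cDut extends Dut {
        @Override
        protected void writeRegister(int address, int value) {
            System.out.printf("I2C write: reg 0x%02X = 0x%02X%n", address, value);
        }
    }

    class SpiDut extends Dut {
        @Override
        protected void writeRegister(int address, int value) {
            System.out.printf("SPI write: reg 0x%02X = 0x%02X%n", address, value);
        }
    }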

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Looking for best practice sending args to classes

    Unfortunately, I'm stuck in a company that only employs MS developers, so I'm kind of stranded with no buddies to mentor my Java skills... so thanks in advance for any help, I appreciate it.
    Anyway, I think that I've been doing things the hard way for a while and I'm looking for some sort of best practice to start using. I'm currently working on a GUI that will take all the selections, via combo boxes, text fields, etc., and send them to a class (a web bot, actually) and run it.
    I'm starting to run into the problem of having too many arguments to send to my Bot class. What's a good way that I should be doing this? I figure I can do it a couple of ways, right?
    new Bot(arg1, arg2, ......... argX);
    Bot bot = new Bot();
    bot.setArg1("something");
    bot.setArg2("something");
    etc..
    bot.run();
    Or, is there a better way? Can I package all the args in a collection somehow? That way I only have 1 argument to send... I don't know... Thanks for the help.

    Create a class "Data" (for example) that encapsulates all the data you want to pass to the Bot class. Then create an instance of the Data class and set all the relevant fields (i.e. setArg1 etc). Now you pass this Data instance to your Bot class. This way you only have to pass one Object around and you've encapsulated all your data.
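    A minimal sketch of that idea (the BotSettings name and fields are invented for the example): all the GUI selections travel as one object, and the Bot constructor takes just that one argument.

    // Parameter object holding everything the bot needs; add fields as the GUI grows.
    class BotSettings {
        private String targetUrl;
        private int retryCount;

        public void setTargetUrl(String targetUrl) { this.targetUrl = targetUrl; }
        public void setRetryCount(int retryCount) { this.retryCount = retryCount; }
        public String getTargetUrl() { return targetUrl; }
        public int getRetryCount() { return retryCount; }
    }

    class Bot {
        private final BotSettings settings;

        public Bot(BotSettings settings) { // one argument instead of many
            this.settings = settings;
        }

        public void run() {
            System.out.println("Running against " + settings.getTargetUrl()
                    + " with " + settings.getRetryCount() + " retries");
        }
    }

    public class BotDemo {
        public static void main(String[] args) {
            BotSettings settings = new BotSettings();
            settings.setTargetUrl("http://example.com");
            settings.setRetryCount(3);
            Bot bot = new Bot(settings);
            bot.run();
        }
    }

    If the number of settings keeps growing, the same object can be filled by a builder instead of setters, but the point is the same: the Bot constructor signature stops changing every time the GUI gains a control.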

  • Best practice to handle the class definitions among storage enabled nodes

    We have a common set of cache servers that are shared among various applications. A common problem that we face upon deployment is missing class definitions newly introduced by one of the application nodes. Any practical approach / best practices to address this problem?
    Edited by: Mahesh Kamath on Feb 3, 2010 10:17 PM

    Is it the cache servers themselves or your application servers that are having problems with loading classes?
    In order to dynamically add classes (in our case scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am, however, not so sure how/if this would work for the cache servers themselves, if that is your problem.
    Anyhow a simplistic cache class loader may look something like this:
    import com.tangosol.net.CacheFactory;

    /**
     * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
     * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
     * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
     * cache. For example the first three parts of the binary name (com.scania.xxxx in the example) could be the
     * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
     * a specific application are stored in the same partition and this way can be updated atomically by a processor or
     * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
     * application will only involve one cache node!
     */
    public class CacheClassLoader extends ClassLoader {
        public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";
        private final String classCacheName;

        public CacheClassLoader() {
            this(DEFAULT_CLASS_CACHE_NAME);
        }

        public CacheClassLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public CacheClassLoader(ClassLoader parent, String classCacheName) {
            super(parent);
            this.classCacheName = classCacheName;
        }

        @Override
        public Class<?> loadClass(String className) throws ClassNotFoundException {
            byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
            return defineClass(className, bytes, 0, bytes.length);
        }
    }
    And a simple "loader" that puts the classes in a JAR file into the cache may look like this:
    import com.tangosol.net.CacheFactory;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    /**
     * This class loads classes from a JAR file into a code cache.
     */
    public class JarToCacheLoader {
        private final String classCacheName;

        public JarToCacheLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public JarToCacheLoader() {
            this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
        }

        public void loadClassFiles(String jarFileName) throws IOException {
            JarFile jarFile = new JarFile(jarFileName);
            System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
            for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
                final JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                    final InputStream inputStream = jarFile.getInputStream(entry);
                    final long size = entry.getSize();
                    int totalRead = 0;
                    int read = 0;
                    byte[] bytes = new byte[(int) size];
                    do {
                        read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                        totalRead += read;
                    } while (read > 0);
                    if (totalRead != size) {
                        System.out.println(entry.getName() + " failed to load completely, " + size + ", " + read);
                    } else {
                        // store the bytes under the binary class name (com.zzz.Xxx) that CacheClassLoader looks up
                        String binaryName = entry.getName().replace('/', '.').substring(0, entry.getName().length() - ".class".length());
                        System.out.println(binaryName);
                        CacheFactory.getCache(classCacheName).put(binaryName, bytes);
                    }
                    inputStream.close();
                }
            }
        }

        public static void main(String[] args) {
            JarToCacheLoader loader = new JarToCacheLoader();
            for (String jarFileName : args) {
                try {
                    loader.loadClassFiles(jarFileName);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    Standard disclaimer - this is prototype code, use at your own risk :-)
    /Magnus
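    As a rough usage sketch (class names reused from the snippets above, everything else is assumed for the example): one node publishes a JAR into the cache, and another node loads a class from it.

    // Hypothetical usage of the two snippets above; the class and JAR names are just examples.
    public class CacheClassLoaderDemo {
        public static void main(String[] args) throws Exception {
            // publishing side: push all classes from a JAR into the "ClassCache" cache
            new JarToCacheLoader().loadClassFiles("my-app-classes.jar");

            // consuming side: resolve a class from the cache and instantiate it reflectively
            ClassLoader loader = new CacheClassLoader();
            Class<?> clazz = loader.loadClass("com.example.SomeEntryProcessor");
            Object instance = clazz.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
    }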

  • Resource class best practice

    I have created a reserved context with 20% min and max equal to min in every resource, including sticky.
    I also have the default resource class.
    I have also created another resource class with 20% sticky but left everything else at the default 0-100%.
    Our network traffic doesn't carry a heavy load on the new load balancer... but what is a good rule of thumb?
    Most of the traffic is HTTP, and at this point we will create about 2 contexts after the Admin context.

    Hello!
    This is a very pertinent question; however, as with many things in life, there is no one-size-fits-all here.
    We basically recommend, as best practice, to allocate for each specific context only the estimated needed resources. These values should always come from a previous study of the network patterns/load.
    To accommodate for growth and scalability it is strongly advised to initially keep as many resources reserved as possible and allocate the unused resources as needed. To accomplish this goal, you should create a reserved resource class, as you did already, with a guarantee of 20 to 40 percent of all ACE resources and configure a virtual context solely with the purpose of ensuring that these resources are reserved.
    As you might already know, ACE protects resources in use; this means that when decreasing a context's resources, the resources must be unused before they can be reused by another context. Although it is possible to decrease the resource allocations in real time, it typically requires additional overhead to clear any used resources before reducing them.
    Based on the traffic patterns, number of connections, throughput, concurrent SSL connections, etc., for each of the sites you will be deploying, you will have a better idea of the estimated needed resources and can then assign them to each of the contexts. Thus this is something that greatly depends on the customer's network environment.
    Hope this helps to clarify your doubts.
