Efficient way of drawing?

I'm looking for the most efficient way to draw; my application requires drawing images every 50 ms or so.
What I'm doing is using a class which uses PixelGrabber to fill a pixel array; this is kept around so it doesn't have to be loaded every single time.
For example:

public ImageClass image1 = new ImageClass("myimage");

Then I have a class which has a set pixel array size, for example:

public ScreenDrawing sd1 = new ScreenDrawing(400, 600, this);

This uses the following code:

public ScreenDrawing(int w, int h, Component component) {
    drawingAreaHeight = h;
    drawingAreaWidth = w;
    drawingAreaSize = new int[w * h]; // pixel array
    drawingAreaImage = component.createImage(this);
    component.prepareImage(drawingAreaImage, this);
}

Basically, when I want to draw the whole screen image I would do this:
sd1.drawPixels(x, graphics, y);

This would draw the pixel array to the screen at x, y.
To get the pixels set on the ScreenDrawing class I would do:

image1.setPixels(x, y);

After this is set, it paints it to the screen.
public void drawGraphics(int xOffset, Graphics g, int yOffset) {
    setImageConsumer();
    g.drawImage(drawingAreaImage, xOffset, yOffset, this);
}

public synchronized void setImageConsumer() {
    if (imageConsumer != null) {
        imageConsumer.setPixels(0, 0, drawingAreaHeight, drawingAreaWidth, colorModel, drawingAreaSize, 0, drawingAreaHeight);
        imageConsumer.imageComplete(2); // 2 == ImageConsumer.SINGLEFRAMEDONE
    }
}

This is basically everything I use.
But I have a problem: when drawing an image, even one as small as 96 x 96 pixels, it just flickers instead of drawing cleanly, because of how fast it has to be redrawn to keep up with the image placed on top of it.
If anyone knows a more efficient way to draw, please help.
Thanks.

If you are painting by using repaint(), then for an AWT component you need to override

public void paint(Graphics g) {
    // call methods to do painting stuff
}

and override

public void update(Graphics g) {
    paint(g);
}

so that update calls paint. What's happening is that when you call repaint(), the AWT painting mechanism calls update(...). That method in turn erases the background before calling paint(...). The flickering is from seeing the erasing.
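
If overriding update() alone doesn't remove the flicker, the usual next step is double buffering: render everything into an off-screen image first, then draw that image to the screen in one call. Here is a minimal sketch of that idea for an AWT component; the field name and the drawing done inside the buffer are illustrative, not taken from the code above.

private Image backBuffer;

public void update(Graphics g) {
    paint(g); // skip the default background erase
}

public void paint(Graphics g) {
    if (backBuffer == null
            || backBuffer.getWidth(null) != getWidth()
            || backBuffer.getHeight(null) != getHeight()) {
        backBuffer = createImage(getWidth(), getHeight());
    }
    Graphics bg = backBuffer.getGraphics();
    try {
        // draw the background and all images into the off-screen buffer
        bg.setColor(getBackground());
        bg.fillRect(0, 0, getWidth(), getHeight());
        // e.g. sd1.drawGraphics(x, bg, y); per-frame image drawing goes here
    } finally {
        bg.dispose();
    }
    g.drawImage(backBuffer, 0, 0, null); // one blit to the screen, so nothing half-drawn is ever visible
}

(If this were a Swing component, paintComponent() on a JComponent is already double-buffered by default, so this is only needed for plain AWT.)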

Similar Messages

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is, what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one):
    * In each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later).
    * As John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    * Then, in order to separate out the image files that ARE imported to LR from those which either never were or have been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    * To do this, highlight all the images in the whole catalog, then use File / "Export as Catalog", selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently will be left behind by this operation.
    * Your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • EFFICIENT way of escalating an open task

    I need to escalate TASKS that are still open after 31 days.
    I figure i need 2 workflows to do this.
    As i see it right now:
    1st WF. Waits for 31 days after the task has been created. On the 31st day it changes a read only field called "escalate" to YES.
    2nd WF checks for changes in tasks where: if (Status=OPEN AND escalate<>pre(escalate)) is true, then send an escalate email or task.
    Is there a more efficient way of doing this?
    TIA
    Paul

    Is there a reason you want two workflows? Why not put an e-mail action after the Wait on the same workflow? If you check the "Reevaluate Rule Conditions After Wait" checkbox on the Wait action, the workflow rule will be re-evaluated after your 31 days... so it would only send the e-mail message if the Task is still open (assuming your workflow condition is set to look at Status = Open).
    Chris

  • SQL query with multiple tables - what is the most efficient way?

    Hello, I am learning PL/SQL. I have a simple procedure where I need to find the number of employees and departments per location, based on user input of location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT  COUNT (employee_id)             AS ECount
    ,       COUNT (DISTINCT department_id)  AS DCount
    FROM    employees
    WHERE   department_id IN ( SELECT  department_id
                               FROM    departments
                               WHERE   location_id = :userInput
                             );

    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used.
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.

  • Advice needed: Efficient way to scan users

    Hi all,
    I wish to know the efficient way to scan users in Lighthouse. I need to write a workflow that checkout all the users and perform some updates. This workflow should run everyday at midnight.
    I have created a scanner myself. Basically what it does is:
    1. call the FormUtils.getUsers method to return all users' names into a variable.
    2. loop through this list and call a subprocess workflow to process every user. This subprocess checks out a user view, performs updates, and then checks the view back in.
    This solution is not efficient at all, since it causes my JVM to run out of memory (1 GB RAM assigned to the JVM, with about 78,000 users).
    Any advice is highly appreciated. Thank you.
    Steve

    Ok... I now understand what you are doing and why you need this.
    A long, long, long time ago (back in the 3.x days) the deferred task scanner was really bad. Its nightly scan would scan ALL users each time. This is fine when your client has 4k users... but not when it has 140k users.
    Additionally, the "set deferred task" function had problems with two tasks with the same name (i.e. "disable resource"), since it used the name as the XML object name, which cannot be duplicated.
    soooo, to beat this I rewrote the deferred task handler to allow me to do all of this. Part of this was to add a searchable field called 'nextTaskDate' on the user object. After each workflow this date is updated, so it is always correctly populated with the user's next deferred task date.
    Each night the scanner runs and queries all users with a nextTaskDate of today. This then gives us a result set that we can iterate over, instead of having to list each user and search for tasks. It's a billion times faster.
    Your best bet is to store the task date in milliseconds and make your query "all users with next task date BEFORE now"... this way, if the server is hosed, you can execute tasks you may have missed (a small sketch of that cutoff check follows after this reply).
    We have an entire re-usable implementation framework that we have patented (of which this code is a part) that answers most of these types of issues you are bringing up. It makes these implementations much, much simpler, faster, more scalable and maintainable.
    this make sense?
    Dana Reed
    AegisUSA
    Denver, CO 80211
    [email protected]
    773.412.3782
    "Now hiring best-in-class IdM architects. Inquire via emai"

  • A more efficient way to assure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a sub query in a Join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0')

    This is a string where I'm finding A-Z and a-z characters and '/' and '-' characters in all 6 positions, plus there are values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is: is there a more efficient way to screen out only the values in this field that are 6-character numbers, or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than my original WHERE clause.
    To address the discussion about design that's popped up from this question, I can say a few things that should clear up, at least, my situation.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.

  • What is the efficient way of working with tree information in a table?

    hi all,
    I have to design a database to store, access, add, or delete tree nodes. What is the efficient way of accomplishing it?
    Let's assume the information to be stored in the table is parent, child and type (optional). The queries should be very generic (they should work with any database).
    Does anybody have any suggestions? I have to work with large amounts of data.
    A quick response is highly appreciated.
    thanks in advance,
    rahul

    Did you check out this link?
    http://www.intelligententerprise.com/001020/celko1_1.shtml
    Joe Celko has given some really interesting ways to implement a tree in an RDBMS.
    Best wishes
    Anubrata

  • Efficient way for Continous Creation of XML Content?

    Hi
    I have a requirement to create XML content from the data extracted from a UDP packet.
    As each packet arrives, I have to generate appropriate XML content from it and keep everything in the same single XML file.
    Problem:
    Since an XML file is not a flat file, I can't just append the new content at the end. If I write to the XML file directly, then each and every time a packet arrives I have to parse the file and insert the new content under the appropriate parent. I think this is not the most efficient way.
    Parsing the file every time may cost CPU time, and as the file grows in size, memory will also become a constraint.
    Other options i could think of
    * Hold the XML Document Object in memory until a certain event like timeout for receiving packet and write into the xml file at oneshot.
    * Serialize the objects containing the extracted packet content to a temp file and after some event, parse and create the xml file at oneshot
    Which is the more efficient way, or is there a design pattern to handle this situation? I am worried about the memory footprint and performance at peak loads.
    I am planning to use JDOM / SAX Builder for xml creation.
    Thank you...

    Lots of "maybe" and "I think" and "I'm worried about" in that question, and no "I have found" or "it is the case that". In short, you're worrying too much about problems you don't even know you have. XML is a verbose format anyway; efficiency isn't paramount when dealing with it. Even modestly powered machines can handle quite a lot of disk I/O these days without noticeable impact. The most efficient thing you can do here is write something that works, and see if you can live with the performance.
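
    For reference, here is a minimal sketch of the first option the poster mentions (hold the Document in memory and write it out in one shot on a timeout or shutdown), assuming JDOM 2.x; the class, element and file names are made up for illustration.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import org.jdom2.Document;
    import org.jdom2.Element;
    import org.jdom2.output.Format;
    import org.jdom2.output.XMLOutputter;

    public class PacketLogSketch {
        private final Document doc = new Document(new Element("packets"));

        // called for each arriving packet: a cheap in-memory append, no re-parsing
        public void addPacket(String sourceAddress, String payload) {
            Element packet = new Element("packet");
            packet.setAttribute("source", sourceAddress);
            packet.setText(payload);
            doc.getRootElement().addContent(packet);
        }

        // called on a timeout or shutdown event: serialize everything in one pass
        public void flush(String fileName) throws IOException {
            FileOutputStream out = new FileOutputStream(fileName);
            try {
                new XMLOutputter(Format.getPrettyFormat()).output(doc, out);
            } finally {
                out.close();
            }
        }

        public static void main(String[] args) throws IOException {
            PacketLogSketch log = new PacketLogSketch();
            log.addPacket("10.0.0.1", "example payload");
            log.flush("packets.xml");
        }
    }

    The obvious trade-off, as the poster notes, is that the whole document lives in memory until the flush, so it suits bounded batches (for example, flush every N packets or N seconds) rather than an unbounded stream.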

  • Most efficient way to use thumbnails of multiple sizes

    When a user submits an image on my website, the upload script currently creates thumbnails in three different sizes (120px, 90px, and 20px). Different thumbnail sizes are used in different areas of the site.
    Is there a more storage-efficient way to display high-quality thumbnails in different sizes, without requiring a separate thumbnail file for each size used?
    I cannot rely on browsers to resize images as the quality is often very undesirable.

    AngryCloud wrote:
    > I may not have been clear in my last post...
    >
    > When an image is viewed normally on a page, it is saved to the client's
    > computer so that it will load instantly the next time the image is called for.
    >
    > I do not want visitors to have to wait for the same images they have already
    > seen to re-download and resample. A file of each resampled image should be
    > saved to the client's computer to avoid this.
    >
    > Is it possible to save resampled images on a page to the client's computer?

    It sounds like your page needs to check whether the cached version exists before creating a new image, otherwise it's always going to create a new version of the image and send it to the browser... but I don't know how this is possible.
    Dooza
    Posting Guidelines
    http://www.adobe.com/support/forums/guidelines.html
    How To Ask Smart Questions
    http://www.catb.org/esr/faqs/smart-questions.html
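
    Since the thread never reached a concrete answer, here is one way the "check whether the cached version exists before creating a new image" idea could look. This is a Java sketch purely for illustration (the site in the thread may well be in another server language), and the file naming and sizes are made up: each thumbnail size is generated on first request and then reused from disk.

    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ThumbnailCacheSketch {
        public static File getThumbnail(File original, int width) throws IOException {
            File thumb = new File(original.getParent(), "thumb_" + width + "_" + original.getName());
            if (thumb.exists()) {
                return thumb; // already generated earlier, reuse it
            }
            BufferedImage src = ImageIO.read(original);
            int height = Math.max(1, src.getHeight() * width / src.getWidth()); // keep aspect ratio
            BufferedImage dst = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR); // better quality than default scaling
            g.drawImage(src, 0, 0, width, height, null);
            g.dispose();
            ImageIO.write(dst, "jpg", thumb);
            return thumb;
        }

        public static void main(String[] args) throws IOException {
            // hypothetical usage: generate (or reuse) a 120px-wide thumbnail
            System.out.println(getThumbnail(new File("photo.jpg"), 120));
        }
    }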

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers to an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the xls and a destination for the csv, and set the destination csv parameter "delay validation" to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.

  • What is the efficient way of insert some bytes into a file?

    Hello, everyone:
    If I want to insert some bytes into a file (for example, insert the bytes before all the original content of the file, or append the bytes to the file), and the original file is very big, I am wondering what the efficient way is. Where can I get some sample code?
    regards,
    George

    Thanks, DrClap.
    I have tried your method and you are correct. I have written a simple program which can be used to insert "Hello World " at the start of a file ("c:\\temp\\input.txt"), and I have verified that it works. Please help me check whether it is correct and whether there is a more efficient way.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class TestDriver {
        public static void main(String[] args) {
            byte[] back_buffer = new byte[1024];
            byte[] write_buffer = new byte[1024];
            // the write buffer starts out holding "Hello World "; from then on it always
            // holds the block read on the previous pass, so the output stays one block behind
            System.arraycopy("Hello World ".getBytes(), 0, write_buffer, 0, "Hello World ".getBytes().length);
            int write_buffer_length = "Hello World ".getBytes().length;
            int count = 0;
            FileInputStream fis = null;
            FileOutputStream fos = null;
            try {
                fis = new FileInputStream(new File("c:\\temp\\input.txt"));
                fos = new FileOutputStream(new File("c:\\temp\\output.txt"));
                while ((count = fis.read(back_buffer)) >= 0) {
                    fos.write(write_buffer, 0, write_buffer_length);
                    System.arraycopy(back_buffer, 0, write_buffer, 0, count);
                    write_buffer_length = count;
                }
                // write the last block
                fos.write(write_buffer, 0, write_buffer_length);
                fis.close();
                fos.close();
                // copy the content back into the original file
                fis = new FileInputStream(new File("c:\\temp\\output.txt"));
                fos = new FileOutputStream(new File("c:\\temp\\input.txt"));
                while ((count = fis.read(back_buffer)) >= 0) {
                    fos.write(back_buffer, 0, count);
                }
                fis.close();
                fos.close();
                // remove the temporary file
                File f = new File("c:\\temp\\output.txt");
                f.delete();
            } catch (IOException e) { // FileNotFoundException is an IOException, so one catch covers both
                e.printStackTrace();
                if (fis != null) {
                    try { fis.close(); } catch (IOException e1) { e1.printStackTrace(); }
                }
                if (fos != null) {
                    try { fos.close(); } catch (IOException e2) { e2.printStackTrace(); }
                }
            }
        }
    }

    regards,
    George
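
    Since the thread asks whether there is a more efficient way: the same temp-file idea can be written without the one-block-behind buffering, by writing the new bytes first and then streaming the original file through a buffered copy. A sketch only (Java 7+ for try-with-resources, same hard-coded paths as above, not taken from the thread):

    import java.io.*;

    public class InsertHeaderSketch {
        public static void main(String[] args) throws IOException {
            File input = new File("c:\\temp\\input.txt");
            File output = new File("c:\\temp\\output.txt");
            try (InputStream in = new BufferedInputStream(new FileInputStream(input));
                 OutputStream out = new BufferedOutputStream(new FileOutputStream(output))) {
                out.write("Hello World ".getBytes()); // the inserted bytes go in first
                byte[] buf = new byte[8192];
                int count;
                while ((count = in.read(buf)) >= 0) {
                    out.write(buf, 0, count);         // then stream the original content through
                }
            }
            // replace the original file with the combined one
            if (!input.delete() || !output.renameTo(input)) {
                throw new IOException("could not replace " + input);
            }
        }
    }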

  • Most efficient way to loop through many similarly named fields?

    Hi,
    I have a 5 page document with each page containing approx. 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in Javascript so I can do some manipulations in those fields on what the user entered.
    In FormCalc I've previously used the 'foreach' function similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    however, that gets really lengthy, especially when dealing with subsequent pages where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access from the first page all of the fields on subsequent pages.  Also, I need to do this in Javascript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    If this is an LCD form then you're better off asking over at the LCD forum.

  • Most efficient way to loop through similarly named fields?

    Hi,
    I have a 5 page document with each page containing approx. 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in Javascript so I can do some manipulations in those fields on what the user entered.
    In FormCalc I've previously used the 'foreach' function similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    however, that gets really lengthy, especially when dealing with subsequent pages where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access from the first page all of the fields on subsequent pages.  Also, I need to do this in Javascript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    I have solved my issue. It took some battling in JavaScript using xfa.resolveNode.
    I have 5 pages, each consisting of a series of 60 fields named Viol1Num, Viol2Num, Viol3Num .... Viol60Num.
    When this JavaScript runs, if it detects a blank field it inserts a '3' into it.
    Below is the JavaScript which runs for the second page of this document.
    var LoopCounter = 1;
    while (LoopCounter < 61) {
        if ((LoopCounter != 21) && (LoopCounter != 22)) {
            if ((xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == null) || (xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == "")) {
                xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue = 3;
            }
        }
        LoopCounter = LoopCounter + 1;
    }

  • Most efficient way to insert into a story with many floating images?

    I have a document with many floating images. They must all be floating because otherwise I do not get the Caption numbering right and it is impossible (because some images take an entire page) to use only anchored images.
    Now, I have to insert a large part into the middle / at the beginning of a section. If I do that the text will flow, but the images remain in place. This is extremely slow working because of all the movements I have to do on all the images. What I would like to do is to let the pages after the page where I am entering remain the same, and when I insert text and images these should just move up one page at a time. Then, at the end, I can do the fine tuning of attaching both parts again. Is there a way to do that in an efficient way?
    What I now often do is move all the floating images out of the pages, then insert the new stuff, move the images back, then make sure all the references are fine (e.g. if they refer to an image on another page, the style becomes paragraph number + page number). For some sections this is a hideous amount of work (many, many images).
    Is there a smarter way. I have been thinking about splitting and later rejoining a section. If I add pages to one section, the following sections are not damaged, after all.
    What is the best way to do this?
    Thanks in advance.

  • Most efficient way to track 3 states?

    In a program I am writing, I have an object with three states, which it progresses through during the course of the program. Since the program (potentially) uses a lot of these objects, I want to keep their size down. Here's the issue: how to store what state the object is in? A boolean won't work, of course, since there are three states. So here's what I came up with:
    Either a Boolean (initial state is null, then false, then true)
    or a byte (any three values would work)
    I have a feeling that the byte is more efficient, but want to make sure. Also, if I'm missing an easy, efficient way to do this, tell me.

    Nearly there:
    <code>
    public abstract class State {
        abstract void handleState();
        abstract int getState();
    }

    public class State1 extends State {
        void handleState() {
            // do state 1 specific stuff
        }
        int getState() {
            return 1;
        }
    }

    public class State2 extends State {
        void handleState() {
            // do state 2 specific stuff
        }
        int getState() {
            return 2;
        }
    }

    public class State3 extends State {
        void handleState() {
            // do state 3 specific stuff
        }
        int getState() {
            return 3;
        }
    }

    public class StateController {
        private State currentState = new State1();

        public int getState() {
            return currentState.getState();
        }
        public void handleState() {
            currentState.handleState();
        }
    }
    </code>
    might work ;)
    As to booleans... no, they cannot be null actually... a boolean is represented by a bit which is 0 or 1 (or, if you prefer, true or false).
    Boolean b = null;
    is not a null boolean; it is an object reference that happens to be null.
    If you need an Object that you want to be a boolean then it is fine to use the Boolean class :)
    The reason "boolean b; b == null" doesn't compile is that you cannot compare different types.
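
    For completeness (and only if the code can assume Java 5 or later, which the thread doesn't say): an enum gives three named states with very readable code, at the cost of one reference field per object, so a byte field is still smaller if memory is truly the constraint. A generic sketch, not taken from the thread:

    public class Tracked {
        enum Stage { INITIAL, MIDDLE, FINAL }   // the three states, as named constants

        private Stage stage = Stage.INITIAL;    // one reference field per object

        void advance() {
            switch (stage) {
                case INITIAL: stage = Stage.MIDDLE; break;
                case MIDDLE:  stage = Stage.FINAL;  break;
                default:      break;             // already FINAL
            }
        }

        public static void main(String[] args) {
            Tracked t = new Tracked();
            t.advance();
            System.out.println(t.stage);         // prints MIDDLE
        }
    }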
