Looking for a 2D Polygon Bevel Algorithm

I am looking for a graphics algorithm to give two-dimensional polygons a 3D beveled-edge look. The polygon should look as though it is lit from the top-left. This is trivial for rectangular figures, but obviously much more complex for general polygons. The algorithm needs to support any type of polygon, including those with splines or other curved edges, as well as concave figures. Making matters even more complicated, it must also handle texture-filled polygons.
A Java class that provides this functionality would be perfect. Short of that, an implementation in any language, or even a description of an algorithm, would be helpful.
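For reference, one crude approximation (an untested sketch with invented class and method names, not a true geometric bevel since corners are not mitered) is to fill the shape as usual and then overlay translucent highlight and shadow bands built by subtracting a translated copy of the shape with java.awt.geom.Area; this works for arbitrary Shapes, including curved and texture-filled ones, and gives the lit-from-top-left look:

import java.awt.*;
import java.awt.geom.*;

// Hypothetical helper - a sketch only, not a true geometric bevel.
public class BevelSketch {

    // Paints translucent highlight (top-left) and shadow (bottom-right) bands
    // over a shape that has already been filled. bevelWidth is the apparent
    // edge depth in pixels.
    public static void paintBevel(Graphics2D g2d, Shape shape, int bevelWidth) {
        // Band along the top/left edges: the part of the shape not covered
        // by a copy shifted toward the bottom-right.
        Area highlight = new Area(shape);
        highlight.subtract(translated(shape, bevelWidth, bevelWidth));

        // Band along the bottom/right edges: the part not covered by a copy
        // shifted toward the top-left.
        Area shadow = new Area(shape);
        shadow.subtract(translated(shape, -bevelWidth, -bevelWidth));

        Composite saved = g2d.getComposite();
        g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2d.setColor(Color.white);
        g2d.fill(highlight);   // lit from the top-left
        g2d.setColor(Color.black);
        g2d.fill(shadow);      // falls into shadow at the bottom-right
        g2d.setComposite(saved);
    }

    private static Area translated(Shape s, double dx, double dy) {
        return new Area(AffineTransform.getTranslateInstance(dx, dy)
                                       .createTransformedShape(s));
    }
}

In the applet below it could be called right after each fill/drawImage, e.g. paintBevel(g2d, octagon, 4), since the Area operations accept the Polygon, Ellipse2D and GeneralPath alike.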
I created a Java applet to illustrate the problem I am trying to solve; see the source below. The applet, with source, is also available at
http://www.keithhilen.com/Java/bevel/
Keith Hilen
[email protected]
Polygons.java :
import java.applet.*;
import java.awt.*;
import java.awt.image.*;
import java.awt.geom.*;
public class Polygons extends Applet {

    Image image;

    int polyWidth = 100;
    int polyHeight = polyWidth;

    int polyOfsX1 = 100;
    int polyOfsY1 = 100;
    int polyOfsX2 = 250;
    int polyOfsY2 = 100;
    int polyOfsX3 = 100;
    int polyOfsY3 = 240;
    int polyOfsX4 = 250;
    int polyOfsY4 = 240;

    Polygon octagon;
    Ellipse2D.Double ellipse;
    GeneralPath ornament;

    BufferedImage imageBuf1, imageBuf2, imageBuf3, imageBuf4;
    public void init() {
        image = loadImage("background.jpg");
        createPolygons();
        createImageBufs();
    }
    private Image loadImage(String name) {
        MediaTracker tracker = new MediaTracker(this);
        Image image = getImage(getDocumentBase(), name);
        tracker.addImage(image, 0);
        for ( ; ; ) {
            try { tracker.waitForAll(); } catch (InterruptedException e) { }
            if (tracker.checkAll())
                break;
        }
        return image;
    }
    public void createPolygons() {
        double sqrt2 = Math.sqrt(2);
        int m1 = (int) (polyWidth / (2 + sqrt2));
        int m2 = (int) (polyWidth * sqrt2 / (2 + sqrt2));

        octagon = new Polygon();
        octagon.addPoint(-m2/2, -(m1 + m2/2));
        octagon.addPoint(+m2/2, -(m1 + m2/2));
        octagon.addPoint(+(m1 + m2/2), -(m2/2));
        octagon.addPoint(+(m1 + m2/2), +(m2/2));
        octagon.addPoint(+m2/2, +(m1 + m2/2));
        octagon.addPoint(-m2/2, +(m1 + m2/2));
        octagon.addPoint(-(m1 + m2/2), +(m2/2));
        octagon.addPoint(-(m1 + m2/2), -(m2/2));

        ellipse = new Ellipse2D.Double(-polyWidth/2, -polyWidth/2, polyWidth, polyWidth*3/4);

        ornament = new GeneralPath();
        int l = polyWidth/2;
        int m = polyWidth/16;
        int s = polyWidth/32;
        ornament.moveTo(+0+0, -l+0);
        ornament.quadTo(+0+s, -l+0, +0+s, -l+s);
        ornament.quadTo(+m+0, -m+0, +l-s, +0-s);
        ornament.quadTo(+l+0, +0-s, +l+0, +0+0);
        ornament.quadTo(+l+0, +0+s, +l-s, +0+s);
        ornament.quadTo(+m+0, +m+0, +0+s, +l-s);
        ornament.quadTo(+0+s, +l+0, +0+0, +l+0);
        ornament.quadTo(+0-s, +l+0, +0-s, +l-s);
        ornament.quadTo(-m+0, +m+0, -l+s, +0+s);
        ornament.quadTo(-l+0, +0+s, -l+0, +0+0);
        ornament.quadTo(-l+0, +0-s, -l+s, +0-s);
        ornament.quadTo(-m+0, -m+0, +0-s, -l+s);
        ornament.quadTo(+0-s, -l+0, -0+0, -l+0);
    }
    public void createImageBufs() {
        Graphics2D g2d;
        Composite saveAlpha;

        // Figure 1

        // Create image buf and get context
        imageBuf1 = new BufferedImage(polyWidth, polyHeight, BufferedImage.TYPE_INT_ARGB_PRE);
        g2d = (Graphics2D) imageBuf1.getGraphics();

        // Fill with transparent color
        saveAlpha = g2d.getComposite();
        g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.0f));
        g2d.fillRect(0, 0, polyWidth, polyHeight);
        g2d.setComposite(saveAlpha);

        // Draw figure
        g2d.translate(polyWidth/2, polyHeight/2);
        g2d.setClip(octagon);
        g2d.setColor(Color.blue);
        g2d.fillRect(-polyWidth/2, -polyHeight/2, polyWidth, polyHeight);

        // Figure 2

        // Create image buf and get context
        imageBuf2 = new BufferedImage(polyWidth, polyHeight, BufferedImage.TYPE_INT_ARGB_PRE);
        g2d = (Graphics2D) imageBuf2.getGraphics();

        // Fill with transparent color
        saveAlpha = g2d.getComposite();
        g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.0f));
        g2d.fillRect(0, 0, polyWidth, polyHeight);
        g2d.setComposite(saveAlpha);

        // Draw figure
        g2d.translate(polyWidth/2, polyHeight/2);
        g2d.setClip(octagon);
        g2d.drawImage(image, -polyWidth/2, -polyWidth/2, null);

        // Figure 3

        // Create image buf and get context
        imageBuf3 = new BufferedImage(polyWidth, polyHeight, BufferedImage.TYPE_INT_ARGB_PRE);
        g2d = (Graphics2D) imageBuf3.getGraphics();

        // Fill with transparent color
        saveAlpha = g2d.getComposite();
        g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.0f));
        g2d.fillRect(0, 0, polyWidth, polyHeight);
        g2d.setComposite(saveAlpha);

        // Draw figure
        g2d.translate(polyWidth/2, polyHeight/2);
        g2d.setClip(ellipse);
        g2d.drawImage(image, -polyWidth/2, -polyWidth*5/8, null);

        // Figure 4

        // Create image buf and get context
        imageBuf4 = new BufferedImage(polyWidth, polyHeight, BufferedImage.TYPE_INT_ARGB_PRE);
        g2d = (Graphics2D) imageBuf4.getGraphics();

        // Fill with transparent color
        saveAlpha = g2d.getComposite();
        g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.0f));
        g2d.fillRect(0, 0, polyWidth, polyHeight);
        g2d.setComposite(saveAlpha);

        // Draw figure
        g2d.translate(polyWidth/2, polyHeight/2);
        g2d.setClip(ornament);
        g2d.drawImage(image, -polyWidth/2, -polyWidth/2, null);
    }
    public void paint(Graphics g) {
        g.drawImage(imageBuf1, polyOfsX1 - polyWidth/2, polyOfsY1 - polyHeight/2, null);
        g.drawImage(imageBuf2, polyOfsX2 - polyWidth/2, polyOfsY2 - polyHeight/2, null);
        g.drawImage(imageBuf3, polyOfsX3 - polyWidth/2, polyOfsY3 - polyHeight/2, null);
        g.drawImage(imageBuf4, polyOfsX4 - polyWidth/2, polyOfsY4 - polyHeight/2, null);
    }
}
Polygons.html :
<applet
code=Polygons.class
name=Polygons
width=360
height=300>
</applet>

I think you'll be lucky to find anything in the forum on push-down automata. Do a search on Google, perhaps with the keyword "parser" or "grammar checker" thrown in. There may well be whole books devoted to it!

Similar Messages

  • Looking for an efficient data structure & search algorithm

    Hi all
    I have a list of digits (international phone network prefixes) with some hundreds to some thousands of entries. An entry may be in the form
    ^00[1-9]{1}[0-9]{0,7}$
    I.e. this might be 001, 0041, 00317545, 00317548, 00317549 and so on. Regarding the last three examples, there might even be an additional 0031754.
    What I need is a data structure that allows me to match these prefixes against a phone number.
    I.e. if I have the phone number 001123456789 it would match the prefix 001. If I had 00317549111 it would match 00317549.
    The easiest way would be to put all prefixes into a Vector or similar and loop over all entries, trying to match the phone number with startsWith(). But this wouldn't always result in an absolutely perfect match, since, e.g. for the phone number 00317549111, the check against the prefix 0031754 would return a match even though there is a more specific match with the prefix 00317549. But more than that, this simple algorithm is not very efficient.
    So I am looking for a more efficient way/pattern to do this. I thought about a kind of tree structure, starting with 00 at the top level, then providing [1-9] at the second level, and [0-9] from the third level on. At every node it would store either that there is a matching prefix at that level, that there is a prefix starting with those digits at a lower level, or that there is no prefix at that level or any lower one.
    I.e. when I have the phone number 00317549111, it would start at the top level with 00. That would be ok. At the next level it would check if there is a node for digit 3. If there is, it would go one level deeper and check if there is a node for digit 1. If yes, again it would go one level deeper to check if there is a node for digit 7. If the algorithm comes to a level where, for the requested digit, it gets a prefix indicator rather than a node indicator, the algorithm would know that a matching prefix was found and that there is no more specific match at deeper levels.
    One thing I forgot to mention - the prefixes might be read once during startup/init, and it might take some time there to build up the data structure - I don't care about that. But when running, the matching process should be as efficient as possible; that's the most important point for me.
    What do you think about a pattern like this? Could it be efficient? Do you see other patterns that might be easier to implement and that might be faster/need less memory?
    Thanks a lot for your help.
    Cheers, Frank

    I would really have gone for your first approach. With mperemsky5's approach you have a loop with (potentially) n iterations (let n be the length of the number), and in each iteration you have to compute the hash code for the string, which again takes time proportional to the string's length.
    The tree approach takes time equal to the length of the prefix and is imho not more complicated.
    Perhaps this way:
    public class DigitTree {
        private class Node {
            private Object content;
            private Node[] children = new Node[10];
        }

        private Node root = new Node();

        public DigitTree() {
        }

        public void addPrefix(String prefix, Object value) {
            char[] numberChars = prefix.toCharArray();
            Node node = root;
            for (int i = 0; i < numberChars.length; i++) {
                int number = numberChars[i] - '0';
                if (node.children[number] == null) node.children[number] = new Node();
                node = node.children[number];
            }
            node.content = value;
        }

        public Object match(String phonenumber) {
            char[] numberChars = phonenumber.toCharArray();
            Node node = root;
            for (int i = 0; i < numberChars.length; i++) {
                int number = numberChars[i] - '0';
                if (node.children[number] == null) return node.content;
                node = node.children[number];
            }
            return node.content;
        }
    }
    The method addPrefix lets you add a prefix to the tree. The content object can hold a String or whatever else identifies the prefix. If your data is not complete (i.e. if there are numbers for which no prefix exists) you might want to initialize the content object of a node with a default value (e.g. "not found").
    The method match lets you look up a prefix for a given number and returns the Object associated with the prefix.
    The code was not tested.
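    As a hypothetical usage sketch (equally untested), matching a number against the prefixes from your example would look something like this:
    DigitTree prefixes = new DigitTree();
    prefixes.addPrefix("001", "001");
    prefixes.addPrefix("0031754", "0031754");
    prefixes.addPrefix("00317549", "00317549");
    // Walks down the tree digit by digit and stops at the last existing node,
    // so the more specific prefix wins here.
    Object match = prefixes.match("00317549111"); // expected: "00317549"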
    Greetings
    Thomas

  • Upgrading to uber SDO_GEOMETRY, looking for comments

    Good morning folks,
    I see the new thread from Dalibor discussing an issue with what I am calling "uber" SDO_GEOMETRY - anyone got a better name?
    Uber SDO_GEOMETRY is described here
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e11830/sdo_migrat.htm#CHDBCHBG
    Dalibor wins some kind of prize for the first time anyone has brought up uber SDO on the forum since the old thread ran its course years back
    Re: SDO_GEOMETRY size limits here to stay?
    Coincidentally this past week was my first shot at trying out the upgrade script and I thought I might post my experiences and see if anyone would have comments. This was just a test machine and I agree that each major error could have been followed up with a SR. But my overall goal is to determine if installing and running uber SDO_GEOMETRY is feasible in my production environment, so for better or worse these experiences are building those recommendations.
    So I am not much interested in going much in length why anyone would want to upgrade. I think the old thread covers it. For my work at least, the issues have not gone away. No question a polygon with millions of vertices is nutty, but they are out there all over the place and growing in number each year. Here are two additional datasets I am working with that are "over the top" for normal SDO:
    * National Wetlands Inventory (http://www.fws.gov/wetlands/data/DataDownload.html) - Largest polygon has 1,210,907 vertices.
    * FEMA National Flood Hazard Layer (http://www.msc.fema.gov/webapp/wcs/stores/servlet/info?storeId=10001&catalogId=10001&langId=-1&content=nfhlResources&title=NFHL%20Resources) - Largest polygon has 1,537,698 vertices.
    My current procedure is to load the datasets using ESRI's geometry format, identify the biggies, export the remainder and then reimport them as SDO_GEOMETRY.
    E.g. toss the big polygons away and explain to my clients that Oracle Spatial cannot handle this type of thing.
    My production instances remain on 10g so uber SDO is not yet a possibility but I am trying to think ahead.
    So back to the present. I started with a fresh 11gR2 (no upgrade history) running PSU 11.2.0.1.2 on OpenSUSE on a rather modest machine with two processors and four gig of RAM. I have about 100 gig of Oracle Spatial data on this machine. Basically I ended up running the upgrade script four times over the course of about 5 days; each run required about 30 hours to process. Here are some notes:
    * Make sure you spool the results! The script is quite "verbose" and does not have any error checking so the ALTER TYPE CASCADE part can fail but the script continues recompiling packages and rebuilding all your indexes. This is good in that the system is put back together whether it bombs or not. Bad in that you can waste a lot of time waiting to rerun the script.
    * You need lots of undo! The first time it ran through it appeared to finish, and it was not until I looked at the logs that I saw I had run out of undo. I just had a single smallfile undo tablespace and it topped out at 32 gig. In the end I created 4 undo datafiles and it appears I needed 100 gig of undo! So I believe you need as much undo as you have SDO_GEOMETRY data. The first thing you want to do afterwards is DESCRIBE MDSYS.SDO_ORDINATE_ARRAY to see if the type was altered. If you still see the usual 1048576, then it bombed.
    * Second time through I received an error that I had some spatial indexes in a failed state. Turns out as this is a development box there was an index stuck in "INPROGRS" state due to some flub on my part. This was enough to hose the upgrade. I would suggest folks check for this ahead of time using SELECT * FROM dba_indexes WHERE index_type = 'DOMAIN' AND domidx_status != 'VALID' and drop or rebuild those indexes.
    * Third time through I received the cryptic error
    ERROR at line 1:
    ORA-22324: altered type has compilation errors
    ORA-22332: a dependent object in schema "NAVTEQ_SF" has errors.
    ORA-00600: internal error code, arguments: [qctostiix1], [], [], [], [], [],
    [], [], [], [], [], []
    I had some sample Navteq data for use with the networking demo I never got functioning. At this point, after four days of trying, I was getting a bit impatient and just decided to drop that schema and try again. I have no idea why this Navteq data caused an issue, but the error never happened again with any other datasets.
    * Fourth time through the alter type worked! But the script still crashed with
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [qximgcn1], [], [], [], [], [], [],
    ORA-06512: at line 14
    After eyeballing the script and the spool file it looks like it failed in the final step where it rebuilds all the spatial indexes. This seems right, as I found sixty-seven bad spatial indexes afterwards. However, it was easy enough to rebuild those by hand myself (I mean, have ArcSDE do so).
    So I call this a success! :)
    So I am looking for anyone else's experiences they'd like to share.
    With the amount of undo required, I think it might be a better route to just use the old school exp to export out all spatial data, run the upgrade and then imp it all back in? Or perhaps just dump out the large tables? Weird 00600 errors cause the DBAs (not part of my organization) that manage my production environment to bleed internally. Ideally I would like to avoid all that pain and suffering. Perhaps just dropping ALL the spatial indexes before the upgrade? I guess I would need to open an SR to see what caused the 00600s. I suspect the indexes.
    So I would like to keep this thread tied to the upgrade process and/or planning/strategy for the upgrade process. There are a lot of other issues that we should start new threads over. For example, exchange of data between standard and uber instances as Dalibor is doing. It sure appears that datapump is never going to work and that only the old-school unsupported imp/exp utility can play this role? I have done some initial testing and ESRI's sdeimport/sdeexport tools seems to work fine between these instances. And none of this even scratches the performance issues.
    So hey Dalibor, I know you went through this. Any comments?
    Cheers!
    Paul

    Hi Mike,
    Nice to hear from you. I would indeed be interested in your thoughts on the upgrade and the actual business case surrounding the upgrade or the decision not to upgrade. I am feeling rather skeptical at this point as the side-effects of the medicine so far are pretty serious. But yet the need to treat the disease remains. You can always drop me an email offline at my first name at my last name dot com or via LinkedIn. The nice things about a super-rare unpronounceable Polish last name is I am easy to find - and that's the bad thing too. Here is an update on my testing.
    Per my earlier SR, the folks at Oracle have provided a backport for bug 10085580 for Linux 32-bit. It looks to work just fine and does the trick - problem solved. If one needs something other than Linux you will need to request the backport for your platform. The bug affects more than SDO_GEORASTER so I would recommend this patch as mandatory.
    Moving along my next issue is that Oracle MapViewer does not seem to work with uber SDO_GEOMETRY.
    MapViewer and uber (Very Large) SDO_GEOMETRY upgrade not compatible?
    This is again a bit of a nonstarter as my client has a couple of applications using MapViewer against production instances. I dutifully submitted an SR on the matter yesterday. Almost to be expected the tech immediately asked why I would want to have such a large geometry and why don't I thin it! :)
    It's no fun being in the middle on these things. I suppose I could ring up FEMA or Fish and Wildlife and tell them Oracle Corporation thinks their polygons are crazy. But as they are ESRI-shop folks, I think they would just be puzzled by the statement. If anyone has an opinion on how to broach the topic nicely with public agencies, I would love to hear it. Do we need to create a "citizens against huge polygons" lobby in DC? Again the ESRI shop folks are smirking.
    The other issue in the thread started by Dalibor where the old imp/exp is erratically failing on the table creation statements is probably going to need an SR too. The work around is to just append the data and create your tables by hand ahead of time. On a more positive note ArcSDE seems utterly oblivious to the type change and I have not found any problems using ESRI tools to move stuff around.
    Cheers,
    Paul

  • New(?) pattern looking for a good home

    Hi everyone, this is my second post to sun forums about this, I initially asked people for help with the decorator and strategy pattern on the general Java Programming forum not being aware that there was a specific section for design pattern related questions. Since then I refined my solution somewhat and was wondering if anyone here would take a look. Sorry about the length of my post, I know it's best to keep it brief but in this case it just seemed that a fully functional example was more important than keeping it short.
    So what I'd like to ask is whether any of you have seen this pattern before and if so, then what is it called. I'm also looking for some fresh eyes on this, this example I wrote seems to work but there are a lot of subtleties to the problem so any help figuring out if I went wrong anywhere is greatly appreciated. Please do tell me if you think this is an insane approach to the problem -- in short, might this pattern have a chance at finding a good home or should it be put down?
    The intent of the pattern I am giving below is to modify behavior of an object at runtime through composition. In effect, it is like strategy pattern, except that the effect is achieved by wrapping, and wrapping can be done multiple times so the effect is cumulative. Wrapper class is a subclass of the class whose instance is being wrapped, and the change of behavior is accomplished by overriding methods in the wrapper class. After wrapping, the object "mutates" and starts to behave as if it was an instance of the wrapper class.
    Here's the example:
    public class Test {
         public static void main(String[] args) {
              double[] data = { 1, 1, 1, 1 };
              ModifiableChannel ch1 = new ModifiableChannel();
              ch1.fill(data);
              // ch2 shifts ch1 down by 1
              ModifiableChannel ch2 = new DownShiftedChannel(ch1, 1);
              // ch3A shifts ch2 down by 1
              ModifiableChannel ch3A = new DownShiftedChannel(ch2, 1);
              // ch3B shifts ch2 up by 1, tests independence from ch3A
              ModifiableChannel ch3B = new UpShiftedChannel(ch2, 1);
              // ch4 shifts ch3A up by 1, data now looks same as ch2
              ModifiableChannel ch4 = new UpShiftedChannel(ch3A, 1);
              // print channels:
              System.out.println("ch1:");
              printChannel(ch1);
              System.out.println("ch2:");
              printChannel(ch2);
              System.out.println("ch3A:");
              printChannel(ch3A);
              System.out.println("ch3B:");
              printChannel(ch3B);
              System.out.println("ch4:");
              printChannel(ch4);
         public static void printChannel(Channel channel) {
              for(int i = 0; i < channel.size(); i++) {
                   System.out.println(channel.get(i) + "");
              // Note how channel's getAverage() method "sees"
              // the changes that each wrapper imposes on top
              // of the original object.
              System.out.println("avg=" + channel.getAverage());
    /**
     * A Channel is a simple container for data that can
     * find its average. Think audio channel or any other
     * kind of sampled data.
     */
    public interface Channel {
         public void fill(double[] data);
         public double get(int i);
         public double getAverage();
         public int size();
    public class DefaultChannel implements Channel {
         private double[] data;
         public void fill(double[] data) {
              this.data = new double[data.length];
              for(int i = 0; i < data.length; i++)
                    this.data[i] = data[i];
         public double get(int i) {
              if(i < 0 || i >= data.length)
                   throw new IndexOutOfBoundsException("Incorrect index.");
              return data[i];
         public double getAverage() {
              if(data.length == 0) return 0;
              double average = this.get(0);
              for(int i = 1; i < data.length; i++) {
                   average = average * i / (i + 1) + this.get(i) / (i + 1);
              return average;
         public int size() {
              return data.length;
    public class ModifiableChannel extends DefaultChannel {
         protected ChannelModifier modifier;
         public void fill(double[] data) {
              if (modifier != null) {
                   modifier.fill(data);
              } else {
                   super.fill(data);
         public void _fill(double[] data) {
              super.fill(data);
         public double get(int i) {
              if(modifier != null)
                   return modifier.get(i);
              else
                   return super.get(i);
         public double _get(int i) {
              return super.get(i);
         public double getAverage() {
              if (modifier != null) {
                   return modifier.getAverage();
              } else {
                   return super.getAverage();
         public double _getAverage() {
              return super.getAverage();
    public class ChannelModifier extends ModifiableChannel {
         protected ModifiableChannel delegate;
         protected ModifiableChannel root;
         protected ChannelModifier tmpModifier;
         protected boolean doSwap = true;
         private void pre() {
              if(doSwap) { // we only want to swap out modifiers once when the
                   // top call in the chain is made, after that we want to
                   // proceed without it and finally restore doSwap to original
                   // state once ChannelModifier is reached.
                   tmpModifier = root.modifier;
                   root.modifier = this;
                   if(delegate instanceof ChannelModifier)
                        ((ChannelModifier)delegate).doSwap = false;
         private void post() {
              if (doSwap) {
                   root.modifier = tmpModifier;
              } else {
                   if(delegate instanceof ChannelModifier)
                             ((ChannelModifier)delegate).doSwap = true;
         public ChannelModifier(ModifiableChannel delegate) {
              if(delegate instanceof ChannelModifier)
                   this.root = ((ChannelModifier)delegate).root;
              else
                   this.root = delegate;
              this.delegate = delegate;
         public void fill(double[] data) {
              pre();
              if(delegate instanceof ChannelModifier)
                   delegate.fill(data);
              else
                   delegate._fill(data);
              post();
         public double get(int i) {
              pre();
              double result;
              if(delegate instanceof ChannelModifier)
                   result = delegate.get(i);
              else
                   result = delegate._get(i);
              post();
              return result;
         public double getAverage() {
              pre();
              double result;
              if(delegate instanceof ChannelModifier)
                   result = delegate.getAverage();
              else
                   result = delegate._getAverage();
              post();
              return result;
         public int size() {
              //for simplicity no support for modifying size()
              return delegate.size();
    public class DownShiftedChannel extends ChannelModifier {
         private double shift;
         public DownShiftedChannel(ModifiableChannel channel, final double shift) {
              super(channel);
              this.shift = shift;
         @Override
         public double get(int i) {
              return super.get(i) - shift;
    public class UpShiftedChannel extends ChannelModifier {
         private double shift;
         public UpShiftedChannel(ModifiableChannel channel, final double shift) {
              super(channel);
              this.shift = shift;
         @Override
         public double get(int i) {
              return super.get(i) + shift;
    Output:
    ch1:
    1.0
    1.0
    1.0
    1.0
    avg=1.0
    ch2:
    0.0
    0.0
    0.0
    0.0
    avg=0.0
    ch3A:
    -1.0
    -1.0
    -1.0
    -1.0
    avg=-1.0
    ch3B:
    1.0
    1.0
    1.0
    1.0
    avg=1.0
    ch4:
    0.0
    0.0
    0.0
    0.0
    avg=0.0

    jduprez wrote:
    Hello,
    unless you sell your design better, I deem it is an inferior derivation of the Adapter pattern.
    In the Adapter pattern, the adaptee doesn't have to be designed to support adaptation, and the instance doesn't even know at runtime whether it is adapted.
    Your design makes the "modifiable" class aware of the modification, and it needs to be explicitly designed to be modifiable (in particular this constrains the implementation hierarchy). Overall DesignPattern are meant to provide flexibility, your version offers less flexibility than Adapter, as it poses more constraint on the modifiable class.
    Another sign of this inflexibility is your instanceof checks.
    On an unrelated note, I intensely dislike your naming choice of fill() vs _fill(); I prefer more explicit names (I cannot provide you one as I didn't understand the purpose of this dual method, which a good name would have avoided, by the way).
    That being said, I haven't followed your original problem, so I am not aware of the constraints that led you to this design.
    Best regards,
    J.
    Edited by: jduprez on Mar 22, 2010 10:56 PM

    Thank you for your input, I will try to explain my design better. First of all, as I understand it the Adapter pattern is meant to translate one interface into another. This is not at all what I am trying to do here; I am trying to keep the same interface but modify the behavior of objects through composition. I started thinking about how to do this when I was trying to apply the Decorator pattern to filter some data. The way I would do that in my example here is to write an AbstractChannelDecorator that delegates all methods to the Channel it wraps:
    public abstract class AbstractChannelDecorator implements Channel {
         protected Channel delegate;
    ...// code omitted
         public double getAverage() {
              return delegate.getAverage();
         }
    ...// code omitted
    }
    and then to filter the data I would extend it with concrete classes and override the appropriate methods like so:
    public class DownShiftedChannel extends AbstractChannelDecorator {
         ...// code omitted
         public double get(int i) {
              return super.get(i) - shift;
         }
         ...// code omitted
    }
    (I am just shifting the data here to simplify the examples, but a more realistic example would be something like a moving average filter to smooth the data.)
    Unfortunately this doesn't get me what I want, because the getAverage() method doesn't use the filtered data unless I override it in the concrete decorator, and that means I would have to re-implement the whole algorithm. So that's pretty much my motivation for this: how do I use what on the surface looks like a Decorator pattern, but in reality works more like inheritance?
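    To make that concrete, here is a hypothetical snippet (untested; it assumes the plain-decorator DownShiftedChannel above takes the same (channel, shift) constructor arguments as in my other example):
    Channel base = new DefaultChannel();
    base.fill(new double[] { 1, 1, 1, 1 });
    Channel shifted = new DownShiftedChannel(base, 1); // plain Decorator version
    shifted.get(0);        // 0.0 - the shift is applied
    shifted.getAverage();  // 1.0 - the delegate's average; the shift is never seen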
    Now as to the other points of critique you mentioned:
    I understand your dislike for such method names, I'm sorry about that, I had to come up with some way for the ChannelModifier to call ModifiableChannel's super's method equivalents. I needed some way to have the innermost wrapped object to initiate a call to the topmost ChannelModifier, but only do it once -- that was one way to do it. I suppose I could have done it with a flag and another if/else statement in each of the methods, or if you prefer, the naming convention could have been fill() and super_fill(), get() and super_get(), I didn't really think that it was that important. Anyway, those methods are not meant to be used by any other class except ChannelModifier so I probably should have made them protected.
    The instanceof checks are necessary because at some point a ChannelModifier instance runs into a delegate that isn't a ChannelModifier and I have to somehow detect that, because otherwise instead of calling _get() I'd call get(), which in ModifiableChannel would take me back up to the topmost wrapper and start the whole call chain again, so we'd be in infinite recursion. But calling _get() allows me to prevent that and go straight to the original method of the innermost wrapped object.
    I completely agree with you that the example I presented has limited flexibility in supporting multiple implementations. If I had two different Channel implementations I would need two ModifiableChannel classes, two ChannelModifiers, and two sets of concrete implementations -- obviously that's not good. Not to worry though, I found a way around that. Here's what I came up with, it's a modification of my original example with DefaultChannel replaced by ChannelImplementation1,2:
    public class ChannelImplementation1 implements Channel { ... }
    public class ChannelImplementation2 implements Channel { ... }
    // this interface allows implementations to be interchangeable in ChannelModifier
    public interface ModifiableChannel {
         public double super_get(int i);
         public double super_getAverage();
         public void setModifier(ChannelModifier modifier);
         public ChannelModifier getModifier();
    }
    public class ModifiableChannelImplementation1
              extends ChannelImplementation1
              implements ModifiableChannel {
         ... // see DefaultChannel in my original example
    public class ModifiableChannelImplementation2
              extends ChannelImplementation2
              implements ModifiableChannel { ...}
    // ChannelModifier is a Channel, but more importantly, it takes a Channel,
    // not any specific implementation of it, so in effect the user has complete
    // flexibility as to what implementation to use.
    public class ChannelModifier implements Channel {
         protected Channel delegate;
         protected Channel root;
         protected ChannelModifier tmpModifier;
         protected boolean doSwap = true;
         public ChannelModifier(Channel delegate) {
              if(delegate instanceof ChannelModifier)
                   this.root = ((ChannelModifier)delegate).root;
              else
                   this.root = delegate;
              this.delegate = delegate;
         private void pre() {
              if(doSwap) {
                   if(root instanceof ModifiableChannel) {
                        ModifiableChannel root = (ModifiableChannel)this.root;
                        tmpModifier = root.getModifier();
                        root.setModifier(this);
                   if(delegate instanceof ChannelModifier)
                        ((ChannelModifier)delegate).doSwap = false;
         private void post() {
              if (doSwap) {
                   if(root instanceof ModifiableChannel) {
                        ModifiableChannel root = (ModifiableChannel)this.root;
                        root.setModifier(tmpModifier);
              } else {
                   if(delegate instanceof ChannelModifier)
                             ((ChannelModifier)delegate).doSwap = true;
         public void fill(double[] data) {
              delegate.fill(data);
         public double get(int i) {
              pre();
              double result;
              if(delegate instanceof ModifiableChannel)
    // I've changed the naming convention from _get() to super_get(), I think that may help show the intent of the call
                   result = ((ModifiableChannel)delegate).super_get(i);
              else
                   result = delegate.get(i);               
              post();
              return result;
         public double getAverage() {
              pre();
              double result;
              if(delegate instanceof ModifiableChannel)
                   result = ((ModifiableChannel)delegate).super_getAverage();
              else
                   result = delegate.getAverage();
              post();
              return result;
         public int size() {
              return delegate.size();
    public class UpShiftedChannel extends ChannelModifier { ...}
    public class DownShiftedChannel extends ChannelModifier { ... }

  • Looking for an application that has great time stretch capabilities

    Hello all,
    I have Logic Pro 8 and I am still learning its capabilities. I do a lot of sound design stuff and am looking for a spectacular time-stretching application. I've fiddled with Logic's time stretch algorithms but so far I'm not thrilled with the results. I'm interested in really stretching things out to create new sounds. Here is a great example from Nine Inch Nails. It's a remix of Eraser off of Further Down The Spiral.
    This particular remix version is called "Erased, Over, & Out." Here is an amazon link to have a listen.
    http://www.amazon.com/gp/product/B000VZOBGI/ref=dmmu_dptrk11?ie=UTF8&qid=1212078077&sr=1-1
    The vocals are EXTREMELY stretched out and take the form of a pad almost.
    He is actually saying, "Erase me." Keep in mind the original vocal is only about 3 seconds long. I'd like to provide that example but the Amazon sample isn't long enough to get to that part. Maybe download it to listen. It's called Eraser, it's from "The Downward Spiral" album. (1994)
    Now, does anyone know if Logic is capable of getting this good a result like the sample I provided? I've seen PT Elastic Time stretching capabilities and it sounds awesome, at least what I've heard from their demo video on elastic time. Can the same results be achieved in Logic Pro 8?
    If not, what plugin do you recommend that could achieve this EXTREME time stretching stuff? Or, does anyone know what Nine Inch Nails used to get that effect on the vocals for that particular remix? I just think it's a limitless tool for sound design, but right now I'm feeling limited in the sound quality department. I know I can keep stretching and stretching and stretching the sample region, but that sounds like a digital mess when I get it to the point where I want it, so please don't suggest I do that, unless you know a way to make it sound REALLY GOOD in Logic using the algorithms that come with the program. Step-by-step info would be lovely.
    Thanx in advance...
    TTW13

    ported to the Mac
    it's on my other Mac
    is there a Mac version?
    It seems a safe bet, eh..?
    I'm pretty sure the mac link was posted to that thread I linked to, it's where I found out about it...
    http://www.bigbluelounge.com/forums/viewtopic.php?t=35638&highlight=&sid=c2dcbfdc2af80e0fdf9da04b91319e1d

  • Looking for a reference manual that actually describes in detail all attributes for a command

    I am using CS5. I am teaching myself Photoshop using their book, "Classroom in a book".
    Throughout the lessons, they suggest specific settings for command attributes without describing exactly what these values mean. For example, on page 88 they give the student specific settings for the Refine Edge dialog box under the Quick Selection tool. They say, "To prepare the edge for a drop shadow, set Smooth to 24, Feather to 0.5, Contrast to 12, and Shift Edge to -21". Note: I am calling things like Smooth, Feather, Contrast, etc. command attributes.
    My question is not about this specific example, but about all these attribute values for all the commands. For example, what does Shift Edge really do when negative or positive? My question is: does a reference exist, either a book, PDF, or a non-Adobe product (probably a book), that actually explains what each attribute does for every command? Basically I am looking for a boring, detailed reference guide so that when I am using a specific command, I can read about its attributes and understand what the values really mean, rather than guessing by trial and error.
    Thanks in Advance,
    LouF

    Lou Fuchs wrote:
    For example, I just happen to have my book open to page 113 and it is showing me (at the bottom of the page) the dialog box for Layer Style. I was hoping for a reference manual that explains every option in that dialog box in detail and tells you what each option means and how it affects the image.
    I hope this clarifies what I am looking for.
    Here's a little snippet from the info about Layer Styles in the Adobe manual.
    Layer style options
    To the top
    Altitude For the Bevel and Emboss effect, sets the height of the light source. A setting of 0 is equivalent to ground level, 90 is directly above the layer. Angle Determines the lighting angle at which the effect is applied to the layer. You can drag in the document window to adjust the angle of a Drop Shadow, Inner Shadow, or Satin effect.
    Anti-alias Blends the edge pixels of a contour or gloss contour. This option is most useful on small shadows with complicated contours. Blend Mode Determines how the layer style blends with the underlying layers, which may or may not include the active layer. For example, an inner shadow blends with the active layer because the effect is drawn on top of that layer, but a drop shadow blends only with the layers beneath the active layer. In most cases, the default mode for each effect produces the best results. See Blending modes. Choke Shrinks the boundaries of the matte of an Inner Shadow or Inner Glow prior to blurring. Color Specifies the color of a shadow, glow, or highlight. You can click the color box and choose a color. Contour With solid-color glows, Contour allows you to create rings of transparency. With gradient-filled glows, Contour allows you to create variations in the repetition of the gradient color and opacity. In beveling and embossing, Contour allows you to sculpt the ridges, valleys, and bumps that are shaded in the embossing process. With shadows, Contour allows you to specify the fade. For more information, see Modify layer effects with contours. Distance Specifies the offset distance for a shadow or satin effect. You can drag in the document window to adjust the offset distance. Depth Specifies the depth of a bevel. It also specifies the depth of a pattern. Use Global Light This setting allows you to set one “master” lighting angle that is then available in all the layer effects that use shading: Drop Shadow, Inner Shadow, and Bevel and Emboss. In any of these effects, if Use Global Light is selected and you set a lighting angle, that angle becomes the global lighting angle. Any other effect that has Use Global Light selected automatically inherits the same angle setting. If Use Global Light is deselected, the lighting angle you set is “local” and applies only to that effect. You can also set the global lighting angle by choosing Layer Style > Global Light. Gloss Contour Creates a glossy, metallic appearance. Gloss Contour is applied after shading a bevel or emboss. Gradient Specifies the gradient of a layer effect. Click the gradient to display the Gradient Editor, or click the inverted arrow and choose a gradient from the pop-up panel. You can edit a gradient or create a new gradient using the Gradient Editor. You can edit the color or opacity in the Gradient Overlay panel the same way you edit them in the Gradient Editor. For some effects, you can specify additional gradient options. Reverse flips the orientation of the gradient, Align With Layer uses the bounding box of the layer to calculate the gradient fill, and Scale scales the application of the gradient. You can also move the center of the gradient by clicking and dragging in the image window. Style specifies the shape of the gradient. Highlight or Shadow Mode Specifies the blending mode of a bevel or emboss highlight or shadow. Jitter Varies the application of a gradient’s color and opacity. Layer Knocks Out Drop Shadow Controls the drop shadow’s visibility in a semitransparent layer. Noise Specifies the number of random elements in the opacity of a glow or shadow. Enter a value or drag the slider. Opacity Sets the opacity of the layer effect. Enter a value or drag the slider. Pattern Specifies the pattern of a layer effect. Click the pop-up panel and choose a pattern. Click the New Preset button          to create a new preset pattern based on the current settings. 
Click Snap To Origin to make the origin of the pattern the same as the origin of the document (when Link With Layer is selected), or to place the origin at the upper-left corner of the layer (if Link With Layer is deselected). Select Link With Layer if you want the pattern to move along with the layer as the layer moves. Drag the Scale slider or enter a value to specify the size of the pattern. Drag a pattern to position it in the layer; reset the position by using the Snap To Origin button. The Pattern option is not available if no patterns are loaded. Position Specifies the position of a stroke effect as Outside, Inside, or Center. Range Controls which portion or range of the glow is targeted for the contour. Size Specifies the radius and size of blur or the size of the shadow. Soften Blurs the results of shading to reduce unwanted artifacts. Source Specifies the source for an inner glow. Choose Center to apply a glow that emanates from the center of the layer’s content, or Edge toapply a glow that emanates from the inside edges of the layer’s content. Spread Expands the boundaries of the matte prior to blurring.
    Style Specifies the style of a bevel: Inner Bevel creates a bevel on the inside edges of the layer contents; Outer Bevel creates a bevel on the outside edges of the layer contents; Emboss simulates the effect of embossing the layer contents against the underlying layers; Pillow Emboss simulates the effect of stamping the edges of the layer contents into the underlying layers; and Stroke Emboss confines embossing to the boundaries of a stroke effect applied to the layer. (The Stroke Emboss effect is not visible if no stroke is applied to the layer.)
    Technique Smooth, Chisel Hard, and Chisel Soft are available for bevel and emboss effects; Softer and Precise apply to Inner Glow and Outer Glow effects.
    Smooth Blurs the edges of a matte slightly and is useful for all types of mattes, whether their edges are soft or hard. It does not preserve detailed features at larger sizes. Chisel Hard Uses a distance measurement technique and is primarily useful on hard-edged mattes from anti-aliased shapes such as type. It preserves detailed features better than the Smooth technique.
    Chisel Soft Uses a modified distance measurement technique and, although not as accurate as Chisel Hard, is more useful on a larger range of mattes. It preserves features better than the Smooth technique. Softer Applies a blur and is useful on all types of mattes, whether their edges are soft or hard. At larger sizes, Softer does not preserve detailed features.
    Precise Uses a distance measurement technique to create a glow and is primarily useful on hard-edged mattes from anti-aliased shapes
    such as type. It preserves features better than the Softer technique. Texture Applies a texture. Use Scale to scale the size of the texture. Select Link With Layer if you want the texture to move along with the layer as the layer moves. Invert inverts the texture. Depth varies the degree and direction (up/down) to which the texturing is applied. Snap To Origin makes the origin of the pattern the same as the origin of the document (if Link With Layer is deselected) or places the origin in the upper-left corner of the layer (if Link With Layer is selected). Drag the texture to position it in the layer.

  • CLAD with MS in Electronics/Electrical engg looking for better opportunities

    Priyanka Chaudhary
    ob/4 wardens' residence medical boys hostel campus, near white church colony, Indore(M.P)-452001
    [email protected]
    OBJECTIVE:
    ==========
    Seeking position as Labview Developer
    EDUCATION
    ==========
    Master of Science in Electrical Engineering
    University of Kentucky, Lexington, KY.( G.P.A. 3.5/4.0)
    May 2010
    B.E. Electronics and Communications Engineering
    Rajiv Gandhi Technical University, India
    First Class with Honors G.P.A. 4.0/4.0
    June 2007
    SKILL SET
    ==========
    Programming Languages: Labview (2011,2010,8.6),Simulink,VHDL, Verilog , C , C++, VC++, MATLAB C#,HTML, ASP.Net 4.0, Assembly Language
    Development and Simulation Tools: Labview, Xilinx ISE, ModelSim, MATLAB, MS office Suite, VS .Net
    Hardware: NI cRIOs, NI CDAQ, NI C-form thermocouple ,IEPE and strain gauge modules etc.,8085/8086 Processors, 8051/8951 Microcontrollers
    WORK EXPERIENCE
    ===============
    1)Assistant Manager, VE Commercial Vehicles Ltd. (OEM), Pithampur, July 2010-Present
    -Certified Labview Associate Developer (CLAD) [Serial Number:100-311-4045; Issue date: Dec 29,2011;Expiration date: Dec 28 2013]
    -Part of the software development team in the Vehicle Validation and testing Division
    -Developed and deployed software in Labview for Testing Rigs required in the Fatigue and Endurance Labs.
    -Attended Labview Core1 & Core2 training and preparing for Certified Labview Associate Developer exam (CLAD).
    -Projects include:
    --software development for Clutch endurance test using Master/Slave architecture;
    --system identification, iteration and drive-file generation from RLD for Cabin testing on Moog Controller;
    --Test Sequence generation for suspension leaf spring endurance and Front Axle endurance tests;
    --R&D of Stewart’s Platform(Multi-Axis Shaker Table) in Simulink using Pro-E/CAD model and validation of the model parameters for Rig development in Labview.
    --Single-handedly designed and deployed endurance test application on FPGA target using Labview 2011 using Modbus as communication protocol between variable frequency drives(ABB) and motors.
    -Designed software utility manuals for rig operators.
    2) IT consultant, Libsys Consultancy, Chicago, USA ,Sep 2010-March 2011
    -Worked as a .NET developer apprentice/consultant for a client (Thomson Reuters) in Minnesota, USA.
    -Worked on Model-View-Controller architecture to design Web applications.
    3)Graduate Research Assistant, University of Kentucky, May 2008 - Aug 2010
    Thesis : Spheroid detection in 2D images using Circular Hough Transform
    -Collaborated with the National Cancer Institute and the dept. of Ophthalmology to prepare a High Content - High Throughput Screening Assay (3D-ECSA) analysis platform.
    -Ran an automated test bench with a motorized camera (VC++, MFC) for the assay analysis and image acquisition.
    -Synthesized data stochastically similar to original, to increase the databank.
    -Developed algorithm (MATLAB) to detect spheroids in data images using Circular Hough Transform.
    -Demonstrated measures to classify identified spheroids according to shape and symmetry.
    -Involved extensive application of Image Processing techniques.
    4)Graduate Assistant, Graduate Housing Resident Manager, University of Kentucky,Feb 2010- May 2010
    Responsibilities include, resolution of conflicts, inspections, attending to resident requests and acting as a bridge between the 100+ residents and housing body.
    ACADEMIC PROJECTS
    ==================
    1)Design and implementation of 2 special purpose processors,Spring 2008
    -Designed two special purpose processors in VHDL in Xilinx ISE
    -These were non-programmable and were designed to execute a repetitive custom logic
    -Simulation was done in ModelSim and tested on Xilinx Virtex 5.1 FPGA.
    2)Wireless Datacommunication between terminals using Frequency hopped spread spectrum,Spring 2007
    -Prepared and etched PCBs for circuit base;
    -Soldered HT-12 encoders/decoders, transceivers and PLL ICs to the PCBs to form a FHSS transceiver circuit
    -Involved electronics and communication principles.
    -Demonstrated the working of the units for duplex communication
    PUBLICATIONS
    =============
    Correlation based swarm trackers for 3-dimensional manifold mesh formation
    SPIE April 13,2009. Vol:7340 2009
    RELEVANT COURSEWORK
    ===================
    Graduate:
    Digital Signal Processing, Deterministic systems, Real-time Embedded systems, VHDL, Antenna Design, Solid state electronics,
    Electromagnetic field theory.
    Undergraduate:
    Mobile Communications, Satellite Communications, Fiber Optics, Microwave Circuits, Data Compression and Encryption, Microprocessors and Microcontrollers, Microelectronics, Digital Logic Design, Electronics
    Attachments:
    Priyanka_Chaudhary_resume_LABVIEW.pdf ‏130 KB

    Message Edité par salimo le 11-04-2009 04:36 PM
    ~~~~~~~~~~~~~~~~~~Looking for a LABVIEW JOB (In EUROPE)>~~~~~~~~~~~~~~~~~~
    **The Best Way To Predict**The**Future Is To Invent It**

  • Anyone looking for Software Engineer(LabVIEW/C++) opportunity in Pittsburgh, PA?

    Hello, my name is Nicholas Ricchiuto and I am looking for a Software Engineer (LabVIEW/C++) for a company in Pittsburgh Called Acutronic.  If you know of any Software Engineers looking for work, please pass along my information below, I could sure use your help.  Thank you very much! -Nick
    Nicholas Ricchiuto
    B&V Stafffing Inc
    724-933-5218
    [email protected]
    ATE Software Engineer (Pittsburgh, PA – RIDC Park)
    You will be responsible for a challenging new product development. In this role you will implement complex automated test algorithms and data acquisition software for inertial sensors and MEMS (micro-electro-mechanical systems) inertial devices.
    Your roles/responsibilities will include:
    • Translate marketing inputs into detailed requirements and planning documents
    • Design, code, debug, document and maintain software code in LabVIEW and C/C++/C#
    • Plan and execute software releases
    • Plan and accomplish goals relying on experience and judgment
    Your qualifications/skills should include:
    • BS in Electrical Engineering, Computer Engineering or related field; MS preferred
    • 7+ years experience in automation, data acquisition, or related software development
    • Experience with National Instruments LabVIEW, TestStand, and related tools
    • Experience developing user interfaces with National Instruments tools
    • Experience with defect tracking, configuration management and version control tools
    • Knowledge of C, C++, and real-time operating system functionality a plus
    • Familiarity with all aspects of automation engineering including mechanical, electrical, controls, and instrumentation disciplines
    • Experience with serial communication and networks in general
    • Experience integrating automation systems
    • Good knowledge of computer hardware and systems
    • Understanding of software process issues
    • Familiarity with SCRUM or other agile software development a plus
    • Experience in testing inertial sensors a plus
    • Must be a US Citizen or Permanent Resident

    Please post this to the LabVIEW Job Openings board.

  • Looking for Source Codes

    hi everyone!
    I am looking for source codes with the following characteristics:
    - significant data processing (like fft or image processing algorithms)
    - use of just a few classes and/or packages (it must not use awt or swing)
    - code is not so big (200 lines of code at most)
    - program is multithreaded (it is not necessary)
    If anyone could help me, I would appreciate a lot.
    Thanks.

    Building a portfolio are we?

  • Experienced Professional Looking for LabVIEW Job offer anywhere in USA

    Hi,
    I am Sreekanth Komatineni from Virginia looking for some job offer. Here is my brief experience.
    Here is my Skillset.  
    Skills: LabVIEW, VB, VC++, C/C++, and Java with good customer interaction.
    Experience Summary:
    Over 7 years of Industry experience with 4yrs purely on LabVIEW in design and development of customized PC based Embedded Applications, Industrial automations, Data acquisition and Control Systems using LABVIEW 7.1/8.0/8.2/8.5 and NI DAQ Cards in the defense, manufacturing and telecommunication industries.
    Experience with PMD USB Interface cards, GPIB Interface, VISA and Serial I/O Interface, Code Interface Nodes, Call Library Function nodes to interface with third party and custom Dlls, Multi-channel Data acquisition, Software Triggering, Analysis of acquired data with mathematical algorithms, Controlling and Monitoring applications, MODBUS and ASCII, I2C, UART communication programming on PC side, good at PC based Automations.
    Willing to relocate anywhere in USA.
    Thanks and Regards
    Sreekanth. Komatineni
    [email protected]

    Hi Harold,
    Please see my email to you & proceed accordingly.
    Good luck!
    - Partha
    LabVIEW - Wires that catch bugs!

  • 4D Technology is looking for a software/controls engineer with a strong background in LavView

    Software/Controls Engineer
    Do you want to work on cutting edge technology? Are you the type of person who keeps up with new software and hardware technology in your spare time? If so, 4D Technology, a worldwide leader in innovative optical metrology products, has a position for you on our engineering team.
    4D is currently looking for a self-motivated Software/Controls Engineer to lead efforts in automation, image acquisition and data processing across several product lines. A successful candidate must demonstrate strong skills in software engineering and hardware interface. The ideal candidate will have experience with LabView, C or C#, image acquisition, hardware motion control and a strong commitment to producing quality products. This is a chance for you to be part of a diverse team delivering internationally recognized, innovative products, to learn new skills under the guidance of expert engineers, and to feel valued for your passion in delivering innovative, quality hardware and software.
    The essential responsibilities of this key position are:
    Rapid prototyping of hardware interfaces (motion control and image acquisition), data analysis algorithms, and GUIs
    Architect, code and release automation hardware and software for custom products and applications
    Enhance and maintain existing LabVIEW and C applications
    Produce production line testing tools.
    Our team needs your enthusiasm and dedication to help us grow. We require a B.S. degree or higher in Engineering, Physics or Computer Science. The following skills and traits are highly desirable:
    Experience with hardware interfacing, motion control, image acquisition
    Solid knowledge of object oriented software design
    Proficiency with LabVIEW
    Experience with C#/.net and C++
    Experience with embedded processors, PCB layout, electrical design
    4+ years of practical work experience in engineering
    Capable of working both independently and as part of a team
    Excellent written and oral communication skills.
    This is an exciting opportunity to be part of a successful and motivated team. If you’re looking for a job that encourages creativity and innovation, and you derive personal satisfaction from seeing your hard work result in commercial products that are used throughout the world, this is the place for you. Located in Tucson, Arizona, 4D Technology offers a competitive salary and a comprehensive benefits package.
    Reward yourself with one of the best moves of your career. Send your résumé to:
    Director, Human Resources
    4D Technology Corporation
    3280 E. Hemisphere Loop, Suite 146
    Tucson, AZ 85706
    FAX: 520-294-5601
    or email résumé to: HR @ 4dtechnology.com.

    Hi sir,
    I am currently working in an MNC as an engineer. I am keenly interested in LabVIEW, and I know the basics of it.
    Attached is my resume. I would like to work with LabVIEW.
    Regards,
    Pooja karnani
    Attachments:
    RESUME.docx ‏30 KB

  • Looking for tree drawing API

    Hello
    I made a k-children tree algorithm in Java. Now I'm looking for ready-made APIs/classes for drawing the tree.
    Does anyone know any good tree drawing APIs?
    I searched Google for hours and couldn't find anything.
    Thank you

    Here are some links that you may find useful:
    An interactive canvas component where you can paint any shapes:
    https://jcanvas.dev.java.net/
    A graph library:
    http://jung.sourceforge.net/
    Another graph library:
    http://graph.netbeans.org/
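
    If a full graph library is more than needed, a do-it-yourself sketch with plain Java2D is another option. The Node class and the method below are hypothetical, and the layout is the naive one that simply divides the available horizontal span equally among a node's children:

    import java.awt.Graphics2D;
    import java.util.List;

    // Hypothetical k-children tree node.
    class Node {
        String label;
        List<Node> children;
        Node(String label, List<Node> children) { this.label = label; this.children = children; }
    }

    class TreePainter {
        // Draws 'node' centered in [xMin, xMax] at depth y, then recurses into its children,
        // giving each child an equal horizontal slice of the parent's span.
        static void draw(Graphics2D g, Node node, int xMin, int xMax, int y, int levelGap) {
            int x = (xMin + xMax) / 2;
            g.drawString(node.label, x, y);
            if (node.children == null || node.children.isEmpty())
                return;
            int slice = (xMax - xMin) / node.children.size();
            for (int i = 0; i < node.children.size(); i++) {
                int cMin = xMin + i * slice;
                int cMax = cMin + slice;
                g.drawLine(x, y + 2, (cMin + cMax) / 2, y + levelGap - 12);   // edge to the child
                draw(g, node.children.get(i), cMin, cMax, y + levelGap, levelGap);
            }
        }
    }

    Calling something like TreePainter.draw(g2d, root, 0, getWidth(), 20, 60) from a component's paint method would render the tree; libraries such as JUNG add proper layout algorithms (balanced, radial, etc.) on top of this basic idea.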

  • Looking for XML Tutorial

    Hello,
    I am looking for a tutorial on loading external images using XML. I created a menu with Flash that lists categories for products, and a marketing banner within the Flash movie.
    I need to take those images and place them outside of the Flash movie so the client can update the products or the image for the banner, and also to reduce the size of the Flash .swf.
    I am not having any luck finding a tutorial that will help me with this. I have downloaded the XML basics from gotoandlearn.com, but it just was not what I am looking for.
    Thanks for your help in advance,
    invtech
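
    A common pattern for this is to keep the image paths in a small external XML file next to the .swf; the Flash movie loads and parses it at runtime, so the client can change products or the banner by editing the XML or swapping the JPEGs, without touching the movie itself. A hypothetical layout (all element and file names invented for the example) might be:

    <?xml version="1.0" encoding="UTF-8"?>
    <menu>
        <banner image="images/banner.jpg" />
        <category name="Widgets">
            <product name="Widget A" image="images/widget_a.jpg" />
            <product name="Widget B" image="images/widget_b.jpg" />
        </category>
    </menu>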

    Ok, I don't yet see the underlying algorithm, but I am still
    looking.
    Tracy

  • Help: looking for Serialization suggestions (on strategies and products)

    Hello,
    I'm looking for suggestions on how to tackle a Serialization issue. Suggestions on tools to use, and approaches for "do it yourself" coding are both needed.
    I need to serialize a large evolving codebase with a minimum of rewriting (overriding readObject... for each class is out of the question; likewise embedding metadata comments for all classes is also out of the question). Currently the codebase is using Java's default Object Serialization (to a binary array). This is completely unacceptable for the long term (object versioning and migrations are a nightmare).
    As such I've determined that JSX/JSX2 looks like a feasible solution (the pricing is nice; the mechanism wraps readObject/writeObject, so a minimum of recoding; it automatically maps objects back and forth and doesn't require objects to be constrained with metadata or to follow a given format such as beans). I was leaning towards Object -> XML because:
    1) it is easy to read and therefore parse / upgrade / version
    2) it would fit into many different types of databases without the need for specialized decompositions (just put the whole object in a data cell)
    however, management really wants to go Object -> tables. I am a bit leery about this, but I keep hearing "we have no plans to move to any data technology other than RDB". Given those assumptions, it kind of pulls me in another direction. Namely, a table serialization would have greatly enhanced performance (one could scan for objects with the highest value of some instance var quite quickly; with the other scheme it would be rather expensive...). So now I'm looking for suggestions on products that serialize Object -> table. The product at www.objectmatter.com looks like it might work well, but the licensing seems expensive. JSX/JSX2 might have Object -> table capability in the future, but not now.
    Likewise, loath as I am to reinvent a complex wheel, I am looking for strategies on rolling my own object -> table mapping code.
    On the one hand I think I might implement the serialization in a way similar to JSX (wrapping readObject...) as this would seem to give a direct route to the relevant instance variables... On the other hand this would be even slower than Java's current serialization, and I think I might go a bit blind parsing the binary stream. I could write some "stand-alone" tool that scans .java or .class files and auto-generates a mapping file, then I could use some tool like Castor (however, given the complexities of some of my classes I think they might break Castor). The big problem is that my classes are complex and ugly (inner classes, anon classes...), so I get the feeling that developing an algorithm for serializing them will be difficult and error-prone. I was thinking of using Reflection and some guesses on getters/setters, but now I'm thinking of using Java's security model (basically to turn all instance variables into "public" ones for my serialization routine).
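
    As a rough illustration of that reflection route (not JSX's or Castor's actual API), the field-harvesting half might look something like the sketch below; setAccessible plays the role of the "turn everything public" trick, and the column naming, type conversion and SQL generation are deliberately left out:

    import java.lang.reflect.Field;
    import java.lang.reflect.Modifier;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Walks an object's class hierarchy and collects its instance fields into a
    // name -> value map, which a table mapper could then turn into one row.
    public class FieldHarvester {
        public static Map<String, Object> harvest(Object obj) throws IllegalAccessException {
            Map<String, Object> row = new LinkedHashMap<String, Object>();
            for (Class<?> c = obj.getClass(); c != null; c = c.getSuperclass()) {
                for (Field f : c.getDeclaredFields()) {
                    int mods = f.getModifiers();
                    if (Modifier.isStatic(mods) || Modifier.isTransient(mods))
                        continue;                 // skip statics and transients, as serialization does
                    f.setAccessible(true);        // the "make it public for my routine" step
                    row.put(c.getSimpleName() + "." + f.getName(), f.get(obj));
                }
            }
            return row;
        }
    }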

    An update:
    Actually, Castor didn't look as "full featured" as I originally thought...
    So for now I'm trying to write my own persistence manager. It's along the same lines as the article "Using Reflection to Automatically Map Objects to a Database" (which can be found at http://www.ajug.org/meetings/2001/DBMap.html ).
    The cool thing is that I have permission to make my work open source (I think that will help my company and others). My project goals are:
    1) create the persistence manager
    - the manager can be easily extended for any backing DB (within reason), so not just an RDB but LDAP, etc. This will enable the user to easily "plug & play" DB technologies.
    - the manager will have a very simple (and limited) API similar to serialization. Basically one will be able to save, delete, and retrieve objects without needing predefined DB-object mapping files. Likewise the user will be insulated from DB-specific details.
    - I am going to add a "context", which is basically a level of transaction support (either the backing DB supports it or you code it into your DB wrapper layer).
    - there will NOT be much in the way of query support.
    2) hook into existing "bridge" technologies
    - when I get done I want to see if I can get my auto-mapping and persistence manager to "cooperate" with stuff like Castor, Hibernate, OJB, TJDO, etc. This is a much lower priority, but I see it as strategic for the long-term growth of the project (it will enable users to transition from "dumb" auto-maps to "smart" custom ones seamlessly).
    I'm going to open another thread about this on the forum.
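
    For concreteness, the minimal save/delete/retrieve API described in goal 1 might be sketched roughly as below; the interface and type names are invented for this example and are not taken from Castor, Hibernate, OJB or any other product:

    import java.io.Serializable;
    import java.util.List;

    // Hypothetical minimal persistence-manager API: save, delete, retrieve,
    // with a "context" standing in for a unit of work / transaction.
    public interface PersistenceManager {
        PersistenceContext begin();                                    // transactional if the backing DB allows it

        Serializable save(PersistenceContext ctx, Object obj);         // returns a generated object id
        void delete(PersistenceContext ctx, Serializable id);
        Object retrieve(PersistenceContext ctx, Class<?> type, Serializable id);
        List<?> retrieveAll(PersistenceContext ctx, Class<?> type);    // deliberately no richer query support

        void commit(PersistenceContext ctx);
        void rollback(PersistenceContext ctx);
    }

    // Marker for whatever per-unit-of-work state a backing store (RDB, LDAP, ...) needs.
    interface PersistenceContext { }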

  • Looking for must-read programming reference material

    I am relatively new to programming and would like a reading list of must-read programming references. I know several languages, but I am not looking for materials about any particular language; instead, I would like to read up on more fundamental things such as programming concepts, programming algorithms, programming paradigms,  computer architectures, programming style, etc. What are the classic, must-read books and articles that I should begin reading? Thank you for your suggestions.

    I've never used assembly for a 64-bit system, but I'm sure most of the same concepts apply. In fact I'm pretty sure any change is analogous to the change from 16 to 32, which wasn't that drastic.
    With assembly language, learning the core concepts is the biggest step; the difference between 16 and 32 bit wasn't like learning a new language, rather it was just having more and bigger registers. I don't do any actual programming in assembly - I dabbled a bit for fun, but I don't 'use' it. But the concepts I learned through assembly have helped me in every other programming language. Given that, I'm sure a book on 32-bit assembly would be useful for a 64-bit system.
    Consider too that assembly is a bit of a dying art.  Pair that with the fact that 64-bit systems are relatively new, and there won't be many people who can write a quality book about their lifetime of experience with 64-bit assembly.
    Lastly (in this very disorganized listing of barely related points), 32-bit to 64-bit is also a trivial change compared to the various instruction sets that have been used. If/when you understand the similarities and differences between the various assembly languages and instruction sets, that's when you really start using the strengths of your particular architecture. (I'm tempted to make a Matrix reference: "There is no spoon".) My favorite assembly languages, just for the fun of it, are one-instruction-set systems. When I realized that one single instruction used repeatedly can create the diversity of programs we use ... I was in geek heaven. I put that day on par with learning about hardware-software interchangeability, and the fact that our modern computers are made up of a vast series of only one type of logic gate ... and the day I learned some lambda calculus ... (end geekgasm)
    Anyhow, learning to 'think' in assembly language can help all your programming (IMHO) regardless of whether you ever speak to your computer in that language.  So, yes, I'd say the 32-bit book would be worth reading even if your current computer is 64.
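
    On the one-instruction-set point: the classic example is SUBLEQ ("subtract and branch if less than or equal to zero"), and an interpreter for it fits in a handful of lines. A toy sketch, not taken from any particular book:

    // Toy SUBLEQ interpreter. Memory holds triples (a, b, c); each step does
    // mem[b] -= mem[a] and jumps to c if the result is <= 0, otherwise falls through.
    // A negative c halts. Everything else is built from this one instruction.
    public class Subleq {
        static void run(int[] mem) {
            int pc = 0;
            while (pc >= 0 && pc + 2 < mem.length) {
                int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
                mem[b] -= mem[a];
                pc = (mem[b] <= 0) ? c : pc + 3;
            }
        }

        public static void main(String[] args) {
            // Two instructions: the first computes mem[9] -= mem[10] (7 - 3),
            // the second zeroes scratch cell 8 and branches to -1 to halt.
            int[] mem = { 10, 9, 3,   8, 8, -1,   0, 0, 0,   7, 3 };
            run(mem);
            System.out.println("mem[9] = " + mem[9]);   // prints mem[9] = 4
        }
    }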
