Nearest Neighbor Interpolation

I'm trying to enable this in After Effects; it seems silly that it's not included as standard like it is in Photoshop.  I suppose I could get around it by exporting the image sequence with a restrictive palette, but that distorts the colors more than necessary, and it's an extra unnecessary step I'd like to avoid.
So, if someone could give me a clue as to how to get this working in AE, either through a menu option I'm not seeing, an effect or a plug-in, that'd be great.
If not, could someone let me know whether it's possible to make a plug-in that does this, and either be willing to make said plug-in or point me to a community that specializes in plug-ins? That'd be fantastic.
And in terms of effects, using a simple choke is great, but it only affects the alpha channel of the animation, not the inside.  There was another effect I used, though I can't remember what it was (you set it to a certain number of color levels, and it just picks random colors to fill in, so not cool).
I'm working with pixel art, and everything works perfectly except for the fact that it blurs pixels that it shouldn't be blurring.

You have no reason to be rude. 
I've been using After Effects for several years and I know my way around it; however, this particular issue escapes me. There isn't a reliable answer anywhere, hence why I asked the series of questions.  I do realize that pixel art isn't After Effects' intended usage; however, it contains many other features that make animation quick and easy for multiple sprites that use similar animations.  Except I either have to accept the blurred pixels or take an unnecessary amount of time to correct them.  All of which could be avoided if I could just find a way to remove the blurring.  Interpolation can be applied after pixels have already been rendered; I know this because there are many programs that can apply it to images that already exist (though they resize them, which is fine, because I end up doing that anyway).  So a plugin that handles this inside After Effects shouldn't be that far-fetched.  Perhaps you could learn a thing or two about interpolation; here is a good place to start: http://en.wikipedia.org/wiki/Image_scaling
And rather than assuming I have limited experience, you could've just been an adult and said "I don't know" or just not have said anything at all.  Your post was insulting, and not the least bit helpful.
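For reference, nearest-neighbor resampling itself is trivial to express outside of After Effects: each output pixel simply copies the closest input pixel, which is why it keeps pixel-art edges hard. A minimal Java sketch using java.awt.image.BufferedImage (an integer scale factor is assumed; this is not an AE plug-in, just the resampling rule a plug-in would implement):

```java
import java.awt.image.BufferedImage;

public class NearestNeighborScale {
    // Upscale by an integer factor: every destination pixel copies the
    // single nearest source pixel, so edges stay hard instead of blurring.
    static BufferedImage scale(BufferedImage src, int factor) {
        BufferedImage out = new BufferedImage(
                src.getWidth() * factor, src.getHeight() * factor,
                BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < out.getHeight(); y++)
            for (int x = 0; x < out.getWidth(); x++)
                out.setRGB(x, y, src.getRGB(x / factor, y / factor));
        return out;
    }
}
```

An actual AE plug-in would apply the same per-pixel rule inside its render callback.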

Similar Messages

  • The nearest neighbor interpolation problem

    Hello,
    After my Photoshop CS6 update I have a problem with Nearest Neighbor interpolation. It was working correctly before. I set the General and Free Transform preferences first, and then rotate pictures by 45 degrees. It always gave me pixel-perfect results in PS CS6. Now that has stopped working. Am I missing something? I have made a video showing my results and all the steps I am taking:
    How about you? What are your results after rotating any picture by 45 degrees in Nearest Neighbor interpolation mode? I would be happy to know what is wrong now and how to achieve good results. I will appreciate any help.

    What are your results after rotating any picture by 45 degrees in Nearest Neighbor interpolation mode?
    The same mess with Ps 13.0.1 on OS X 10.6.8.
    Very interesting that the Free Transform looks good until it is actually committed.
    I don't know whether it was different in 13.0.

  • Exporting a movie with the settings "nearest neighbor"

    Hello everybody!
    I am trying to upscale a movie from an old video game. The resolution is 320x240 and should be upscaled to 1440x1080. The problem is that I don't want Adobe Premiere Pro 6 to upscale it using a chroma subsampling method (http://ingomar.wesp.name/2011/04/dosbox-gameplay-video-capture.html); instead I want to use something similar to the "nearest neighbor" option (the one you have in Adobe Photoshop when you resize images). The reason I want this is that I want to keep the pixels from the video game sharp. Is this possible to do?

    And if you take a screen cap, import it into Photoshop and upscale by a factor 4 (with Nearest Neighbour) the result is amazing!
    Actually, I find up-rezzing with the Nearest Neighbor algorithm to be about the lowest quality of any of the algorithms. It came first, and is basically a holdover from about PS version 2.5. Bicubic interpolation was added later, and then Bicubic Smoother and Bicubic Sharper.
    However, I am always working with continuous tone, high-rez digital photographs, and not screen-caps, so perhaps my material is not the ultimate to judge Nearest Neighbor?
    Still, for a 16x increase, about the only thing that I can suggest (and this is for stills, and not video) would be Genuine Fractals (once Human Software, but acquired by another company). Still, that is beyond the max limit that I would be comfortable with.
    Others have mentioned Red Giant's Magic Bullet Instant HD, and I would download the trial, then test. That might be "as good as it gets."
    Good luck,
    Hunt
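A side note on the 320x240 to 1440x1080 case: 1440/320 = 4.5, a non-integer factor, so even nearest neighbor must give some source pixels 4 screen pixels of width and others 5. The index mapping itself is one line of integer arithmetic; a Java sketch (method and class names are mine, not from Premiere or Photoshop):

```java
public class NearestIndex {
    // For destination column dstX in a dstW-wide frame, the nearest-neighbor
    // source column in a srcW-wide frame. Integer truncation picks the
    // source pixel whose cell contains the destination sample.
    static int srcIndex(int dstX, int dstW, int srcW) {
        return (int) ((long) dstX * srcW / dstW);
    }
}
```

The same formula applies per row with the heights; for integer factors it degenerates to plain division, which is why integer upscales look the most even.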

  • Mountains and nearest neighbor do not mix

    Note - this is not an Oracle question - but something as food for thought. Appropriate I think as graph is now a big component of the product...
    While on vacation in Colorado last week, we decided to head up to Crested Butte to see a local play production. My wife, searching for a local motel on a site I will not name, found one with great ratings that was quite reasonable and about "30 miles" away. Not being from Colorado, she went ahead and booked it, and then told me where it was. I shook my head. That is nowhere near 30 miles away, at least by road - it is on the other side of the mountain!
    Obviously this company uses a point to point with nearest neighbor "as the crow flies" method. While simple points might work reasonably well for small areas, say in a city road grid, it is a horrible solution for larger areas and places like Colorado. With mountains that take hours to drive around, those 30 miles or so turn out to be 90 miles of road and take about 3 hours to drive based on posted speeds!
    Needless to say, I was upset. After spending almost an hour on the phone to get this all straightened out and the bill credited, I thought I'd point this out. Not only for "buyer beware" - but mainly as a good example of what not to do when designing map-based systems for consumers. KISS is usually a good approach, but in this case it is a horrible one when a road network with speeds AKA network data model graph solution is required.
    Bryan

    Good point.
    Thinking further, even in a city grid you have one-way streets, parks, longer blocks, etc., which can make the real distance (by road) much further than "as the crow flies". So I'm not sure nn is really good for any such "close to me" analysis tool.
    And yes thanks, had a great vacation. Miss the cool weather from my native state!
    Bryan
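For context, the "as the crow flies" number such sites show is typically a great-circle distance; the standard haversine formula makes it obvious the computation knows nothing about roads or mountains. A Java sketch (mean Earth radius approximated as 6371 km):

```java
public class Haversine {
    // Great-circle ("as the crow flies") distance in kilometres
    // between two latitude/longitude points, via the haversine formula.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius, km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }
}
```

A road-network (graph) solution would instead sum edge lengths along drivable segments, which is exactly the distinction Bryan's motel story illustrates.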

  • Cloud of Points Tree /Nearest Neighbor

    Hi guys;
    I'm using a binary tree to construct a tree of two-dimensional points. Here is how I construct the tree:
    The tree is built by successive subdivision of the plane by half-lines through a given point.
    The first subdivision is done by, say, a horizontal line across a given point. The plane is then split into two regions: the upper and lower half-planes. The points in the upper half-plane go into the right subtree and those in the lower half-plane go into the left subtree.
    In the next step the plane is subdivided by a VERTICAL half-line this time: the points to the left of this line are placed in the left subtree and those on the right are placed in the right subtree.
    Now I managed to write an insert() method for this:
    static Tree insert(Point p, Tree a, boolean vertical) {
        if (a == null)
            return new Tree(null, p, null);
        boolean estAGauche = vertical ? (p.x < a.p.x) : (p.y < a.p.y);
        if (estAGauche) {
            Tree g = insert(p, a.filsG, !vertical); // filsG = left subtree
            return new Tree(g, a.p, a.filsD);
        } else {
            Tree d = insert(p, a.filsD, !vertical); // filsD = right subtree
            return new Tree(a.filsG, a.p, d);
        }
    }

    static Tree insert(Point p, Tree a) {
        return insert(p, a, false);
    }
    Now I want to tackle another problem: given that the tree is filled with a cloud of points,
    if I pick a given point in the cloud, I want to find the nearest neighbor of this point using the tree structure I described above.
    Is this possible? How can I implement it?
    Many thanks for helping !
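For what it's worth, the structure described is essentially a 2-d tree (kd-tree), and nearest-neighbor search on it is possible: recursively descend toward the query point, then cross a splitting line only if the other side could still contain something closer than the best match so far. A self-contained Java sketch (the `filsG`/`filsD` field names follow the insert() code in the question; the rest is my own illustration, not tested against the poster's data):

```java
public class KdNearest {
    static class Point {
        double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }
    static class Tree {
        Tree filsG; Point p; Tree filsD;
        Tree(Tree g, Point p, Tree d) { filsG = g; this.p = p; filsD = d; }
    }

    // squared Euclidean distance (avoids sqrt for comparisons)
    static double dist2(Point a, Point b) {
        double dx = a.x - b.x, dy = a.y - b.y;
        return dx * dx + dy * dy;
    }

    // Recursive nearest-neighbor search; 'vertical' must alternate the same
    // way it does in insert(), starting with false at the root.
    static Point nearest(Tree a, Point q, boolean vertical, Point best) {
        if (a == null) return best;
        if (best == null || dist2(a.p, q) < dist2(best, q)) best = a.p;
        double diff = vertical ? q.x - a.p.x : q.y - a.p.y;
        Tree near = diff < 0 ? a.filsG : a.filsD;
        Tree far  = diff < 0 ? a.filsD : a.filsG;
        best = nearest(near, q, !vertical, best);
        // Only cross the splitting line if the other side could still win.
        if (diff * diff < dist2(best, q))
            best = nearest(far, q, !vertical, best);
        return best;
    }
}
```

On well-distributed points this visits O(log n) nodes on average, which matches the "almost instantaneous" behaviour reported later in this thread.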

    this is because i will be dealing with a very large number of points, so efficiency here is definitely a crucial issue... what do you think, Josiah?

    Well, I've used that little algorithm for a global map with, say, 1e6 known points and the response was always almost instantaneous. It depends on the distribution of your known points, i.e. you can easily draw pathetic situations on a piece of paper, but on average the nearest point is found within a few searches (after finding those two lines using an O(log(n)) binary search)...
    Two dimensional locality doesn't help you much when you're using quad-trees; as has been said before in this thread: two nearest points may be miles apart in the tree.
    What does help (although not much) is, after sorting the points on increasing Y values, do a minor sort using Gray codes on the X coordinate. But that's for a later discussion ;-)
    kind regards,
    Jos
    Well, I've used that little algorithm for a global map with, say, 1e6 known points and the response was always almost instantaneous. It depends on the distribution of your known points, i.e. you can easily draw pathetic situations on a piece of paper, but on average the nearest point is found within a few searches (after finding those two lines using an O(log(n)) binary search)...
    What you say is very encouraging, so I will try your little algorithm for sure.
    Anyway, at this first stage of my application I'm not demanding that some extraordinary work be achieved, and this one would surely be very sufficient.
    Two dimensional locality doesn't help you much when you're using quad-trees; as has been said before in this thread: two nearest points may be miles apart in the tree.
    You're right!
    What does help (although not much) is, after sorting the points on increasing Y values, do a minor sort using Gray codes on the X coordinate. But that's for a later discussion ;-)
    something interesting I'll be patiently waiting for ;)
    kind regards,
    Jos

    Many thanks,
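As a footnote on the ordering trick Jos defers: a closely related and widely documented locality-preserving order is the Z-order (Morton) curve, which interleaves the bits of the X and Y coordinates so that points near each other in the plane tend to be near each other in the sorted order. A small Java sketch (this is the Morton-curve variant, not necessarily the exact Gray-code scheme Jos had in mind):

```java
public class Morton {
    // Interleave the low 16 bits of x and y: x bits go to even positions,
    // y bits to odd positions, producing a Z-order (Morton) key.
    static long encode(int x, int y) {
        long key = 0;
        for (int i = 0; i < 16; i++) {
            key |= (long) ((x >> i) & 1) << (2 * i);
            key |= (long) ((y >> i) & 1) << (2 * i + 1);
        }
        return key;
    }
}
```

Sorting points by this key groups spatially close points into contiguous runs, which helps range and neighbor queries on a flat sorted array.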

  • Nearest Neighbor Query takes 7 minutes

    Hello there, First time poster on the forums. 
    I've been looking into spatial comparison recently and have come across a few problems. The query I run takes 7 minutes to return the nearest neighbor. 
    My table that has the Geographical locations is of the following structure and has 1.7 million rows. 
    CREATE TABLE [dbo].[PostCodeData](
    [OutwardCode] [varchar](4) NOT NULL,
    [InwardCode] [varchar](3) NOT NULL,
    [Longitude] [float] NULL,
    [Latitude] [float] NULL,
    [GeoLocation] [geography] NULL,
    CONSTRAINT [PK_PostCodeData] PRIMARY KEY CLUSTERED
    (
    [OutwardCode] ASC,
    [InwardCode] ASC
    )
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    I have another table with many records on which I only have a Geography point (which I call Geolocation) and I'm trying to get the postcode from the PostCodeData table based on the nearest [GeoLocation] naturally.
    This is my query at the moment :
    SELECT top 2 [PostCode]
    ,[Geolocation]
    , (select top 1 pc.InwardCode from PostCodeData pc order by pc.GeoLocation.STDistance(bg.Geolocation)) found_post_code
    FROM [tbl_potatoes] bg
    where bg.Geolocation is not null
    This query is taking 7 minutes, and as you can see I'm only doing this for 2 (top 2) records from the [tbl_potatoes] table. What would happen if I wanted to process the whole table, which has around 700k rows?
    What I've tried: 
    1. Created a spatial index.
    2. Followed a post somewhere on these forums about applying it as a hint (WITH (INDEX(ixs_postcode)))
    It didn't let me apply the hint and gave the following error : 
    Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.
    Any help is appreciated. 

    Just before the end of the day yesterday, a colleague of mine spotted the missing 'Where' in the subquery and we added it in.
    The query now looks as such : 
    UPDATE TOP (200) tbl_potatoes
    SET PostCode =
    (
    select top 1 pc.OutwardCode
    from PostCodeData pc
    where pc.GeoLocation.STDistance(Geolocation) < 1000
    order by pc.GeoLocation.STDistance(Geolocation)
    )
    WHERE Geolocation is not null;
    The problem is, this query still takes a while. It now takes 3 minutes 27 seconds for 200 rows. Not that this bit of math would be accurate in any way, but if it takes 207 seconds to do 200 rows, doing 300,000 will most likely take somewhere between 80 and 90 hours.
    This is hardly going to be something that is going to work.
    That was the update - the SELECT statement :
    SELECT top 200 [PostCode]
    ,[Geolocation]
    , (select top 1 pc.OutwardCode
    from PostCodeData pc
    where pc.GeoLocation.STDistance(bg.Geolocation) < 1000
    order by pc.GeoLocation.STDistance(bg.Geolocation)) found_post_code
    FROM [tbl_potatoes] bg
    where Geolocation is not null
    Takes just 23 seconds for 200 records. Meaning it would take around 10 hours for 300k.
    I tried Isaac's second example where he uses STBuffer(10000), but it adds even more time to the select statement, and quite frankly I don't get what is going on in his third example where he talks about his declarative syntax.

  • K Nearest Neighbor Algorithm Code in Java

    I am looking for Java code for the K nearest neighbor algorithm (considering clusters if possible, which is able to detect outlier clusters).
    Thanks!

    interface MS { // metric space
        double dist(MS p); // distance between this point and the argument
    }

    static MS[] kNN(MS[] data, MS target, int k) {
        MS[] res = new MS[k];
        double[] d = new double[k];
        int n = 0; // number of elements in res
        // check inputs for validity
        if (data.length == 0 || k <= 0) return res; // bail on bad input
        double dd = target.dist(data[0]); // load first one into the list
        res[n] = data[0];
        d[n++] = dd;
        // go through all other data points
        for (int i = 1; i < data.length; i++) {
            if (n < k || d[k - 1] > (dd = target.dist(data[i]))) { // add one
                if (n < k) { res[n] = data[i]; d[n++] = dd; }
                int j = n - 1;
                while (j > 0 && dd < d[j - 1]) { // slide bigger distances up
                    res[j] = res[j - 1];
                    d[j] = d[j - 1];
                    j--;
                }
                res[j] = data[i];
                d[j] = dd;
            }
        }
        return res;
    }
    As I said, I don't feel that this code is that difficult. I would be more concerned as to whether the data admits the particular definition of outlier that you have selected.
    It is a mistake to assume that one particular definition of a cluster or an outlier will work in all cases. For example, using the definition you supply, if you have one single mega cluster of a thousand elements located within a foot of one another here on earth, and you have about 10 other "clusters", each with but a single element, each a single light year apart but all somewhere out near the Andromeda galaxy, you will conclude that cluster centers are on average about one light year apart but that there is one bad outlier, that one way over there on earth.
    But, hey it's your data. Go nuts.
    Just for the record, I don't typically test code that I type into the forum so what I have supplied may not even compile and certainly should not be assumed to be correct. You will need to read it, and treat it as an outline for the code that you will need to write.
    Enjoy

  • Nearest neighbor search with quadtrees

    hi guys;
    i'm interested in implementing an algorithm to watch for possible collisions between celestial planets...
    this algorithm consists of checking, for a given planet with coordinates (x,y), whether some other planet in the system has come closer than a given distance... this algorithm should perform better than the naive one, which consists of checking the distance against every other planet and which is O(n) (n is the number of planets in the system)
    it would be preferable to use a quadtree for a nearest neighbor search algorithm to implement this collision watch algorithm...
    please help me! some sample code would be very welcome
    thanks

    Cross-post: http://forum.java.sun.com/thread.jsp?thread=571601&forum=31
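A quadtree works here, but for the fixed-radius collision check described, a uniform grid (spatial hash) is a simpler structure with the same effect: with cell size equal to the collision distance, any planet within range of another must lie in the same or an adjacent cell. A hedged Java sketch of that idea (names are mine, not from the cross-posted thread):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollisionGrid {
    // Returns true if any two points are closer than r, using a uniform
    // grid with cell size r: near neighbors can only sit in the same or
    // an adjacent cell, so each point checks only its 3x3 neighborhood.
    static boolean anyCloserThan(double[][] pts, double r) {
        Map<Long, List<double[]>> grid = new HashMap<>();
        for (double[] p : pts) {
            long cx = (long) Math.floor(p[0] / r);
            long cy = (long) Math.floor(p[1] / r);
            for (long i = cx - 1; i <= cx + 1; i++)
                for (long j = cy - 1; j <= cy + 1; j++) {
                    // key collisions only cause extra (harmless) distance checks
                    List<double[]> cell = grid.get(i * 1_000_003L + j);
                    if (cell != null)
                        for (double[] q : cell) {
                            double dx = p[0] - q[0], dy = p[1] - q[1];
                            if (dx * dx + dy * dy < r * r) return true;
                        }
                }
            grid.computeIfAbsent(cx * 1_000_003L + cy, k -> new ArrayList<>()).add(p);
        }
        return false;
    }
}
```

For roughly uniformly spread planets this is close to O(n) per sweep; a quadtree earns its keep when the distribution is highly clustered or the query radius varies.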

  • K-nearest neighbors in MLT - any example?

    Hi, 
    There are currently no example on the LabVIEW MLT about the k-NN vi.
    We would like to use k-NN and would like to refer to some examples. 
    We couldn't get the distance control VI to work, and we are a bit puzzled by the fact that "examples" only accepts 1D arrays (our data is 2400 vectors, each of size 1x256).
    Our understanding of the VI is that "distance" specifies which type of norm, "examples" is the labeled data, "sample" is the new, unlabeled data. 
    When running with the current setup, there seems to be an error with the distance control.
    Thanks for your help in advance, 
    N&A
    Attachments:
    kNN pb.PNG ‏9 KB

    Hi Pallikaranai,
    I attempted several more times to work with the MLT but couldn't find any way, even getting onto NI themselves, who tried to help but couldn't. But I did get help from a LabVIEW user and a paper, which let me use a method quite similar to kNN. Basically I had databases of 'knocking' or 'tapping' data and I wanted to compare my live signal to these databases to see if it was a 'tap' or a 'knock'.
    I put my live signal and database amplitudes through this method:
    Wavelet transform
    Covariance
    SVD
    find log base 10 of vector S of the SVD
    find the Mean, Max and Min values of this and add them together to find value X
    compare this value X to the same values of the databases after they were put through the same method; the smallest difference gave the database
    that the event belonged in.
    I have also attached my code for my DB showing how I did it in LabVIEW. I hope it's of help; it might not be at all. NI had a look over this method themselves too,
    and they said it should be ok. Are you applying this to a signal yourself or anything similar?
    Attachments:
    SampleSignals.vi ‏223 KB

  • CS4 NOT capable of sharp displays at all zoom levels

    I must have been asleep, until now, and missed the significance and importance of what follows.
    In post #11 here:
    http://forums.adobe.com/thread/375478?tstart=30
    on 19 March 2009 Chris Cox (Adobe Photoshop Engineer - his title on the old forums) said this, in a discussion regarding sharpness in CS4:
    "You can't have perfectly sharp images at all zoom levels.". Unfortunately, my experience with CS4 since its release late last year has repeatedly confirmed the correctness of this statement.
    What makes this statement so disturbing is that it contradicts an overwhelming amount of the pre- and post-release promotional advertising of CS4 by Adobe, to the effect that the OpenGL features of CS4 enable it to display sharp images at all zoom levels and magnifications. What is surprising is that this assertion has been picked up and regurgitated in commentary by other, sometimes highly experienced, Ps users (some unconnected with, but also some directly connected with, Adobe). I relied upon these representations when making my decision to purchase the upgrade from CS3 to CS4. In fact, they were my principal reason for upgrading. Without them, I would not have upgraded. Set out in numbered paragraphs 1 to 6 below is a small selection only of this material.  
    1. Watch the video "Photoshop CS4: Buy or Die" by Deke McClelland (inducted into the Photoshop Hall of Fame, according to his bio) on the new features of CS4 in a pre-release commentary to be found here:
    http://fyi.oreilly.com/2008/09/new-dekepod-deke-mcclelland-on.html
    Notice what he says about zooming with Open GL: "every zoom level is a bicubically rendered thing of beauty". That, when viewed with the zooming demonstrated, can only be meant to convey that your image will be "sharp" at all zoom levels. I'm sure he believes it too - Deke is someone who is noted for his outspoken criticism of Photoshop when he believes it to be deserved. It would seem that he must not have experimented and tested to the extent that others posting in this forum have done so.
    2. Here's another Adobe TV video from Deke McClelland:
    http://tv.adobe.com/#vi+f1584v1021
    In this video Deke discusses the "super smooth" and "very smooth" zooming of CS4 at all zoom levels achieved through the use of OpenGL. From the context of his comments about zooming to odd zoom levels like 33.33% and 52.37%, it is beyond doubt that Deke's use of the word "smooth" is intended to convey "sharp". At the conclusion of his discussion on this topic he says that, as a result of CS4's "smooth and accurate" as distinct from "choppy" (quoted words are his) rendering of images at odd zoom levels (example given in this instance was 46.67%), "I can actually soft proof sharpening as it will render for my output device".
    3. In an article by Philip Andrews at photoshopsupport.com entitled 'What's New In Adobe Photoshop CS4 - Photoshop 11 - An overview of all the new features in Adobe Photoshop CS4',
    see: http://www.photoshopsupport.com/photoshop-cs4/what-is-new-in-photoshop-cs4.html
    under the heading 'GPU powered display', this text appears :
    "Smooth Accurate Pan and Zoom functions – Unlike previous versions where certain magnification values produced less than optimal previews on screen, CS4 always presents your image crisply and accurately. Yes, this is irrespective of zoom and rotation settings and available right up to pixel level (3200%)." Now, it would be a brave soul indeed who might try to argue that "crisply and accurately" means anything other than "sharply", and certainly, not even by the wildest stretch of the imagination, could it be taken to mean "slightly blurry but smooth" - to use the further words of Chris Cox also contained in his post #11 mentioned in the initial link at the beginning of this post.
    4. PhotoshopCAFE has several videos on the new features of CS4. One by Chris Smith here:
    http://www.photoshopcafe.com/cs4/vid/CS4Video.htm
    is entitled 'GPU Viewing Options'. In it, Chris says, whilst demonstrating zooming an image of a guitar: "as I zoom out or as I zoom in, notice that it looks sharp at any resolution. It used to be in Photoshop we had to be at 25, 50, 75% (he's wrong about 75) to get the nice sharp preview but now it shows in every magnification".
    5. Here's another statement about the sharpness of CS4 at odd zoom levels like 33.33%, but inferentially at all zoom levels. It occurs in an Adobe TV video (under the heading 'GPU Accelerated Features', starting at 2 min 30 secs into the video) and is made by no less than Bryan O'Neil Hughes, Product Manager on the Photoshop team, found here:
    http://tv.adobe.com/#vi+f1556v1686
    After demonstrating zooming in and out of a bunch of documents on a desk, commenting about the type in the documents which is readily visible, he says : "everything is nice and clean and sharp".
    6. Finally, consider the Ps CS4 pdf Help file itself (both the original released with 11.0 and the revised edition dated 30 March 2009 following upon the release of the 11.0.1 update). Under the heading 'Smoother panning and zooming' on page 5, it has this to say: "Gracefully navigate to any area of an image with smoother panning and zooming. Maintain clarity as you zoom to individual pixels, and easily edit at the highest magnification with the new Pixel Grid." The use of the word "clarity" can only mean "sharpness" in this context. Additionally, the link towards the top of page 28 of the Help file (topic of Rotate View Tool) takes you to yet another video by Deke McClelland. Remember, this is Adobe itself telling you to watch this video. 5 minutes and 40 seconds into the video he says: "Every single zoom level is fluid and smooth, meaning that Photoshop displays all pixels properly in all views which ensures more accurate still, video and 3D images as well as better painting, text and shapes." Not much doubt that he is here talking about sharpness.
    So, as you may have concluded, I'm pretty upset about this situation. I have participated in another forum (which raised the lack of sharp rendering by CS4 on several occasions) trying to work with Adobe to overcome what I initially thought may have been only a problem with my aging (but nevertheless, just-complying) system or outdated drivers. But that exercise did not result in any sharpness issue fix, nor was one incorporated in the 11.0.1 update to CS4. And in this forum, I now read that quite a few, perhaps even many, others, with systems whose specifications not only match but well and truly exceed the minimum system requirements for OpenGL compliance with CS4, also continue to experience sharpness problems. It's no surprise, of course, given the admission we now have from Chris Cox. It seems that CS4 is incapable of producing the sharp displays at all zoom levels it was alleged to achieve. Furthermore, it is now abundantly clear that, with respect to the issue of sharpness, it is irrelevant whether or not your system meets the advertised minimum OpenGL specifications required for CS4, because the OpenGL features of CS4 simply cannot produce the goods. What makes this state of affairs even more galling is that, unlike CS3 and earlier releases of Photoshop, CS4 with OpenGL activated does not even always produce sharp displays at 12.5, 25, and 50% magnifications (as one example only, see posts #4 and #13 in the initial link at the beginning of this post). It is no answer to say, and it is ridiculous to suggest (as some have done in this forum), that one should turn off OpenGL if one wishes to emulate the sharp display of images formerly available.

    Thanks, Andrew, for bringing this up.  I have seen comments and questions in different forums from several CS4 users who have had doubts about the new OpenGL display functionality and how it affects apparent sharpness at different zoom levels.  I think part of the interest/doubt has been created by the over-the-top hype that has been associated with the feature as you documented very well.
    I have been curious about it myself and honestly I didn't notice it at first but then as I read people's comments I looked a little closer and there is indeed a difference at different zoom levels.  After studying the situation a bit, here are some preliminary conclusions (and I look forward to comments and corrections):
    The "old", non-OpenGL way of display was using nearest-neighbor interpolation.
    I am using observation to come to this conclusion, using comparison of images down-sampled with nearest-neighbor and comparing them to what I see in PS with OpenGL turned off.  They look similar, if not the same.
    The "new", OpenGL way of display is using bilinear interpolation.
    I am using observation as well as some inference: The PS OpenGL preferences have an option to "force" bilinear interpolation because some graphics cards need to be told to force the use of shaders to perform the required interpolation. This implies that the interpolation is bilinear.
    Nothing is truly "accurate" at less than 100%, regardless of the interpolation used.
    Thomas Knoll, Jeff Schewe, and others have been telling us that for a long time, particularly as a reason for not showing sharpening at less than 100% in ACR (We still want it though ).  It is just the nature of the beast of re-sampling an image from discrete pixels to discrete pixels.
    The "rule of thumb" commonly used for the "old", non-OpenGL display method to use 25%, 50%, etc. for "accurate" display was not really accurate.
    Those zoom percentages just turned out to be less bad than some of the other percentages and provided a way to achieve a sort of standard for comparing things.  Example: "If my output sharpening looks like "this" at 50%, then it will look close to "that" in the actual print."
    The "new", OpenGL interpolation is certainly different and arguably better than the old interpolation method.
    This is mainly because the more sophisticated interpolation prevents drop-outs that occurred from the old nearest-neighbor approach (see my grid samples below).  With nearest-neighbor, certain details that fall into "bad" areas of the interpolated image will be eliminated.  With bilinear, those details will still be visible but with less sharpness than other details.  Accuracy with both the nearest-neighbor and bilinear interpolations will vary with zoom percentage and where the detail falls within the image.
    Since the OpenGL interpolation is different, users may need to develop new "rules of thumb" for zoom percentages they prefer when making certain judgements about an image (sharpening, for example).
    Note that anything below 100% is still not "accurate", just as it was not "accurate" before.
    As Andrew pointed out, the hype around the new OpenGL bilinear interpolation went a little overboard in a few cases and has probably led to some incorrect expectations from users.
    The reason that some users seem to notice the sharpness differences with different zooms using OpenGL and some do not (or are not bothered by it) I believe is related to the different ways that users are accustomed to using Photoshop and the resolution/size of their monitors.
    Those people who regularly work with images with fine details (pine tree needles, for example) and/or fine/extreme levels of sharpening are going to see the differences more than people who don't.  To some extent, I see this as similar to people who battle with moire: they are going to have this problem more frequently if they regularly shoot screen doors and people in fine-lined shirts.  Resolution of the monitor used may also be a factor.  The size of the monitor in itself is not a factor directly, but it may influence how the user uses the zoom, and that may in turn have an impact on whether they notice the difference in sharpness or not.  CRT vs LCD may also play a role in noticeability.
    The notion that the new OpenGL/bilinear interpolation is sharp except at integer zoom percentages is incorrect.
    I mention this because I have seen at least one thread implying this, and an Adobe employee participated who seemed to back it up.  I do not believe this is correct.  There are some integer zoom percentages that will appear less sharp than others.  It doesn't have anything to do with integers - it has to do with the interaction of the interpolation, the size of the detail, and how that detail falls into the new, interpolated pixel grid.
    Overall conclusion:
    The bilinear interpolation used in the new OpenGL display is better than the old, non-OpenGL nearest-neighbor method, but it is not perfect.  I suspect, actually, that there is no "perfect" way of "accurately" producing discrete pixels at less than 100%.  It is just a matter of using more sophisticated interpolation techniques as computer processing power allows and adopting higher-resolution displays as that technology allows.  When I think about it, that appears to be just what Adobe is doing.
    Some sample comparisons:
    I am attaching some sample comparisons of nearest-neighbor and bilinear interpolation.  One is of a simple grid made up of 1 pixel wide lines.  The other is of an image of a squirrel.  You might find them interesting.  In particular, check out the following:
    Make sure you are viewing the JPEGs at 100%; otherwise you are applying interpolation on top of interpolation.
    Notice how in the grid, a 50% down-sample using nearest-neighbor produces no grid at all!
    Notice how the 66.67% drops out some lines altogether in the nearest-neighbor version and these same lines appear less sharp than others in the bilinear version.
    Notice how nearest-neighbor favors sharp edges.  It isn't accurate but it's sharp.
    On the squirrel image, note how the image is generally more consistent between zooms for the bilinear versions.  There are still differences in sharpness at different zoom percentages for bilinear, though; I just didn't include enough samples to show that clearly here.  You can see this yourself by comparing the results of zooms a few percentages apart.
    Well, I hope that was somewhat helpful.  Comments and corrections are welcomed.
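    The 50% grid-dropout observation above is easy to reproduce outside of Photoshop. Below is a minimal Java sketch (my own code, not anything from Photoshop - the class and method names are invented for illustration) that builds a 1-pixel grid and downsamples it by hand with nearest-neighbor sampling; every grid line lands on a discarded pixel, so the grid vanishes entirely:

    ```java
    import java.awt.image.BufferedImage;

    public class GridDropout {
        // Build a white image with 1-pixel black grid lines every 2 pixels
        // (lines sit on the odd rows and columns).
        static BufferedImage makeGrid(int size) {
            BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++) {
                    boolean onLine = (x % 2 == 1) || (y % 2 == 1);
                    img.setRGB(x, y, onLine ? 0x000000 : 0xFFFFFF);
                }
            return img;
        }

        // Manual nearest-neighbor 50% downsample: each destination pixel
        // copies exactly one source pixel and discards the rest.
        static BufferedImage nearestHalf(BufferedImage src) {
            int w = src.getWidth() / 2, h = src.getHeight() / 2;
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    dst.setRGB(x, y, src.getRGB(x * 2, y * 2)); // samples even pixels only
            return dst;
        }

        public static void main(String[] args) {
            BufferedImage half = nearestHalf(makeGrid(100));
            int black = 0;
            for (int y = 0; y < half.getHeight(); y++)
                for (int x = 0; x < half.getWidth(); x++)
                    if ((half.getRGB(x, y) & 0xFFFFFF) == 0) black++;
            // Every grid line fell on an odd row/column, so the 50% nearest-neighbor
            // result contains no black pixels at all - the grid has vanished.
            System.out.println("black pixels after 50% NN downsample: " + black);
        }
    }
    ```

    A bilinear downsample of the same grid would instead blend each line into its neighbors, producing gray pixels everywhere rather than dropping the lines - less sharp, but nothing disappears.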

  • How can I merge two TIFF images in one...?

    I need some help please, I am looking for a way to "resize" black & white single TIFF images.
    The process I need to do is like cutting out a small image and pasting it over a new blank letter-size image (at 300 dpi), like a template.
    Or better yet, is there a way to do something like this...?
    Open image...
    image.*width* = 2550;
    image.*height* = 3300;
    image.save();
    Some APIs and topics on the internet do talk about resizing, but the final images get stretched or shrunk, and I need them not to do so at all.
    Also, I do not need to display the images, only to get the TIFF images processed and saved back to a file.
    How can I do this with Java and JAI? Unfortunately I am almost new to this and I don't know how difficult it might be to deal with images.

    If 2550 x 3300 isn't the original aspect ratio of the image, then the image is going to look stretched or shrunken in at least one dimension. There is no way around that. It would be like resizing a 2 pixel by 2 pixel image into a 3 pixel by 6 pixel image. The image would look like its height shrunk or its width stretched. Had I resized it to 3 pixels by 3 pixels or 6 pixels by 6 pixels, though, then it wouldn't look shrunken or stretched.
    Open image...
    image.*width* = 2550;
    image.*height* = 3300;
    image.save();
    *1)* To open a TIFF image you can use the javax.imageio.ImageIO class. It has these static methods:
    read(File input)
    read(ImageInputStream stream)
    read(InputStream input)
    read(URL input)
    You can use whichever method you want. But first you need to install [JAI-ImageIO|https://jai-imageio.dev.java.net/binary-builds.html]. The default ImageReaders that plug themselves into the ImageIO package are BMP, PNG, GIF, and JPEG. JAI-ImageIO will add TIFF and a few other formats.
    The downside is that if clients want to use your program on their machine, then they too will need to install JAI-ImageIO to read the TIFFs. To get around this, you can go to your Java/jdk1.x.x_xx/jre/lib/ext/ folder and copy the jai_imageio.jar file (after you've installed JAI-ImageIO). You can also obtain this jar from any one of the zip files of the [daily builds|https://jai-imageio.dev.java.net/binary-builds.html#Daily_builds]. If you add this jar to your program's classpath and package it together with your program, then clients won't need to install JAI-ImageIO and you'll still be able to read TIFFs. The downside of simply adding the jar to the classpath is that you won't be able to take advantage of the natively accelerated JPEG reader that comes with installing JAI-ImageIO (instead, ImageIO will use the default one).
    *2)* Once you've installed [JAI-ImageIO|https://jai-imageio.dev.java.net/binary-builds.html] and used ImageIO.read(...), you'll have a BufferedImage. To resize it you can do the following
    BufferedImage newImage = new BufferedImage(2550,3300,BufferedImage.TYPE_BYTE_BINARY);
    Graphics2D g = newImage.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(oldImage,0,0,2550,3300,null);
    g.dispose();
    Here, I simply drew the old image (the one returned by ImageIO.read(...)) onto a new BufferedImage object of the appropriate size. Because you said they were black and white TIFFs, I used BufferedImage.TYPE_BYTE_BINARY, which is a black and white image. If you decide to use one of the BufferedImage types that support color, then a 2550x3300 image would require at least 25 megabytes to hold in memory. On the other hand, a binary image of that size will only take up about one meg.
    I specified on the graphics object that I wanted bilinear interpolation when scaling. The default is nearest-neighbor interpolation, which, while fast, doesn't look very good. Bilinear offers pretty good results scaling both up and down at fast speeds. Bicubic interpolation is the next step up. If you find the resized image to be subpar, then come back and post. There are a couple of other ways to resize an image.
    Note, however, if 2550 x 3300 is not the same aspect ratio as that of the TIFF image you loaded, then the resized image will look shrunken or stretched along one dimension. There is absolutely no way around this, no matter what resizing technique you use. You'll need an image whose original dimensions are in a 2550/3300 = .772 ratio if you want the resized image to not look stretched (you can crop the opened image if you want).
    *3)* Now we save the "newImage" with the same class we read images with: ImageIO. It has these static methods:
    write(RenderedImage im, String formatName, File output)
    write(RenderedImage im, String formatName, ImageOutputStream output)
    write(RenderedImage im, String formatName, OutputStream output)
    You'll supply the resized BufferedImage as the first parameter, "tiff" as the second parameter, and an appropriate output for the third parameter. It's pretty much a one-line statement to read or write an image. All in all, the whole thing is about 7 lines of code. Not bad at all.
    Now as for the 300 dpi thing, there is a way to set the dpi in the Image's metadata. I'm pretty good at reading an image's metadata, but I've never really tried writing out my own metadata. I know you can set the dpi, and I have a somewhat vague idea how it might be done, but it's not something I've tried before. I think I'll look more into it.
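    For what the question actually describes - pasting the small image onto a blank letter-size page rather than stretching it to fill the page - the same drawImage call works without the scaling arguments. A minimal sketch, assuming a 2550x3300 canvas (the method name placeOnLetterCanvas is mine, not from any API):

    ```java
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class LetterCanvas {
        // Place an image on a blank 2550x3300 (letter size at 300 dpi) canvas
        // at the given offset, without any scaling.
        static BufferedImage placeOnLetterCanvas(BufferedImage src, int x, int y) {
            BufferedImage canvas = new BufferedImage(2550, 3300, BufferedImage.TYPE_BYTE_BINARY);
            Graphics2D g = canvas.createGraphics();
            g.setColor(Color.WHITE);                          // start from a blank white page
            g.fillRect(0, 0, canvas.getWidth(), canvas.getHeight());
            g.drawImage(src, x, y, null);                     // 4-arg drawImage: no scaling at all
            g.dispose();
            return canvas;
        }
    }
    ```

    Because the source image is drawn at its own size, nothing gets stretched or shrunk; the image is simply composited onto the template-sized page, which can then be written out with ImageIO.write as above.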

  • How to create a grid based gradient in Illustrator

    I created this randomly degenerating "bitmap" gradient by hand in Illustrator. I want to replicate the effect to fill different sized spaces, but not have to do it by hand every time. Any ideas?

    Here's one way:
    1. Create a 20-pixel tall grayscale file
    2. Select the Gradient tool and set the mode to Dissolve
    3. Drag a black-to-transparent gradient across the canvas, repeating as necessary to get the density you want
    4. Enlarge 2000% or so using Nearest Neighbor interpolation
    5. Save the file, place it in Illustrator, and use Image Trace on it

  • Printing big Image throws OutOfMemoryError

    Hello,
    I am new to "Java Printing"; this is my first attempt to print something in Java. I am trying to print several JTextPanes and a big PNG image as a background for them. Sometimes everything works fine, but sometimes it throws java.lang.OutOfMemoryError; the full message is: Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space. I use JDK 1.5_06.
    I use the following code to print Image and JTextPanes:
    PrinterJob printerJob = PrinterJob.getPrinterJob();
    printerJob.setPrintService(servicesName);
    PrintRequestAttributeSet pras = new HashPrintRequestAttributeSet();
    pras.add(new PrinterResolution(200, 200, ResolutionSyntax.DPI));
    PageFormat pf = printerJob.defaultPage();
    Paper paper = pf.getPaper();
    paper.setSize(1654D,2339D);//A4 page size with 200 DPI
    paper.setImageableArea(0D,0D,1654D,2339D);
    pf.setPaper(paper);
    printerJob.setPrintable(documentPanel, pf);
    printerJob.print(pras);
    documentPanel is an instance of the class DocumentPanel:
    /* Panel that contains my JTextPanes */
    public class DocumentPanel extends JPanel implements Printable {
        /* overridden method print */
        public int print(Graphics g, PageFormat pf, int index) throws PrinterException {
            if (index > 0) { /* We have only one page, and 'page' is zero-based */
                return NO_SUCH_PAGE;
            }
            Graphics2D g2d = (Graphics2D) g;
            g2d.translate(leftMargin, topMargin);
            // draw the background image
            g2d.drawImage(iipmt().getImage(), 0, 0, null);
            g2d.scale(1.667D, 1.667D); /* this is to make the child JTextPanes bigger.
                As I see after printing, this line doesn't affect the size of the image drawn above. */
            /* Now print all children - the JTextPanes - without the parent */
            this.printChildren(g2d);
            /* tell the caller that this page is part of the printed document */
            return PAGE_EXISTS;
        }

        static ImageIcon iipmt() {
            return new ImageIcon("image.png");
        }
    }
    If I change the image to a smaller one and use 72 DPI instead of 200, my program always works fine. Also, it seems that extending the Java heap space with -Xmx256m can solve this problem (I don't know exactly, because the error doesn't always occur and I can't determine the specific conditions that trigger it).
    I don't want to extend the heap space, and I think that shouldn't be necessary for printing images, so I would like to know: is there a better way to print images mixed with Swing components? Or what mistakes did I make?

    A printer setting of 600 dpi doesn't mean 600 pixels per inch. It's a relative quantity of how much ink you want the printer to use to faithfully reproduce the pixels on the page. So for example, you start to achieve photo quality around 250 pixels per inch, but would need to set the printer to 1200+ dpi if you ever hope for it to print out correctly. Trying to achieve 600 pixels per inch would be suicide for your printer.
    That being said, you're not actually doing anything with this dpi or ppi stuff. In your code you're ultimately just drawing the image, scaled to the page's size using nearest neighbor interpolation, and printing it at 72 pixels per inch. In the code you commented out, you have the interpolation fine but miss the mark on the scaling. Specifically, you scale the graphics correctly, but then two lines later you set the transform to something else.
    Try this:
    double scale = 72/...;  //be reasonable, 600 won't work, 72 pixels-per-inch may be just fine
    Graphics2D g2d = (Graphics2D) printerGraphics;
    g2d.scale(scale,scale);
    g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                         RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    g2d.drawImage(image,xMargin,yMargin,
            (int) Math.floor(pageFormat.getWidth()/scale),
            (int) Math.floor(pageFormat.getHeight()/scale), null);
    This should (in theory) scale the image to the page's size using bicubic interpolation and print it at whatever pixels-per-inch you specify. The crappy output you were getting could have been the nearest-neighbor interpolation, the 72 ppi, or some combination of both.
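    One detail the snippet leaves to the reader is picking a single scale factor that fills the page without distorting the image. A small helper sketch (the names fitScale and scaleToFit are mine, not part of the printing API - the idea is just to take the smaller of the two axis ratios so the aspect ratio is preserved):

    ```java
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;

    public class FitToPage {
        // Largest uniform scale factor that fits an image inside a page
        // while preserving its aspect ratio (so nothing looks stretched).
        static double fitScale(double imgW, double imgH, double pageW, double pageH) {
            return Math.min(pageW / imgW, pageH / imgH);
        }

        // Render src at the fitted size using bicubic interpolation.
        static BufferedImage scaleToFit(BufferedImage src, int pageW, int pageH) {
            double s = fitScale(src.getWidth(), src.getHeight(), pageW, pageH);
            int w = (int) Math.floor(src.getWidth() * s);
            int h = (int) Math.floor(src.getHeight() * s);
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                               RenderingHints.VALUE_INTERPOLATION_BICUBIC);
            g.drawImage(src, 0, 0, w, h, null);   // scaled draw, bicubic
            g.dispose();
            return dst;
        }
    }
    ```

    Pre-scaling the image once like this, instead of scaling inside print() on every page pass, also keeps the peak memory during printing more predictable.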

  • ImageIO: scaling and then saving to JPEG yields wrong colors

    Hi,
    when saving a scaled image to JPEG the colors are wrong (not just inverted) when viewing the file in an external viewer (e.g. Windows Image Viewer or The GIMP). Here's the example code:
    public class Main {
        public static void main(String[] args) {
            if (args.length < 2) {
                System.out.println("Usage: app [input] [output]");
                return;
            }
            try {
                BufferedImage image = ImageIO.read(new File(args[0]));
                AffineTransform xform = new AffineTransform();
                xform.scale(0.5, 0.5);
                AffineTransformOp op = new AffineTransformOp(xform, AffineTransformOp.TYPE_BILINEAR);
                BufferedImage out = op.filter(image, null);
                ImageIO.write(out, "jpg", new File(args[1]));
                /* The following ImageViewer is a JComponent displaying
                   the buffered image - commented out for simplicity */
                ImageViewer imageViewer = new ImageViewer();
                imageViewer.setImage(ImageIO.read(new File(args[1])));
                imageViewer.setVisible(true);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
    Observations:
    * viewing this JPEG in an external viewer displays the colors wrong, blue becomes reddish, skin color becomes brown, blue becomes greenish etc.
    * when I skip the transformation and simply write the input 'image' instead the colors look perfect in an external viewer!
    BufferedImage image = ImageIO.read (new File(args[0]));
    ImageIO.write (image, "jpg", new File(args[1]));
    * when I do the scale transformation but store as PNG instead the image looks perfect in external viewers!
    BufferedImage out = op.filter(image,null);
    ImageIO.write(out, "png", new File(args[1]));
    * viewing the scaled JPEG image with The GIMP produces "more intense" (but still wrong) colors than when viewing with the Windows Image Viewer - I suspect that the JPEG doesn't produce plain RGB values when decompressed (another color space than sRGB? double instead of int values?)
    * Loading the saved image and displaying it in a JComponent shows the image fine:
    class ViewComponent extends JComponent {
        private Image image;

        protected void paintComponent(Graphics g) {
            if (image != null) {
                // image looks okay!
                g.drawImage(image, 0, 0, this);
            }
        }

        public void setImage(BufferedImage newImage) {
            image = newImage;
            if (image != null) {
                repaint();
            }
        }
    }
    * Note that I've tried several input image formats (PNG, JPEG) and made sure that they were stored as RGB (not CMYK or the like).
    * Someone else already mentioned that the RGB values read from a JPG image are wrong, but there was no answer in that thread - it might be connected with this problem: http://forum.java.sun.com/thread.jspa?forumID=20&threadID=612542
    * I tried the jdk1.5.0_01 and jdk1.5.0_04 on Windows XP.
    Any suggestions? Is this a bug in the ImageIO jpeg plugin? What am I doing wrong? Better try something like JAI or JIMI? I'd rather not...
    Thanks a lot! Oliver
    p.s. also posted to comp.lang.java.programmers today...

    Try using TYPE_NEAREST_NEIGHBOR
    rather than
    TYPE_BILINEAR
    I was a bit quick in saying that this doesn't work - I had extended my example code, which made it fail (see later), but here's a working version first (note: I use the identity transform only for simplicity, and to show that it really doesn't matter whether I scale, rotate or shear):
    // works, but only for TYPE_NEAREST_NEIGHBOR interpolation
    Image image = ImageIO.read (new File(args[0]));
    AffineTransform xform = new AffineTransform();
    AffineTransformOp op = new AffineTransformOp (xform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    BufferedImage out = op.filter(image, null);
    ImageIO.write(out, "jpg", new File(args[1]));
    The problem: we restrict ourselves to nearest-neighbor interpolation, which is especially visible when scaling up (or rotating, or shearing).
    Now when I change the following, it doesn't work anymore; the stored image then has type=0 instead of 5 (which obviously doesn't work with external viewers):
    // doesn't work, since an extra alpha value is introduced, even
    // for TYPE_NEAREST_NEIGHBOR
    BufferedImage out = op.filter(image, op.createCompatibleDestImage(image, ColorModel.getRGBdefault()));
    Intuitively I would say that's exactly what I want: an RGB image with data type int (according to the JavaDocs). That it has an alpha channel is a plus - or is it?
    I think this extra alpha value is the root of all evil: JPEG doesn't support alpha, but I guess ImageIO still mixes this alpha value somehow into the JPEG data stream - for ImageIO that's not a problem, the JPEG data is decompressed correctly (even though the alpha values have become meaningless then, haven't checked that), but other JPEG viewers can't manage this ARGB format.
    This also explains why writing to PNG worked, since PNG supports alpha channels.
    And obviously an AffineTransformOp silently changes the image format from RGB to ARGB, but only for TYPE_BILINEAR and TYPE_CUBIC, not for TYPE_NEAREST_NEIGHBOR! Even though I can imagine why this is done (it's more efficient to calculate with 32-bit ints than with 24-bit packed values, hence the extra alpha byte...), I would at least expect the JPEG writer to ignore this extra alpha value - at least with the default settings and unless told otherwise with extra parameters! Now my code gets unnecessarily complicated.
    So how do I scale an image using bilinear (or even bicubic) interpolation, so that it gets displayed properly in external viewers? I found the following code works:
    // works, but I need an extra buffer and draw operation - UGLY
    // and UNNECESSARILY COMPLICATED (in my view)
    BufferedImage image = ImageIO.read (new File(args[0]));
    AffineTransform xform = new AffineTransform();
    AffineTransformOp op = new AffineTransformOp (xform, AffineTransformOp.TYPE_BILINEAR);
    BufferedImage out = op.filter(image, null);
    // create yet another buffer with the proper RGB pixel structure
    // (no alpha), draw transformed image 'out' into this buffer 'out2'          
    BufferedImage out2 = new BufferedImage (out.getWidth(), out.getHeight(),
                                                             BufferedImage.TYPE_INT_RGB);
    Graphics2D g = out2.createGraphics();
    g.drawRenderedImage(out, null);
    ImageIO.write(out2, "jpg", new File(args[1]));
    This is already way more complicated than the initial code - let alone that I need to create an extra image buffer, just to get rid of the alpha channel, and do an extra drawing.
    I've also tried to supply a BufferedImage as 2nd argument to the filter op to avoid the above:
    ICC_ColorSpace iccColorSpace = new ICC_ColorSpace(ICC_Profile.getInstance(ColorSpace.CS_sRGB));
    BufferedImage out = op.filter(image, op.createCompatibleDestImage(image,
        new DirectColorModel(iccColorSpace, 24, 0xff0000, 0x00ff00, 0x0000ff, 0x000000, false, DataBuffer.TYPE_INT)));
    But then the filter operation failed ("Unable to transform src image") and I was beaten by the sheer possibilities of color spaces and ICC profiles, so I quickly gave up... and hey, I "just" want to load, scale and save to JPG!
    The last option was getting a JPEG writer and its ImageWriteParam, trying to set the output image type:
    iwparam.setDestinationType(ImageTypeSpecifier.createFromBufferedImageType(BufferedImage.TYPE_INT_RGB));
    But again I failed; the resulting image type was still type=0 (and not 5). So I gave up, realising that the Java API is still overly complex and "too design-patterned" for simple tasks :(
    If anyone has still a good idea how to get rid of the alpha channel when writing to JPEG in a simple way (apart from creating an extra buffer as above, which works for me now...)... you're very welcome to enlighten me ;)
    Cheers, Oliver
    p.s. Jim, I will assign you the "DukeDollars", since your hint was somewhat correct :)
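    Oliver's working fix - redrawing into a TYPE_INT_RGB buffer before handing the image to the JPEG writer - can be wrapped in a small helper so it doesn't clutter the main code. A sketch (the helper name toOpaqueRGB is mine):

    ```java
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class StripAlpha {
        // Redraw any image (ARGB or otherwise) into a plain TYPE_INT_RGB buffer,
        // so the JPEG writer never sees an alpha channel.
        static BufferedImage toOpaqueRGB(BufferedImage src) {
            if (src.getType() == BufferedImage.TYPE_INT_RGB) return src;  // already fine
            BufferedImage rgb = new BufferedImage(src.getWidth(), src.getHeight(),
                                                  BufferedImage.TYPE_INT_RGB);
            Graphics2D g = rgb.createGraphics();
            g.drawImage(src, 0, 0, null);  // composites away the alpha channel
            g.dispose();
            return rgb;
        }
    }
    ```

    Calling ImageIO.write(toOpaqueRGB(out), "jpg", file) after any AffineTransformOp then gives external viewers a plain RGB JPEG, regardless of which interpolation type was used.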

  • Problem drawing tiled image

    I have a tiled buffered image. I have implemented the caching and tiling myself, i.e. the complete image (which can be as big as 100+ MB) is tiled into 512x512 tiles and I keep them in a cache implemented as a HashMap. The cache size is adjusted based on the available memory.
    My logic is that when the user scrolls the image, I figure out the tiles that need to be shown and then see if I can find them in the cache; in case of a cache miss I create the tile in memory (using createCompatibleImage()). Once the tile is created I stuff it in my cache. After making sure the required tiles are available, I then simply use drawImage(tile,x,y,null) to draw the tiles inside the paintComponent() method. My problem is that the tiles get drawn erratically, i.e. if the viewport requires six tiles to be drawn, then every so often a few of the tiles do not get drawn. I have made sure that my program does call drawImage() six times, once for each tile - but it seems that despite the drawImage calls, the actual drawing has a mind of its own.
    I do not suspect my tile creation or caching code, since when some of the tiles are not shown properly, all I have to do is move the mouse over the undrawn area and the tiles appear immediately.
    My code is spread all over, so it is difficult to reproduce all the relevant code here; I will give some code snippets:
    // Code for creating a new tile
    public BufferedImage createTile() {
        GraphicsConfiguration gc = Utility.getDefaultConfiguration();
        BufferedImage overlayImage = gc.createCompatibleImage(width, height);
        int ii = 0;
        int jj = 0;
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                int rgb = ....; // the rgb is calculated based on some specific logic
                overlayImage.setRGB(ii, jj, rgb);
                jj++;
            }
            ii++;
            jj = 0;
        }
        // After creating the image I then resize it to handle zoom in/zoom out
        // using nearest-neighbor interpolation. I don't suspect the resize code,
        // so I have not reproduced it here.
        return resizeImage(overlayImage, newWidth, newHeight);
    }
    ====================================================
    // Extract of some of my paint code is given below.
    // I am not calling super.paintComponent() because I am redrawing the
    // whole screen with my tiled image.
    public void paintComponent(Graphics g) {
        Graphics2D g2d = (Graphics2D) g;
        Tile[] tiles = getChosenTiles();
        for (int i = 0; i < tiles.length; i++) {
            g2d.drawImage(tiles[i].getImage(), tiles[i].x, tiles[i].y, null);
        }
    }

    Since nobody replied to the post, I had no option but to dig into the problem myself. The good thing is that I managed to find the solution.
    As it turned out my logic was as below:
    Step 1: find out the tiles that need to be shown on the viewport.
    Step 2: obtain these tiles from the cache
    Step 2a: If tiles not found in cache then create the tiles in memory.
    Step 3: for (int i = 0; i < tiles.length; i++) {
                g2d.drawImage(tiles[i].getImage(), null, tiles[i].x, tiles[i].y);
            }
    The problem was that, for some reason, the Java graphics drawing could not keep up with the for loop. What I did was move the drawing logic inside Step 2, i.e. I would get a tile and then draw it immediately after getting it. In other words, the logic was changed to:
    Step 1: find out the tiles that need to be shown on the viewport
    Step 2:
                 for (int i = 0; i < tiles.length; i++) {
                     BufferedImage image = tiles[i].getImage();
                     if (image == null) {
                         image = createTileImage(tiles[i]);
                         tiles[i].setImage(image);
                     }
                     g2d.drawImage(tiles[i].getImage(), null, tiles[i].x, tiles[i].y);
                 }
    This solved my problem.
    -km
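    The fetch-or-create cache logic in Step 2 can also be packaged as a small LRU cache built on a LinkedHashMap in access order, so eviction of the least-recently-used tile comes almost for free once memory is tight. A sketch along those lines (the class and method names are mine, not from the original code):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TileCache<K, V> {
        private final int maxTiles;
        private final LinkedHashMap<K, V> map;

        public TileCache(int maxTiles) {
            this.maxTiles = maxTiles;
            // accessOrder = true gives LRU iteration order
            this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    // Evict the least-recently-used tile once the cache is full
                    return size() > TileCache.this.maxTiles;
                }
            };
        }

        // Return the cached tile, or create and cache it on a miss -
        // the fetch-or-create step from the fixed paint logic above.
        public V getOrCreate(K key, java.util.function.Function<K, V> creator) {
            V tile = map.get(key);
            if (tile == null) {
                tile = creator.apply(key);
                map.put(key, tile);
            }
            return tile;
        }

        public int size() { return map.size(); }
    }
    ```

    In the paint loop, each tile would then be fetched with something like cache.getOrCreate(tileKey, k -> createTileImage(k)) and drawn immediately, matching the fix described above.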
