Negative coordinates

I am building an application.
I use a map of Holland and want the center of Holland to have the coordinates (0,0) and the upper left corner (-x,-y), where x and y are variables for the coordinates.
Can somebody please help me?

Well, you can write a small method that translates your "Holland" coordinates into Java coordinates and vice versa.
Something like:
private static final int X_OFFSET = 500;
private static final int Y_OFFSET = 500;

public Point toJavaCoord(Point hollandCoord) {
   Point javaCoord = new Point(hollandCoord);
   javaCoord.translate(X_OFFSET, Y_OFFSET);
   return javaCoord;
}

public Point toHollandCoord(Point javaCoord) {
   Point hollandCoord = new Point(javaCoord);
   hollandCoord.translate(-X_OFFSET, -Y_OFFSET);
   return hollandCoord;
}
// and use it like this
Point pHolland  = new Point(0,0);
Point pJava     = this.toJavaCoord(pHolland);
Point pHolland2 = this.toHollandCoord(pJava);
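One alternative, if the conversion is only needed for painting (a minimal sketch under the assumption that the map is drawn in a JPanel; the class name is illustrative): translate the Graphics2D origin to the centre of the component once per paint and then draw directly in "Holland" coordinates, negative values included.

import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

public class HollandMapPanel extends JPanel {
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g.create();
        // Put (0,0) in the middle of the panel; the upper left corner becomes (-w/2, -h/2).
        g2.translate(getWidth() / 2.0, getHeight() / 2.0);
        g2.drawLine(-10, 0, 10, 0);   // a small cross marking the Holland origin
        g2.drawLine(0, -10, 0, 10);
        g2.dispose();
    }
}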

Similar Messages

  • Negative coordinates in Graphics2d

    Hi,
    I have to apply some transformations to a BufferedImage that I draw on a JPanel; to do this, I use a matrix in which I store the scale, translation and rotation factors (similar to what an AffineTransform does).
    When I draw a point on that image, I would like to know whether the point belongs to the image or is out of bounds and, if necessary, translate it.
    So far, I have only tested whether the point's coordinates fall within the width/height of the image, but this has turned out to be unsuccessful because I've realized that there are also negative coordinates.
    Any advice is welcome! Thanks

    import java.awt.*;
    import java.awt.geom.*;
    import javax.swing.*;

    public class XPoints extends JPanel {
        Rectangle origRect = new Rectangle(50, 100, 200, 150);
        Shape xShape;
        AffineTransform at;
        Point[] pts;

        public XPoints() {
            int[][] coords = {
                {  75, 120, 185, 210 },
                { 190, 155, 125, 225 }
            };
            pts = new Point[coords[0].length];
            for(int i = 0; i < coords[0].length; i++) {
                pts[i] = new Point(coords[0][i], coords[1][i]);
            }
        }

        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D)g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON);
            if(at == null) initTransform();
            // Draw original primitives.
            g2.setPaint(Color.blue);
            g2.draw(origRect);
            for(int i = 0; i < pts.length; i++) {
                g2.fill(new Ellipse2D.Double(pts[i].x-1.5, pts[i].y-1.5, 4, 4));
            }
            // Draw transformed primitives.
            g2.setPaint(Color.red);
            g2.draw(xShape);
            Point2D.Double p = new Point2D.Double();
            for(int i = 0; i < pts.length; i++) {
                at.transform(pts[i], p);
                g2.fill(new Ellipse2D.Double(p.x-1.5, p.y-1.5, 4, 4));
            }
        }

        private void initTransform() {
            double x = 125;
            double y = 100;
            double theta = Math.toRadians(45);
            double cx = origRect.getCenterX();
            double cy = origRect.getCenterY();
            at = AffineTransform.getTranslateInstance(x, y);
            at.rotate(theta, cx, cy);
            xShape = at.createTransformedShape(origRect);
        }

        public Dimension getPreferredSize() {
            return new Dimension(400, 400);
        }

        public static void main(String[] args) {
            JFrame f = new JFrame();
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.add(new XPoints());
            f.pack();
            f.setLocation(100, 100);
            f.setVisible(true);
        }
    }
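    For the original bounds question, one hedged option (a minimal sketch, not the poster's actual code; the class, method and parameter names here are illustrative): rather than comparing the transformed, possibly negative device coordinates against the image width/height, map the point back into image space with AffineTransform.inverseTransform and test it against the untransformed bounds.

    import java.awt.Rectangle;
    import java.awt.geom.AffineTransform;
    import java.awt.geom.NoninvertibleTransformException;
    import java.awt.geom.Point2D;

    public class ImageBoundsCheck {
        /** True if the device-space point p falls on the image drawn with transform at. */
        static boolean isOnImage(Point2D p, Rectangle imageBounds, AffineTransform at) {
            try {
                // Undo the scale/translation/rotation, then a plain rectangle test
                // suffices, negative device coordinates included.
                Point2D inImageSpace = at.inverseTransform(p, null);
                return imageBounds.contains(inImageSpace);
            } catch (NoninvertibleTransformException e) {
                return false; // degenerate transform, e.g. zero scale
            }
        }
    }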

  • Sat Nav - Navigate To Coordinates ?

    Hi, this is my first post on the forum, so be nice!
    I work for Coca-Cola Enterprises and I'm trying to find a sat nav app for the BB 8820 that allows me to enter coordinates that I can then navigate to.
    This is because our vehicle tracking service tracks by coordinates, and auditors need to be able to enter the coordinates and navigate to the vehicle. Ideally in decimal degrees format.
    Paying for the software is not a problem, provided I can get a free trial first!
    I've tried nav4all, which is free and allows coordinate input, but it doesn't seem to allow negative coordinates! Which is stupid, because half the world has negative coordinates.
    Any ideas on nav4all or any other software that's available?
    Thanks,
    Adam Cooper

    Please see this link :
    http://www.smart2go.com/en/help/faq/#jump55
    You can find Nokia's contact details here :
    http://www.nokia.com/A4126575
    Regards,
    Edward

  • Dual monitor setup - need laptop on the left

    Hi All
    I'm running a dual monitor setup - specifically an HP G72 laptop and a widescreen monitor. I use the monitor as the default, and sometimes I want to use the laptop as a secondary screen for watching TV etc. My desktop is Enlightenment; I just switched to it from Gnome. My issue is that I want the laptop to be on the left of the monitor - that's how it is physically positioned - but when I switch the laptop screen on, all my applications shift over to the laptop screen. I've written two scripts, attached to hotkeys, one to turn the laptop off and the other to turn the laptop on. The script to turn the laptop on is as follows:
    xrandr --output LVDS1 --mode 1600x900 --pos 1600x0 --rotate normal
    --output VGA1 --primary --mode 1920x1080 --pos 0x0 --rotate normal --left-of LVDS1
    This works fine, applications stay where they should be, except that the physical position is incorrect; VGA1 should be to the right of LVDS1 like this:
    xrandr --output LVDS1 --mode 1600x900 --pos 1600x0 --rotate normal
    --output VGA1 --primary --mode 1920x1080 --pos 0x0 --rotate normal --right-of LVDS1
    However, when I use the second script all of the applications shift to the laptop, but at least the physical position is correct. The issue appears to be something to do with laptops, almost as though when it's a laptop the default monitor is forced to be on the left. Has anybody got any suggestions on how to get around this issue?
    Thanks
    Richard

    This is due to your window manager treating the two monitors as one larger 'desktop' area; combined with your specification of pos 0x0, you are telling it to do this.
    If you have only the VGA output on, the top left corner of the VGA monitor is 0x0.  If you open a window and it is placed 10 pixels in and down from the top left, it is at coordinates 10x10 in the X session.  Then you turn on the laptop monitor and specify that the top left corner of the laptop monitor should now be considered 0x0 - but nothing is done to move the windows, so the window with coordinates 10x10 is now near the top left of the laptop monitor.
    I see 3 possible ways of achieving what you want:
    1) Swap the "pos" parameters in your xrandr commands so that the monitor on the left is always at 0x0 and the one on the right is always at 1920x0 (or whatever).  This will leave a large area of available X desktop that is not visible - I don't know how your window manager will treat this.  It might try to map windows onto the laptop screen space even though that screen is not on.  I know openbox is easy to configure with respect to where it tries to place windows.
    2) Keep the VGA monitor always at 0x0, but place the laptop monitor at a negative offset when you want it on.  I don't actually know if X allows negative coordinates like that, but it's worth a shot.
    3) Configure your window manager, or use a different window manager, so that it is aware of the two screens and treats them reasonably (e.g. moves windows when the desktop area is changed).
    Last edited by Trilby (2014-07-12 12:15:42)

  • Dual monitors - popup locations

    I have RoboHelp HTML 2002 Build 949. I have a secondary
    monitor that is on the left of the primary monitor so the x
    coordinates are negative. When I open a help window on the
    secondary monitor, and have text popups setup to appear when a
    button or a piece of text is clicked, the popups appear on the
    primary monitor with '0' x-coordinate and correct y-coordinate.
    If the secondary monitor were to be on the right side of the
    primary monitor, the popup appears correctly. If the help window is
    on the primary monitor, the popup appears correctly. So my
    conclusion is that RoboHelp HTML 2002 is not handling the negative
    coordinates situation (which arises in the case of dual monitors
    when the primary monitor is on the right side) properly. Or there
    is something that I can do programmatically.
    Is there anything I can do to fix this problem?
    And/or is it true that RoboHelp HTML 2002 is not handling the
    negative coordinates situation?
    Thanks!
    Rajesh

    Rajesh,
    I use RoboHelp 2000, and the application doesn't handle
    dual monitors well. For instance, I can't get it to use the second
    monitor as a default position. I think the application was written
    before dual monitors became commonplace. I think your only option
    would be to edit the BSSCDHTM.JS file - functions like
    BSSCPopup_ResizeAfterLoad. Unfortunately, I can't tell you how to
    do this because I'm not very fluent in JavaScript, especially when
    it comes to the built-in functions for getting existing document
    properties.
    John

  • How can I change where an applet focuses its view

    I'm writing a scrolling marquee applet, and in order to scroll up out of sight or scroll left out of sight, I have to be able to draw my strings with negative x and y values, which seem not to be allowed.
    I figure if I draw my marquee with the upper left corner located at around (300, 50) instead of (0, 0), I won't have to worry about drawing at negative coordinates.
    Is there any way to set the applet so that when it opens in a browser, the upper left hand corner of the visible applet begins at (300, 50) instead of (0, 0) by default?
    Charlene English

    When the text is at 0,0... it is still in view of the user. 0,0 is, so far, the extreme upper left hand corner of the scrolling marquee. If I wanted the text to scroll off screen to the left, I would have to start assigning negative values to the x component of drawString. This is the reason I want to offset the whole marquee by 300, 50. My marquee is 350 x 30 in size, so if I offset my marquee without being able to control the view of the applet, I'd have an applet on my page 650 x 80 in size, and the marquee would be in the lower right hand corner.
    What I'm shooting for is an applet that is 350 x 30 in size with its upper left hand corner located at (300, 50).
    -Charlene
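    For what it's worth, negative coordinates are not forbidden in drawString; anything drawn outside the component is simply clipped. A minimal hedged sketch of both options (class and field names are illustrative, and the scrolling/timer logic is omitted):

    import java.applet.Applet;
    import java.awt.Graphics;

    public class MarqueeSketch extends Applet {
        private int scrollX = 0; // decremented elsewhere; goes negative as the text scrolls off to the left

        public void paint(Graphics g) {
            // Option 1: draw at a negative x directly; the applet clips it at its edge.
            g.drawString("scrolling text", scrollX, 20);

            // Option 2: shift the whole coordinate system so "0,0" lands at (300, 50).
            g.translate(300, 50);
            g.drawString("offset marquee", 0, 20);
            g.translate(-300, -50); // undo the shift for any further painting
        }
    }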

  • What may cause jitter in a graph?

    A simple VI to convert a string into a dot matrix text display (XY graph) worked fine initially.  After seemingly minor cleanup in the block diagram, the text jitters back and forth during updates (see attached vi).  A subtle error is suspected. Help would be appreciated.
    Attachments:
    StrToDots.vi ‏36 KB

    Ah, the free property to reference bug ... oh well.
    Just adding a loop and a small wait seems to calm things down. I tried with 100 ms and didn't see any jitter then, but it could feel slightly sluggish if you're writing fast. At 33 ms (30 fps) or 40 ms (25 fps) it jittered some; at 50 ms (20 fps) it was rare for me.
    Some testing shows it's the XY graph resizing that's the issue; if you set it to a fixed size you won't get an issue, and if you don't change the minimum scale it won't be much of an issue. Look at this slight modification: instead of using negative coordinates I subtract the minimum, normalizing it to 0+ so I don't have to change the minimum scale. I haven't seen a single glitch this way.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • RePost: center movie clips dynamically, as3.0 ?

    I have a question that was somewhat answered, but I now have some problems implementing the solution.  I posted in August and have revisited the file only now to see if I can make the changes necessary to center my MCs dynamically as they are drawn on the stage at runtime.  They need to center on themselves rather than having their registration points in the top left corner, per display object.
    Here is the thread:
    http://forums.adobe.com/message/5760947#5760947
    Thanks in advance for any responses.
    -markerline

    I attempted to reply yesterday but the forums went under maintenance (as we all re-logged in afterwards).
    What I tried to post before the system went down was a single example, as you had mentioned, to remove the complexity of the rest of the app so this can be understood on its own and then implemented into your larger system.
    Centering via container in code is very simple as long as you can grab a hold of the shape in code as well.
    Here's a complete AS3 example of centering a single object inside a container. I just want you to paste this into a new AS3 doc so you can tell me you understand how it works. After that, the more complex multi-object container in a container approach comes in:
    e.g. 2 squares rotating:
    import flash.display.Shape;
    import flash.utils.Timer;
    import flash.events.TimerEvent;
    import flash.display.Sprite;
    // start rotation loop using a Timer to turn all objects
    var moveTimer:Timer = new Timer(10,0);
    // function to rotate objects
    moveTimer.addEventListener(TimerEvent.TIMER, handleTimer);
    // draw a rect shape
    var redrect:Shape = new Shape();
    redrect.graphics.lineStyle(3,0x0,1,true);
    redrect.graphics.beginFill(0xFF0000);
    redrect.graphics.drawRect(0,0,100,100);
    redrect.graphics.endFill();
    addChild(redrect);
    // position at x:150 / y:200 on stage
    redrect.x = 150;
    redrect.y = 200;
    // now draw a blue square but with a container while centering the shape
    // container
    var blueContainer:Sprite = new Sprite();
    addChild(blueContainer);
    // draw a rect shape
    var bluerect:Shape = new Shape();
    bluerect.graphics.lineStyle(3,0x0,1,true);
    bluerect.graphics.beginFill(0x0000FF);
    bluerect.graphics.drawRect(0,0,100,100);
    bluerect.graphics.endFill();
    blueContainer.addChild(bluerect);
    // position in center of container (subtract half width/height)
    //-------------centering code----------------
    bluerect.x -= bluerect.width / 2;
    bluerect.y -= bluerect.height / 2;
    // position container
    blueContainer.x = 400;
    blueContainer.y = 200;
    // start timer which invokes function below
    moveTimer.start();
    // rotate the red rect (upper left reg) and blue (objects centered);
    function handleTimer(e:TimerEvent):void
    {
              // just rotate both to see the registration
              redrect.rotation += 2;
              blueContainer.rotation += 2;
    }
    Now, I do understand that I can draw my bluerect with negative coordinates to achieve the same thing inside the shape (e.g. -50,-50,100,100), but the point here is containing a potentially complex object in a single container object so the entire outer contents can be measured and rotated from a single center point. That comes after this simple code is understood.

  • I3 bar: dual monitor, all workspace tabs on primary

    First off, i3 is awesome (just switched).
    Question:
    With named workspaces in a dual monitor setup, I have the i3 status bar set to display only on the external monitor, which works great, except for the fact that the only workspace tabs that display in the bar are those related to the external monitor.
    If I show the bar on the laptop monitor as well, of course the workspace tabs relevant to the laptop display there.
    So, is there a way to get all workspace tabs displaying in status bar on a single monitor?
    Also, I have not been able to find out how to reverse the alignment of status items and workspace tabs. I would prefer status items aligned left and workspace tabs right -- is it possible to pull this off with the new JSON protocol?
    Thanks for ideas

  • How does the jvm handle drawing images outside an applet window?

    I have a question about an applet I want to optimize.
    Essentially the applet allows the user to scroll through a large map while viewing only a small portion of the map through the applet window.
    The map has a large array of circles with coordinates on the map that are drawn in their appropriate place in the applet window when the user scrolls the map.
    As it stands, the paint method loops through every circle in the array to draw them, but most of them are not visible in the window and end up getting drawn at a negative coordinate or a very large coordinate.
    My question is: are these circles that are not seen in the applet window, but for which drawCircle is still called in the paint method, putting a strain on the computer's graphics card?
    Would it be better if I looped through all the circles and only called drawCircle on circles whose coordinates would be visible? Or would the extra step of checking each circle's coordinates before drawing not be worth it?
    Any help would be appreciated.

    If the amount of stuff being rendered outside of the clip rectangle is small then it's usually not worth it to attempt to figure out what's being clipped and not draw it. If the clip rectangle represents only a small portion of the entire canvas that can be drawn, however, it's usually worthwhile to put in some logic to only draw what is necessary.
    Take JTextArea, for example. It could be displaying a text document thousands of lines long. Instead of rendering every line of text on every repaint operation, it contains the following logic:
    1. Get the current clip bounds (i.e. what part of the text area is "dirty" and must be repainted).
    2. Figure out what lines are displayed in the clip bounds. Some lines may be only partially visible (a line "halfway" scrolled down), but they need to be repainted too. If word wrap is disabled this is a very quick and cheap operation, but if word wrap is enabled, it's a little more complex.
    3. Only repaint those lines.
    This way, in the best-case scenario JTextArea only repaints a single line (the line the user is typing in). Worst case, it repaints the number of lines that can fit on the screen. But it never repaints too much.
    Anyway, I guess my advice would be: If it's cheap and easy to determine if something is out of the clip bounds, do it. If it's difficult to determine, do it only if you have a noticeable performance issue in your rendering code.
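    A minimal sketch of that clip test applied to the circle case above (hedged: the class, method and parameter names are illustrative, and it assumes drawCircle boils down to something like fillOval with a fixed radius):

    import java.awt.Graphics;
    import java.awt.Point;
    import java.awt.Rectangle;

    public class CircleClipSketch {
        /** Draws only the circles whose bounding boxes touch the dirty region. */
        static void paintCircles(Graphics g, Point[] centers, int radius, int offsetX, int offsetY) {
            Rectangle clip = g.getClipBounds(); // may be null if no clip is set
            for (Point c : centers) {
                int x = c.x - offsetX - radius;  // map coordinate -> window coordinate
                int y = c.y - offsetY - radius;
                Rectangle bounds = new Rectangle(x, y, 2 * radius, 2 * radius);
                if (clip == null || clip.intersects(bounds)) {
                    g.fillOval(x, y, 2 * radius, 2 * radius);
                }
            }
        }
    }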

  • Mouse Scroll Wheel not working when iTunes is on secondary monitor

    Hi,
    This is a pretty convoluted problem, but I run two monitors (from the one RADEON video card) and it seems that the iTunes 7 window won't recognize input from my mouse's scroll wheel when the window is positioned on my "secondary" monitor.
    I know, it's weird, but it's the case.
    I guess this counts more as a bug report than anything else, but I'm hoping something can be done about this soon!
    M

    Good call. I can confirm that the scroll wheel works any time your mouse is at a positive coordinate location, and does not work at a negative coordinate location. It doesn't matter whether the window straddles the two monitors - only what the mouse coordinate is. This applies for left monitors and (not nearly as common) monitors on top.
    This goes to show that it's generally a bad idea to rewrite your own scrolling windows - you're bound to introduce bugs for some combination of input/output devices.

  • Visual representations of color spaces

    I copied these diagrams from the program called Color Space ...
    Can be downloaded from here: COULEUR.ORG
    I think that it draws spectral locus on xyY by just plotting matching x, y and Y values for every interval of monochromatic lights on the spectrum ... just like we draw the locus line in XYZ by plotting matching tristimulus values X, Y and Z.
    Because the underlying data for the most saturated colors human can see is CIE XYZ color matching functions, I think that we can draw the line of spectral locus in every model originated from CIE XYZ by just making necessary transformations to the XYZ tristimulus values for the intended model.
    3D representation of the entire human gamut is more tricky, I think. I will ask about it, after getting your confirmation about the spectrum locus.

    A set of three numbers in xyY or XYZ defines a color. 
    How does it look? How can it be reproduced?
    A good occasion to go back to the roots.
    The CIE color system is essentially based on the work of Hermann Graßmann,
    one of the greatest scientists in the 19th century [1].
    Mostly his four laws are quoted [2], but it boils down to this [3]:
    "The whole set of color stimuli constitutes a linear vector space,
    named tristimulus space."
    Colors behave like vectors in a three-dimensional space. A color is
    characterized by three numbers. A fourth number would be redundant
    (linearly dependent). A color is not characterized by just one spectrum. For
    one color there exists an arbitrary number of different spectra, called metamers.
    The explanation of Graßmann's laws via spectra [4] is wrong.
    Nobody knows what a color 'really' is. According to the philosopher Kant,
    we don't know what any object in space 'really' is. We have just an
    impression from our senses, possibly enhanced by instruments.
    Colors are described by comparison with reference colors. For length and
    weight we need references as well: the meter, the kilogram.
    Reference colors are (for instance) the three CIE primaries, spectral colors
    with well defined wavelengths (it's not important that the first experiments
    were performed with a different set). Let's say R,G,B.
    According to Graßmann one needs exactly three 'primaries'. This should
    have ended a long dispute: three or four?, but it didn't.
    Now we can use these spectral colors as base vectors of a coordinate system.
    According to Kant, humans have an a-priori idea of space (without any proof),
    where we can position objects. By intuition the axes are orthogonal.
    Any other color can be described by a linear combination of the three primaries,
    according to Graßmann exactly in a vector space. Color matching means:
    find the three weights for the primaries to match a given (numerically unknown)
    color.
    Unfortunately, one needs for the matching of some colors negative weight factors,
    which is possible by the concept of vector space, but somewhat disturbing for
    real color mixing.
    In RGB we can describe a new vector space by base vectors X,Y,Z, which form
    a non-orthogonal coordinate system. All colors have non-negative coordinates X,Y,Z
    and this construct has the funny feature that luminance is identical with Y,
    whereas the 'imaginary primaries' X and Z don't have luminance. Mathematically
    this is not a problem.
    The relation between the two spaces is given by a matrix Cxr and its Inverse:
    X=(X,Y,Z)'  (column vectors)
    R=(R,G,B)'
    X=Cxr R
    R=Crx X =Cxr^-1 X
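    As a concrete, hedged illustration of such a matrix relation in code (using the familiar linear-sRGB/D65 matrix as an example of a Cxr-style matrix; the discussion above uses the CIE RGB primaries, so treat the numbers only as an example):

    // X = M * R for linear (not gamma-encoded) sRGB under D65; the rows give X, Y, Z.
    public class SrgbToXyz {
        static final double[][] M = {
            { 0.4124, 0.3576, 0.1805 },
            { 0.2126, 0.7152, 0.0722 },
            { 0.0193, 0.1192, 0.9505 }
        };

        static double[] toXYZ(double r, double g, double b) { // r, g, b in 0..1
            return new double[] {
                M[0][0] * r + M[0][1] * g + M[0][2] * b,
                M[1][0] * r + M[1][1] * g + M[1][2] * b,
                M[2][0] * r + M[2][1] * g + M[2][2] * b
            };
        }
    }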
    Conventionally we draw XYZ as a Cartesian coordinate system (with orthogonal axes),
    and R,G,B is the set of non-orthogonal base vectors – the arrangement has been swapped.
    There is no natural law that says which of the coordinate systems has to be shown as Cartesian.
    XYZ is universal. Other RGB-systems with new primaries can be added, either as
    working spaces like sRGB, aRGB or pRGB (ProPhotoRGB, which uses two non-physical
    primaries, which doesn't surprise, because we got used to entirely non-physical primaries
    X,Y,Z),  or device RGB systems like monitor spaces. Each RGB system is related to XYZ
    using a matrix, thus we can transform as well from one RGB-system to another.
    Now it's possible to render a three-dimensional visualization of a color space with appropriate
    colors, for instance aRGB. But for a monitor these colors would be clipped almost at the
    sRGB boundary.
    Of course it's not possible to render pRGB correctly, because the blue and green primaries
    are outside the human gamut, and the red primary would be too dark. Therefore it's wise
    to use pRGB only for regions of real world colors.
    There might still be an objection: the reference system contains only spectral colors.
    How do the less saturated colors look? This was clarified by Graßmann: the vector
    addition of any two colors is valid both mathematically and in appearance.
    These explanations may also help to end the dispute about the 'basic colors', especially
    for painting. Many artists have invented their own system. The answer is simple: basic
    colors are like primaries. Their location in the XYZ-space defines, which colors can be
    created with positive weight factors or mixing ratios. By the way: one shouldn't worry here
    about the difference between additive and subtractive color mixing.
    One may have serious doubts whether Graßmann's laws and the whole of CIE colorimetry
    are really valid, especially considering numerous optical illusions [5] and all these special
    effects in color appearance, like Helmholtz-Kohlrausch and other nonlinearities.
    It's perhaps surprising that CIE colorimetry works so very well for image processing and
    device calibration for monitors, printers and cameras.
    Finally I would like to mention another scientist, who is not as well known as others,
    probably because his work is mathematically difficult: Jozef B. Cohen [6].
    His work has been continued by William Thornton, Michael Brill and James Worthey
    (for a further search).
    Best regards --Gernot Hoffmann
    [1] Hermann Günther Graßmann (Grassmann)
    http://en.wikipedia.org/wiki/Hermann_Grassmann
    [2]
    http://de.wikipedia.org/wiki/Gra%C3%9Fmannsche_Gesetze
    [3] Claudio Oleari
    http://www.slidefinder.net/o/oleari10/32197498/p3
    (6th slide).
    [4]
    http://en.wikipedia.org/wiki/Grassmann%27s_law_%28optics%29
    [5]
    http://www.michaelbach.de/ot/index.html
    [6]
    Jozef B.Cohen
    Visual Color and Color Mixture
    http://books.google.de/books?id=W8QeI5di7t4C&pg=PR13&lpg=PR13&dq=cohen+color&source=bl&ots=aYGoI0I99g&sig=IJEgdJjk3yDhi-1NvTD4GdyUIXk&hl=de&sa=X&ei=614hVM_dFsrMyAPbzoDwCA&ved=0CGsQ6AEwCA#v=onepage&q=cohen%20color&f=false

  • GeoRaster querying returns negative and invalid cell coordinates

    Hi!
    I'm using Oracle 10.2.0.1 and loading raster data into GeoRaster - loading all works but when querying cell coordinates I get negative and unexpected values. Here is my process:
    -- Create GeoRaster table
    DROP TABLE tm;
    CREATE TABLE tm (id NUMBER, description VARCHAR2(50), image SDO_GEORASTER);
    -- Create trigger to keep metadata up to date
    EXECUTE sdo_geor_utl.createDMLTrigger('TM', 'IMAGE');
    -- Create raster data table
    DROP TABLE tm_rdt;
    CREATE TABLE tm_rdt OF SDO_RASTER
    (PRIMARY KEY (rasterID, pyramidLevel, bandBlockNumber, rowBlockNumber, columnBlockNumber))
    TABLESPACE tbsp
    NOLOGGING LOB(rasterBlock)
    STORE AS lobseg (TABLESPACE tbsp CHUNK 8192 CACHE READS NOLOGGING PCTVERSION 0 );
    -- Grant privs on file locations
    exec dbms_java.grant_permission( 'PUBLIC', 'SYS:java.io.FilePermission', 'D:\Users\tm.tif','read,write' );
    exec dbms_java.grant_permission( 'TEST', 'SYS:java.io.FilePermission', 'D:\Users\tm.tif','read,write' );
    exec dbms_java.grant_permission( 'MDSYS', 'SYS:java.io.FilePermission', 'D:\Users\tm.tif','read,write' );
    exec dbms_java.grant_permission( 'PUBLIC', 'SYS:java.io.FilePermission', 'D:\Users\tm.tfw','read,write' );
    exec dbms_java.grant_permission( 'TEST', 'SYS:java.io.FilePermission', 'D:\Users\tm.tfw','read,write' );
    exec dbms_java.grant_permission( 'MDSYS', 'SYS:java.io.FilePermission', 'D:\Users\tm.tfw','read,write' );
    -- Initialise GeoRaster object and import image
    DECLARE
    l_gr SDO_GEORASTER;
    BEGIN      
    -- Initialise GeoRaster objects
    INSERT INTO tm (id, description, image)
    VALUES (1, 'TM', sdo_geor.init('TM_RDT'));
    -- Import the images
    SELECT image INTO l_gr FROM tm WHERE id = 1 FOR UPDATE;
    SDO_GEOR.IMPORTFROM(l_gr, 'spatialExtent=true', 'TIFF', 'file', 'D:\Users\tm.tif', 'WORLDFILE', 'file', 'D:\Users\tm.tfw,3035');
    UPDATE tm SET image = l_gr WHERE id = 1;
    COMMIT;
    END;
    DECLARE
    gr sdo_georaster;
    BEGIN
    SELECT image INTO gr FROM tm WHERE id = 1 FOR UPDATE;
    sdo_geor.setModelSRID(gr, 3035);
    gr.spatialExtent := sdo_geor.generateSpatialExtent(gr);
    UPDATE tm SET image = gr WHERE id = 1;
    COMMIT;
    END;
    The raster file was originally a .MAP file which I converted to a .TIF file via ArcCatalog 9.2.
    I created a World File whose contents are:
    5000
    0
    0
    -5000
    -1697500
    2697500
    The .MAP file and .TIF file have the following Spatial Reference (Oracle SID 3035):
    PROJCS["PCS_Lambert_Azimuthal_Equal_Area",GEOGCS["GCS_User_Defined",DATUM["D_User_Defined",SPHEROID["User_Defined_Spheroid",6378388.0,0.0]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Lambert_Azimuthal_Equal_Area"],PARAMETER["False_Easting",0.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",9.0],PARAMETER["Latitude_Of_Origin",48.0],UNIT["Meter",1.0]
    In the Raster file:
    Columns, Rows = 680, 810
    Number of Bands = 1
    Cellsize = 5000,5000
    Extent Top = 2700000
    Extent Left = -1700000
    Extent Right = 1700000
    Extent Bottom = -1350000
    Origin Location = Upper Left
    So, I loaded the raster into Oracle all OK, but when I try to find the cell coordinate for a real-world coordinate in WGS84 (the location of a point in Iceland), it returns invalid cell coordinates:
    SELECT     sdo_geor.getCellCoordinate(t.image, 0, SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-19.4833, 64.6833, 0), NULL, NULL))
    FROM      tm t;
    Returns: SDO_NUMBER_ARRAY(-443, 930) which isn't a true cell coord in my raster??
    Also, if I test out to get the corners of the raster:
    SELECT sdo_geor.getCellCoordinate(image, 0, sdo_geometry(2001, 3035, sdo_point_type(1700000,2700000,null), null,null)) FROM tm WHERE id=1;
    This should return the top right corner at (680, 0), but it actually returns SDO_NUMBER_ARRAY(0, 680) - the other way round?!
    I'm slowly going mad over this, so any ideas/thoughts would be greatly appreciated!!!

    Hi,
    I just did some tests. The results you got are actually right. The point (-19.4833, 64.6833, 0) you give is in 8307. After it is transformed into coordinate system 3035 (the system your GeoRaster is in), we get the model coordinates (2954416.89, 4914665.16, 0). The model coordinate of the upper-left corner cell of your GeoRaster is (-1697500, 2697500), so the cell coordinates of the given point are col = int((2954416.89 - (-1697500))/5000) = int(930.38) = 930, and row = int((4914665.16 - 2697500)/(-5000)) = int(-443.43) = -443. That is what you got. The point you give is outside of your GeoRaster, above its top edge.
    SQL> select sdo_cs.transform(
    sdo_geometry(2001, 8307, SDO_POINT_TYPE(-19.4833, 64.6833, 0), null,null),
    3035) from dual; 2 3
    SDO_CS.TRANSFORM(SDO_GEOMETRY(2001,8307,SDO_POINT_TYPE(-19.4833,64.6833,0),NULL,
    SDO_GEOMETRY(2001, 3035, SDO_POINT_TYPE(2954416.89, 4914665.16, 0), NULL, NULL)
    The point (1700000, 2700000) is at the top-right corner, so its cell coordinates should be (0, 680). Note that the cell coordinates are given in (row, column) order.
    Please let me know if you have further questions.
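    The same ground-to-cell arithmetic written out as a small hedged sketch (the origin and cell size come from the world file above; a Java int cast truncates toward zero, matching the int() used in the reply):

    public class GroundToCell {
        public static void main(String[] args) {
            // Upper-left cell origin and cell size from the world file:
            // 5000, 0, 0, -5000, -1697500, 2697500
            double originX = -1697500, originY = 2697500;
            double cellWidth = 5000, cellHeight = -5000; // negative: rows grow downwards

            // The Iceland point after transforming from 8307 to 3035
            double modelX = 2954416.89, modelY = 4914665.16;

            int col = (int) ((modelX - originX) / cellWidth);   // 930
            int row = (int) ((modelY - originY) / cellHeight);  // -443, i.e. above the raster
            System.out.println("row=" + row + ", col=" + col);
        }
    }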

  • How can I get the coordinate in the layer coordinate system while doing the iterate?

    I found that the position (x, y) the iterate function tells me is not right sometimes.
    Actually, it happens when the top of the layer is negative. The "y" is in the comp coordinate system (maybe?) but not the layer's. In other words, the position with y = 0 is not the top.
    More specifically, I have got the color map of a layer and I am able to get the color of any pixel in the layer. But I can't set the colors one for one when I use "iterate", because there is an offset.
    Am I supposed to get the offset? Is there any way to access the whole layer directly if I use the iterate function?

    So indeed it's the offset at play.
    AE will diminish the buffer in 20% increments, and not for each pixel that
    goes out of sight. It's not really in relation to the upper left corner;
    it's all about what part of your layer buffer is out of sight.
    There's also an "iterate offset" function. Check it out; perhaps it will
    save you some trouble.

  • How can I get the X and Y coordinates of an object in Xcode, ApplescriptObjc?

    How can I get the value of a specified object's X and Y coordinates in Xcode, using ApplescriptObjc? I'm hoping for something like:
    myObject's currentPosition()
    // Which would return {150, 100} for the X and Y of that object.

    Actually, this is straight from basic physics.
    Assuming that the x and y values you get using that AppleScript/Objective-C code are accurate, to move the object you would do
    set x to x + dx * speed
    set y to y + dy * speed
    with dx and dy being integers with a value of -1 or 1 which indicate the direction the object is moving. Remember the origin in OS X is the middle of the frame or screen, with positive x and y values moving to the upper right quadrant and negative x and y values moving to the lower left. Your basic Cartesian coordinate system. Speed is also an integer and models the speed of the object.
    Once you've set x and y to their new values you would write  them back to the object.
    This is so I can find how much distance (in pixels) is being travelled when I animate an object to move from one position to another, to apply it to my desired velocity (of pixels per second), to find out how long the object should take to move.
    This part has got me stumped.
    To make an object move you need to change its x and y coordinates. To get smooth animation you need to change them on a regular schedule at a rate fast enough to avoid jerky motion. Given the refresh rate of most monitors and other factors, a rate of 60 times per second gives good results. Between each tick of the clock you change the x and y coordinates by some amount; the size of that amount models speed. Add a small change and the object moves slowly; add a big amount and the object moves quickly. Keep the amount of change constant and the object will move at a constant speed. Increase (or decrease) the amount of change between each tick and the object will accelerate (or decelerate).
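    The same tick-based idea, sketched in Java/Swing purely to make the bookkeeping concrete (a javax.swing.Timer firing roughly 60 times per second; the class, field names and step sizes are illustrative, not taken from the thread):

    import javax.swing.Timer;

    public class MoveSketch {
        int x = 0, y = 0;          // current position
        final int dx = 1, dy = 1;  // direction on each axis: -1 or 1
        final int speed = 3;       // pixels per tick; bigger means faster

        void start() {
            // ~60 ticks per second; each tick nudges the position by dx*speed, dy*speed.
            new Timer(1000 / 60, e -> {
                x += dx * speed;
                y += dy * speed;
                // ...write x and y back to the on-screen object here...
            }).start();
        }
    }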
