Eye Position

Hi, I'm a new Java3D user. I have a problem: I'm working with objects whose linear measures are in millimeters, and when the scene is created the objects are far away from the eye position. Conversely, when the objects are very big, the eye position is very close. How can I calculate the eye position so that the objects in the scene always fit inside the image plate, independently of their linear measures?

Still waiting for a response....
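One common way to handle this (a minimal sketch, not from the thread; it assumes a SimpleUniverse-style setup and that the scene bounds are available as a BoundingSphere) is to place the eye on the +Z axis at a distance derived from the bounding-sphere radius and the view's field of view, so the whole scene fits no matter what units the geometry uses:

    import javax.media.j3d.BoundingSphere;
    import javax.media.j3d.Transform3D;
    import javax.vecmath.Point3d;
    import javax.vecmath.Vector3d;
    import com.sun.j3d.utils.universe.SimpleUniverse;

    static void fitSceneToView( SimpleUniverse universe, BoundingSphere sceneBounds )
    {
      Point3d center = new Point3d();
      sceneBounds.getCenter( center );
      double radius = sceneBounds.getRadius();
      double fov = universe.getViewer().getView().getFieldOfView(); // in radians
      // Distance at which a sphere of this radius roughly fills the field of view.
      double distance = radius / Math.tan( fov / 2.0 );
      Transform3D t3d = new Transform3D();
      t3d.setTranslation( new Vector3d( center.x, center.y, center.z + distance ) );
      universe.getViewingPlatform().getViewPlatformTransform().setTransform( t3d );
    }

With the default 45-degree field of view this puts the eye roughly 2.4 bounding radii beyond the scene centre, whether the model is millimeters or meters across.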

Similar Messages

  • Left and Right eye views

    I have written an application that displays the left eye view in one window, A, and the right eye in another window, B.
    Extract from the code:
    >>>
    Canvas3D canvas3D_A = new Canvas3D(gc);
    canvas3D_A.setMonoscopicViewPolicy(View.LEFT_EYE_VIEW); // <--
    Canvas3D canvas3D_B = new Canvas3D(gc);
    canvas3D_B.setMonoscopicViewPolicy(View.RIGHT_EYE_VIEW); // <--
    View view = new View();
    view.addCanvas3D(canvas3D_A);
    view.addCanvas3D(canvas3D_B);
    ViewPlatform viewPlatform = new ViewPlatform();
    view.attachViewPlatform(viewPlatform);
    <<<
    The problem is that the left eye image is translated to the left and the right eye image to
    the right. It seems like the distances correspond to the PhysicalBody defaults:
    left eye position: (-0.033, 0.0, 0.0)
    and
    right eye position: (0.033, 0.0, 0.0)
    I want the images to be centered while preserving the stereo effect. Which parameters etc. should I manipulate?

    Sorry if I probably used the "wrong" terminology.
    In fact you can watch these images with "polarized glasses" as well.
    I also initially thought that J3D would provide the option of doing the job directly.
    One day my son got polarized glasses (colored red and blue) from a children's comics magazine, and I wondered whether it was possible to easily create that kind of image with Java 3D since, as you also did, J3D offers the facility to set the right/left monoscopic eye view.
    I didn't manage to find a direct instruction to render the combined result, so I simply used the algorithm consisting of:
    set the right monoscopic view
    get the rendered Canvas3D image (as a BufferedImage) and apply a Java image filter (i.e. colorize with red)
    set the left monoscopic view
    get the rendered image and apply a Java image filter (i.e. colorize with blue)
    merge both BufferedImages (red and blue) into a single image
    ...and finally watch the result with the polarized glasses.
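    For reference, a minimal sketch of the merge step described above (the names are illustrative and it assumes you already have the two rendered BufferedImages; the channel assignment follows the post, red from the right-eye render and blue from the left-eye render, so swap them if your glasses are the other way around):

    import java.awt.image.BufferedImage;

    static BufferedImage mergeAnaglyph( BufferedImage rightEye, BufferedImage leftEye )
    {
      int w = Math.min( rightEye.getWidth(), leftEye.getWidth() );
      int h = Math.min( rightEye.getHeight(), leftEye.getHeight() );
      BufferedImage out = new BufferedImage( w, h, BufferedImage.TYPE_INT_RGB );
      for ( int y = 0; y < h; y++ ) {
        for ( int x = 0; x < w; x++ ) {
          int red  = ( rightEye.getRGB( x, y ) >> 16 ) & 0xFF; // red channel from the right eye
          int blue = leftEye.getRGB( x, y ) & 0xFF;            // blue channel from the left eye
          out.setRGB( x, y, ( red << 16 ) | blue );            // mask with 0x00FFFF instead for a cyan left eye
        }
      }
      return out;
    }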

  • Qosmio F750 and external 3D monitor

    Can I connect the IR emitter to use Nvidia active shutter glasses with an external 3D monitor, such as a 3D video projector?
    How do I install the emitter driver?

    To be honest I'm a little bit confused.
    Not quite sure what you are doing exactly.
    Let's clarify some basic things.
    The Qosmio F750 "glasses-free 3D" notebook was equipped with a special internal autostereoscopic 3D display which does not require the additional use of 3D glasses.
    The technology is called active lens; it uses the internal webcam to scan and track the eye position (face tracking) in order to ensure the best 3D effect.
    If you want to watch 3D on an external monitor, you need a monitor which supports 3D technology (120Hz) and the special 3D Vision 2 Wireless Glasses Kit, because the Nvidia 3D Vision technology is designed for use with active shutter glasses and 120Hz monitors.
    You can of course also play BD 3D movies using the integrated Blu-ray optical drive and the bundled player on an external 3D TV connected to the notebook's HDMI 1.4 video output.
    The 3D TV requires 3D glasses too.

  • Increase in java objects ?

    We have 2 different instances of jmap output. We see an increase in objects like java.lang.String, byte[], java.lang.Object, java.util.TreeMap, java.util.HashMap, etc. Is this something to be worried about? What is the difference between shallow and retained heap in each of the instances?

    I was mistaken:
    setScale turned out not to be so magic.
    I still need to increase the distance between the eye position and the image plate to be able to view scene objects located at z=0 or closer.
    So I still need to find the value of the Z-axis translation.
    I think frustums should be used, but I have never dealt with them.
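    For what it's worth, the Z translation you are after is just a translation applied to the view platform's TransformGroup; a minimal sketch (assuming a SimpleUniverse named universe, with 10.0 standing in for whatever eye distance your scene actually needs):

    TransformGroup vpTG = universe.getViewingPlatform().getViewPlatformTransform();
    Transform3D t3d = new Transform3D();
    vpTG.getTransform( t3d );
    t3d.setTranslation( new Vector3d( 0.0, 0.0, 10.0 ) ); // hypothetical distance along +Z
    vpTG.setTransform( t3d );

    No frustum math is strictly needed for this; the View's front and back clip distances just have to bracket the new eye-to-object range.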

  • Does a rotation matrix keep the perpendicularity of two vectors?

    I know my question is quite silly but let me explain my problem...
    In my program I wrote my own "lookAt" function which takes as parameters two 3D points (eye position and view position) and an up vector. Everything works great but, at one point, when I need to rotate the whole thing, I get a strange error I don't know how to solve:
    I compute my two vectors (the up vector and the view one (view - eye)), and I check that these two vectors are perpendicular (v1.dot(v2) == 0). Then I multiply both vectors by my rotation matrix and get my two new vectors. But when I check whether they are still perpendicular, v1.dot(v2) now equals something like -x.xxxxxxxxxxxxxxxE-XX, with XX around 16 or 17. I know the problem comes from the precision of my numbers, but does someone know how I can solve it to preserve the perpendicularity?
    I really need this perpendicularity because I need to compute a new transform matrix, and with this problem that matrix is not congruent, so I get an error.
    Thanks!!

    I did the same and had an error too; post your calculation code and I will try to help if I can.
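    For reference, the usual workaround is not to rely on the rotated vectors staying exactly perpendicular, but to re-orthogonalize the up vector against the view direction after each rotation (Gram-Schmidt). A minimal sketch with the vecmath classes (it assumes both vectors are already normalized):

    import javax.vecmath.Vector3d;

    static void reorthogonalize( Vector3d view, Vector3d up )
    {
      // Remove the component of 'up' that lies along 'view', then renormalize;
      // afterwards up.dot(view) is zero again, up to rounding.
      double d = up.dot( view );
      up.x -= d * view.x;
      up.y -= d * view.y;
      up.z -= d * view.z;
      up.normalize();
    }

    Alternatively, rebuild the whole basis with two cross products (right = view x up, then up = right x view); either way the stray 1e-16 dot product disappears.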

  • Coordinate system

    I am having trouble getting my local-world-view-screen matrix setup right.
    First, coordinate system is:
    +x to the right
    +y down
    +z into the screen
    Right? This is what I get from the Vector3D class.
    But now that I am thinking of it, DirectX and OpenGL have different coordinate systems, so how does Molehill work that out? Do I need to know which API I am running on to transform to screen space?

    Yes, things are starting to look better now. Thank you both. I incorrectly assumed the coordinate system was as described in the Vector3D class docs.
    I have a cube with each side a different color and now it appears oriented as expected.
    Now I am appending a view-screen matrix built using lookAtRH().  I create an eye position Vector3D that rotates around the Up/Y axis, looking at (0,0,0) and there is nothing on screen.  I looked at my DirectX engine lookAtRH() equivalent function and notice that in the Molehill implementation there is:
    _w,x = _x.dotProduct(eye);
    _w,y = _y.dotProduct(eye);
    _w,z = _z.dotProduct(eye);
    _w,w = 1.0;
    but in the directX version it is expressed as the negative of the dot products, i.e.:
    _w,x = -_x.dotProduct(eye);
    _w,y = -_y.dotProduct(eye);
    _w,z = -_z.dotProduct(eye);
    _w,w = 1.0;
    so I tried this and the cube appears on screen!! However, the camera is not rotating around the Y axis; it is rotating around the cube off-axis.
    Have you used the lookAtRH() method and had good results, or are you using your own math?
    Does the matrix from lookAtRH() get appended to the stack or do I invert this matrix and append it?
    I am not an expert at the math, but I have been able to work this out on PS2, PSP, OpenGL, DirectX, WPF, and now Molehill. I know that PerspectiveMatrix3D.as is a work in progress because I have gotten it from several sources/example programs, and I see for instance that the OrthoRH() method has a typo/bug in it that was fixed in the teapot example.
    Any help is appreciated!!
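    For what it's worth, the sign difference makes sense: a view matrix is the inverse of the camera's world transform, so its translation terms are the negative dot products of the camera axes with the eye position. A minimal right-handed lookAt sketch (plain Java with vecmath, returning a row-major double[16]; the names are illustrative, not the Molehill API):

    import javax.vecmath.Point3d;
    import javax.vecmath.Vector3d;

    static double[] lookAtRH( Point3d eye, Point3d target, Vector3d up )
    {
      Vector3d z = new Vector3d( eye.x - target.x, eye.y - target.y, eye.z - target.z );
      z.normalize();                        // camera looks down -z in a right-handed system
      Vector3d x = new Vector3d();
      x.cross( up, z );
      x.normalize();                        // camera right axis
      Vector3d y = new Vector3d();
      y.cross( z, x );                      // recomputed camera up axis
      Vector3d e = new Vector3d( eye.x, eye.y, eye.z );
      return new double[] {
        x.x, x.y, x.z, -x.dot( e ),
        y.x, y.y, y.z, -y.dot( e ),
        z.x, z.y, z.z, -z.dot( e ),
        0.0, 0.0, 0.0,  1.0
      };
    }

    Whether you append such a matrix directly or append its inverse depends on whether the function returns the view matrix (negative dot products, as above) or the camera's world matrix (positive ones).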

  • Color banding on raw files on Nikon D800E & Canon 5D Mark II

    I have a standalone Photoshop CS6. I am getting color banding in my raw files, not my JPEGs, on both of these cameras. Any ideas?

    Dennis,
    The Secure Digital card is the one used in the Nikon 800E; the camera store sent me a 32GB ProMaster when I bought the camera. When I started seeing this problem pop up I thought it was the card, so I bought a new one, a SanDisk 32GB, but the problem stayed. So I thought the camera was the one with the problem; it is now at the Nikon repair place. The other weird thing about this problem is that the color banding will only pop up in the images at the front end of the shoot, in the first 30 shots, and no more than a few images would be affected in that 30.
    Now, after updating the drivers of the video card and changing to 16 bit, I opened the downloaded Canon & Nikon files on the computer and the problem was still there, but when I open the files on the card with the card reader there is no problem.
    I did do the memory diagnosing and it showed no problems. I also went into Bridge and purged the cache.
    As for the position of the shot of the young lady's feet: I have a lot of full-body shots, Mom wanted some close-ups, and since my Nikon is in the shop I was using my Canon and I don't own a zoom lens for the Canon. But cropping takes care of that.
    Canon uses a CompactFlash card and Nikon uses a Secure Digital card, so no to that question.
    Curt,
    When you say a single-purpose USB card reader, are you talking about one dedicated to CompactFlash and one just for Secure Digital? Also, I always, ALWAYS format my cards in CAMERA.
    Dennis, I wouldn't use the corrupt photo because of her eye position, but I would crop like this.
    I cannot wait until I get my Nikon back to see if all that you guys have helped me with has fixed the problem. I will be watching my raw files to see if something pops back up. If I think of anything else that seems to be linked to this problem I will post here.
    Kathy

  • Raytracing a Java3D scene's View

    Hi,
    I have a simple Java3D modelling app, which I am trying to hook up a basic Raytracer to.
    Scene Graph looks something like the following:
                     VirtualUniverse, etc
                              |
                           Locale
                              |
        ContentBranchGroup         ViewsBranchGroup
                |                          |
               ...                 TransformGroup [*]
                                           |
                                     ViewPlatform  -  View
    I have a simple mouse pan behaviour, a mouse zoom behaviour, and a mouse orbit behaviour that all modify the TransformGroup marked with a [*] above, thereby changing the "camera" by which the user looks at the scene. The behaviours work very well.
    I am trying to hook the raytracer up to "look" through this camera. (I have all the other raytracer stuff working)
    The raytracer requires the following:
    - The camera eye position
    - The camera target (look at) position
    - The camera up vector
    I am trying to extract this information from the [*] marked TransformGroup's Transform3D object but it is very difficult. I can get the eye position using the transform3D.get(Vector3d) method.
    I have tried so many ways to get the target and up vectors. I know the principles for getting them, but the Transform3D get() methods are, to say the least, a nightmare for me to work with. CAN ANYBODY HELP ME?
    If it helps I have managed to get the Euler angles for the camera rotation (converting the rotation Quaternion)...
    What next? Please...!
    Thanks in advance

    Yes, I understand what you are trying to do. Essentially, what it requires is that you do some linear algebra from the position and orientation of the camera object. So, you can get the following:
    Camera position:
    the actual point location of the camera in the virtual world:
    viewTransform3D.get( Vector3f position );
    Camera "look at" position:
    I assume you don't actually need the point on the X-Y plane (or at least, you shouldn't), but a point along the path in which the camera points (we call the VPN, or view-plane normal). So, in essence, when you take the "look at point" and subtract the "camera position point", you get a vector in 3d space which tells you the direction the camera is pointing. Whichever one you need, you will have to calculate that. More on this in a moment.
    Camera "up vector":
    This is another one you will have to calculate based on the location of the camera and its current orientation. This one again is a vector based on the original "up" location (also called VUP). More on this one right now...
    A good way to resolve this issue is to calculate these vectors based on two points defined at the beginning of your application, and use them to calculate the camera's focus point (via the VPN) and the camera's up vector. It's pretty simple and easy to implement. Basically, when you set up your camera (view platform), you set the camera position to some default location. When you set this default location, create 2 points which represent the end of the up vector and the end of the VPN, respectively. Then, whenever the transform group which contains the camera's transform is manipulated, simply take that Transform3D and use its matrix to modify the locations of your two initial points, thus giving you the end of the up vector and the end of the VPN transformed by the same transform used to rotate the camera. Then simply subtract the camera's location point from each of these and you have the up vector and the VPN, ready for your ray tracer.
    I've included some sample code to show you what I'm talking about. Check out the canvas' mouse listener to find the calculations for the VUP and VPN. It sounds like your implementation requires a look at Point3d, so what you could do is just take the camera's location and add some multiple of the VPN to get a look at point.
    Anyway, here's the code...
    import com.sun.j3d.utils.behaviors.vp.OrbitBehavior;
    import com.sun.j3d.utils.geometry.ColorCube;
    import com.sun.j3d.utils.universe.SimpleUniverse;
    import java.awt.Dimension;
    import java.awt.GraphicsConfiguration;
    import java.awt.GraphicsDevice;
    import java.awt.GraphicsEnvironment;
    import javax.media.j3d.AmbientLight;
    import javax.media.j3d.Appearance;
    import javax.media.j3d.BoundingSphere;
    import javax.media.j3d.BranchGroup;
    import javax.media.j3d.Canvas3D;
    import javax.media.j3d.DirectionalLight;
    import javax.media.j3d.Group;
    import javax.media.j3d.GraphicsConfigTemplate3D;
    import javax.media.j3d.Light;
    import javax.media.j3d.Material;
    import javax.media.j3d.PhysicalBody;
    import javax.media.j3d.PhysicalEnvironment;
    import javax.media.j3d.PointLight;
    import javax.media.j3d.PolygonAttributes;
    import javax.media.j3d.Shape3D;
    import javax.media.j3d.TransformGroup;
    import javax.media.j3d.Transform3D;
    import javax.media.j3d.View;
    import javax.media.j3d.ViewPlatform;
    import javax.media.j3d.TransformGroup;
    import javax.swing.BoxLayout;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.vecmath.Color3f;
    import javax.vecmath.Color4f;
    import javax.vecmath.Point3d;
    import javax.vecmath.Point3f;
    import javax.vecmath.Vector3d;
    import javax.vecmath.Vector3f;
    /**
     * The <code>SimpleViewCalculator</code> class calculates the up vector and the
     * view plane normal for a camera based on its current position and the
     * transformation of two default up vector and VPN reference points.
     * @version 1.0
     */
    public class SimpleViewCalculator
      extends JFrame
    {
      // CONSTRUCTOR
      /**
       * Creates a new <code>SimpleViewCalculator</code> instance.
       */
      public SimpleViewCalculator()
      {
        // Create the canvas and add it to the universe.
        canvas = initializeCanvas();
        universe = new SimpleUniverse( canvas );
        // Create the content for the virtual universe.
        worldRoot = createWorldObjects();
        worldRoot.addChild( createColorCube( 0.0f, 0.0f, 0.0f, 5.0f ) );
        universe.addBranchGraph( worldRoot );
        universe.getViewingPlatform().
          setViewPlatformBehavior( createOrbitBehavior() );
        // Set the clipping distances for the view.
        universe.getViewer().getView().setBackClipDistance( 1000.0 );
        universe.getViewer().getView().setFrontClipDistance( 0.1 );
        setCameraPosition( 0.0f, 0.0f, 50.0f );
        upPoint = new Point3d( 0.0, 1.0, 0.0 );
        lookAtPoint = new Point3d( 0.0, 0.0, -1.0 );
        this.getContentPane().setLayout(
          new BoxLayout( this.getContentPane(), BoxLayout.Y_AXIS ) );
        this.getContentPane().add( canvas );
        canvas.addMouseListener(
          new MouseAdapter()
          {
            /**
             * The mouseClicked method of this mouse listener is where the work of
             * calculating the up vector and the view plane normal is done.
             */
            public void mouseClicked( MouseEvent event )
            {
              TransformGroup viewPlatformTG =
                universe.getViewingPlatform().getViewPlatformTransform();
              Transform3D cameraT3D = new Transform3D();
              Vector3d cameraPos = new Vector3d();
              viewPlatformTG.getTransform( cameraT3D );
              cameraT3D.get( cameraPos );
              // Calculate the "up" vector.
              Point3d newUpPoint = new Point3d();
              cameraT3D.transform( upPoint, newUpPoint );
              Vector3d upVector = new Vector3d();
              upVector.x = newUpPoint.x - cameraPos.x;
              upVector.y = newUpPoint.y - cameraPos.y;
              upVector.z = newUpPoint.z - cameraPos.z;
              upVector.normalize();
              System.err.println( "new up vector = " + upVector );
              // Calculate the "vpn" or direction the camera is pointing.
              Point3d newLookAtPoint = new Point3d();
              cameraT3D.transform( lookAtPoint, newLookAtPoint );
              Vector3d vpn = new Vector3d();
              vpn.x = newLookAtPoint.x - cameraPos.x;
              vpn.y = newLookAtPoint.y - cameraPos.y;
              vpn.z = newLookAtPoint.z - cameraPos.z;
              vpn.normalize();
              System.err.println( "new view plane normal = " + vpn );
            }
          } );
        this.setVisible( true );
        this.setSize( new Dimension( 300, 375 ) );
      }
      // METHODS
      /**
       * Initialize a Canvas3D with the default graphics configuration for the
       * user's system.
       * @return a <code>Canvas3D</code> value
       */
      private Canvas3D initializeCanvas()
      {
        GraphicsConfigTemplate3D tmpl = new GraphicsConfigTemplate3D();
        GraphicsEnvironment env =
          GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice device = env.getDefaultScreenDevice();
        GraphicsConfiguration config = device.getBestConfiguration( tmpl );
        Canvas3D canvas3D = new Canvas3D( config );
        return canvas3D;
      }
      /**
       * Create an orbit behavior for the canvas and simple universe.
       */
      private OrbitBehavior createOrbitBehavior()
      {
        OrbitBehavior orbit =
          new OrbitBehavior( canvas, OrbitBehavior.REVERSE_ALL );
        BoundingSphere bounds =
          new BoundingSphere( new Point3d( 0.0, 0.0, 0.0 ), 1000000000 );
        orbit.setSchedulingBounds( bounds );
        orbit.setTranslateEnable( true );
        orbit.setZoomEnable( true );
        orbit.setProportionalZoom( true );
        orbit.setReverseZoom( true );
        return orbit;
      }
      /**
       * Create the objects that will appear in the virtual world and add them to a
       * branch group which is returned to the calling method.
       * @return a <code>BranchGroup</code> value
       */
      private BranchGroup createWorldObjects()
      {
        BranchGroup root = new BranchGroup();
        root.setCapability( BranchGroup.ALLOW_DETACH );
        root.setCapability( Group.ALLOW_CHILDREN_EXTEND );
        root.setCapability( Group.ALLOW_CHILDREN_READ );
        root.setCapability( Group.ALLOW_CHILDREN_WRITE );
        return root;
      }
      /**
       * Create a color cube of <code>scale</code> size, attached to a branch group.
       * This method returns a color cube attached to that branch group.
       */
      private BranchGroup createColorCube( float x, float y, float z, double scale )
      {
        BranchGroup colorCubeRoot = new BranchGroup();
        colorCubeRoot.setCapability( BranchGroup.ALLOW_DETACH );
        TransformGroup colorCubeTG = new TransformGroup();
        Transform3D colorCubeT3D = new Transform3D();
        colorCubeT3D.set( new Vector3f( x, y, z ) );
        colorCubeTG.setTransform( colorCubeT3D );
        colorCubeRoot.addChild( colorCubeTG );
        ColorCube cube = new ColorCube( scale );
        colorCubeTG.addChild( cube );
        return colorCubeRoot;
      }
      /**
       * Set the position of the camera.
       */
      private void setCameraPosition( float x, float y, float z )
      {
        Transform3D vt = new Transform3D();
        Point3d eye = new Point3d( x, y, z );
        Point3d center = new Point3d( 0, 0, 0 );
        Vector3d up = new Vector3d( 0.0, 1.0, 0.0 );
        vt.lookAt( eye, center, up );
        vt.invert();
        vt.setTranslation( new Vector3d( eye.x, eye.y, eye.z ) );
        universe.getViewer().getViewingPlatform().
          getViewPlatformTransform().setTransform( vt );
      }
      /**
       * Main method for the application.
       */
      public static void main( String[] args )
      {
        new SimpleViewCalculator();
      }
      // ATTRIBUTES
      /** A simple 3D virtual world. */
      private SimpleUniverse universe;
      /** Component to paint the contents of the virtual world. */
      private Canvas3D canvas;
      /** Allows mouse rotations, etc. */
      private OrbitBehavior orbitBehavior;
      /** View object for the simple universe. */
      private View view;
      /** Branch group for adding world objects. */
      private BranchGroup worldRoot;
      /** Branch group for attaching lights. */
      private BranchGroup lightRoot;
      /** Point light for illuminating the scene. */
      private PointLight pointLight;
      /** A point used to calculate the up vector. */
      private Point3d upPoint;
      /** A point used to calculate the VPN. */
      private Point3d lookAtPoint;
    }
    Hope that helps. You should check this on your own to make sure no mistakes were made, I didn't have too much time to test it.
    Cheers,
    Greg

  • Setting up Eclipse for C++ OpenGL development on Arch (Help, please!)

    I've recently switched over to Arch (and love it) from Windows 7, and am trying to start developing in OpenGL using C++. I've used Eclipse a lot on the Windows side for Java development, but I'm having trouble referencing the OpenGL libraries in the C++ stuff.
    My code is as follows:
    #include <GL/gl.h>
    #include <GL/freeglut.h>
    int main(int argc, char **argv) {
    glutInit(&argc, argv);
    return 0;
    }
    I don't believe the problem's there because the code won't even begin to compile.
    I tried to find and select the libraries I need in the project properties, but I really have no idea what I'm doing in there. In the current state of things, this is my console output:
    **** Build of configuration Debug for project CGame ****
    make all
    Building target: CGame
    Invoking: Cross G++ Linker
    g++ -L/usr/include/GL -o "CGame" ./main.o -l/usr/include/GL
    /usr/bin/ld: cannot find -l/usr/include/GL
    collect2: error: ld returned 1 exit status
    make: *** [CGame] Error 1
    **** Build Finished ****
    This is what some parts of Eclipse's configuration look like:
    Can anybody lend me a hand? I've been searching around on the Internet for a little while now, but I'm getting tired of all the shitty information there is. Many thanks if you can point out how to include the OpenGL/freeglut libraries in my project. If you want to see how my Eclipse stuff is configured, I can show you over Skype or TeamViewer.
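    For what it's worth, the failing line in the build output passes a directory to -l, but -l expects a library name (a library directory would go after -L instead). A link line that should work for the freeglut snippet above, assuming mesa and freeglut are installed, is:
    g++ main.cpp -o CGame -lglut -lGL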

    Got it working? Great!
    Personally, I use the GLFW window library for OpenGL coding. You can get it by:
    pacman -S glfw
    triangle.cpp
    //========================================================================
    // This is a small test application for GLFW.
    // The program opens a window (640x480), and renders a spinning colored
    // triangle (it is controlled with both the GLFW timer and the mouse).
    //========================================================================
    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/glfw.h>
    int main( void )
    {
        int width, height, x;
        double t;
        // Initialise GLFW
        if( !glfwInit() )
        {
            fprintf( stderr, "Failed to initialize GLFW\n" );
            exit( EXIT_FAILURE );
        }
        // Open a window and create its OpenGL context
        if( !glfwOpenWindow( 640, 480, 0,0,0,0, 0,0, GLFW_WINDOW ) )
        {
            fprintf( stderr, "Failed to open GLFW window\n" );
            glfwTerminate();
            exit( EXIT_FAILURE );
        }
        glfwSetWindowTitle( "Spinning Triangle" );
        // Ensure we can capture the escape key being pressed below
        glfwEnable( GLFW_STICKY_KEYS );
        // Enable vertical sync (on cards that support it)
        glfwSwapInterval( 1 );
        do
        {
            t = glfwGetTime();
            glfwGetMousePos( &x, NULL );
            // Get window size (may be different than the requested size)
            glfwGetWindowSize( &width, &height );
            // Special case: avoid division by zero below
            height = height > 0 ? height : 1;
            glViewport( 0, 0, width, height );
            // Clear color buffer to black
            glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
            glClear( GL_COLOR_BUFFER_BIT );
            // Select and setup the projection matrix
            glMatrixMode( GL_PROJECTION );
            glLoadIdentity();
            gluPerspective( 65.0f, (GLfloat)width/(GLfloat)height, 1.0f, 100.0f );
            // Select and setup the modelview matrix
            glMatrixMode( GL_MODELVIEW );
            glLoadIdentity();
            gluLookAt( 0.0f, 1.0f, 0.0f,   // Eye-position
                       0.0f, 20.0f, 0.0f,  // View-point
                       0.0f, 0.0f, 1.0f ); // Up-vector
            // Draw a rotating colorful triangle
            glTranslatef( 0.0f, 14.0f, 0.0f );
            glRotatef( 0.3f*(GLfloat)x + (GLfloat)t*100.0f, 0.0f, 0.0f, 1.0f );
            glBegin( GL_TRIANGLES );
            glColor3f( 1.0f, 0.0f, 0.0f );
            glVertex3f( -5.0f, 0.0f, -4.0f );
            glColor3f( 0.0f, 1.0f, 0.0f );
            glVertex3f( 5.0f, 0.0f, -4.0f );
            glColor3f( 0.0f, 0.0f, 1.0f );
            glVertex3f( 0.0f, 0.0f, 6.0f );
            glEnd();
            // Swap buffers
            glfwSwapBuffers();
        } // Check if the ESC key was pressed or the window was closed
        while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
               glfwGetWindowParam( GLFW_OPENED ) );
        // Close OpenGL window and terminate GLFW
        glfwTerminate();
        exit( EXIT_SUCCESS );
    }
    Then you can compile the code above with:
    g++ triangle.cpp -o triangle -lglfw -lGLU -lGL
    It just draws a rotating triangle, but it should get you started on something more exciting.

  • Transparency and efficiency

    I'm posting here code that I got off the internet on how to create a transparent window in C#. Try it, it looks cool and is not too heavy on the CPU.
    Why doesn't Java have something like that, a true transparency? Discuss.
    using System;
    using System.Drawing;
    using System.Collections;
    using System.ComponentModel;
    using System.Windows.Forms;
    namespace TransparentForms
        /// <summary>
        /// Summary description for Form2.
        /// </summary>
        public class Transparent : System.Windows.Forms.Form
            internal System.Windows.Forms.GroupBox GroupBox1;
            internal System.Windows.Forms.Button cmdApply;
            internal System.Windows.Forms.NumericUpDown udOpacity;
            internal System.Windows.Forms.Label Label1;
            /// <summary>
            /// Required designer variable.
            /// </summary>
            private System.ComponentModel.Container components = null;
            public Transparent()
                // Required for Windows Form Designer support
                InitializeComponent();
                // TODO: Add any constructor code after InitializeComponent call
            /// <summary>
            /// Clean up any resources being used.
            /// </summary>
            protected override void Dispose( bool disposing )
                if( disposing )
                    if(components != null)
                        components.Dispose();
                base.Dispose( disposing );
            #region Windows Form Designer generated code
            /// <summary>
            /// Required method for Designer support - do not modify
            /// the contents of this method with the code editor.
            /// </summary>
            private void InitializeComponent()
                this.GroupBox1 = new System.Windows.Forms.GroupBox();
                this.cmdApply = new System.Windows.Forms.Button();
                this.udOpacity = new System.Windows.Forms.NumericUpDown();
                this.Label1 = new System.Windows.Forms.Label();
                this.GroupBox1.SuspendLayout();
                ((System.ComponentModel.ISupportInitialize)(this.udOpacity)).BeginInit();
                this.SuspendLayout();
                // GroupBox1
                this.GroupBox1.Controls.AddRange(new System.Windows.Forms.Control[] {
                                                                                        this.cmdApply,
                                                                                        this.udOpacity,
                                                                                        this.Label1});
                this.GroupBox1.Location = new System.Drawing.Point(12, 75);
                this.GroupBox1.Name = "GroupBox1";
                this.GroupBox1.Size = new System.Drawing.Size(268, 116);
                this.GroupBox1.TabIndex = 4;
                this.GroupBox1.TabStop = false;
                // cmdApply
                this.cmdApply.Location = new System.Drawing.Point(172, 64);
                this.cmdApply.Name = "cmdApply";
                this.cmdApply.Size = new System.Drawing.Size(80, 24);
                this.cmdApply.TabIndex = 5;
                this.cmdApply.Text = "Apply";
                this.cmdApply.Click += new System.EventHandler(this.cmdApply_Click);
                // udOpacity
                this.udOpacity.Increment = new System.Decimal(new int[] {
                                                                            5,
                                                                            0,
                                                                            0,
                                                                            0});
                this.udOpacity.Location = new System.Drawing.Point(88, 32);
                this.udOpacity.Name = "udOpacity";
                this.udOpacity.Size = new System.Drawing.Size(48, 21);
                this.udOpacity.TabIndex = 4;
                this.udOpacity.Value = new System.Decimal(new int[] {
                                                                        50,
                                                                        0,
                                                                        0,
                                                                        0});
                // Label1
                this.Label1.Location = new System.Drawing.Point(20, 36);
                this.Label1.Name = "Label1";
                this.Label1.Size = new System.Drawing.Size(56, 16);
                this.Label1.TabIndex = 3;
                this.Label1.Text = "Opacity:";
                // Form2
                this.AutoScaleBaseSize = new System.Drawing.Size(5, 14);
                this.ClientSize = new System.Drawing.Size(292, 266);
                this.Controls.AddRange(new System.Windows.Forms.Control[] {
                                                                              this.GroupBox1});
                this.Font = new System.Drawing.Font("Tahoma", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));
                this.Name = "Form2";
                this.Text = "A Transparent Form";
                this.GroupBox1.ResumeLayout(false);
                ((System.ComponentModel.ISupportInitialize)(this.udOpacity)).EndInit();
                this.ResumeLayout(false);
            #endregion
            private void cmdApply_Click(object sender, System.EventArgs e)
                this.Opacity = (double)udOpacity.Value / 100;
            [STAThread]
            static void Main()
                Application.Run(new Transparent());
    }

    I used the JNA API with jgl, a "Java GL API" written by Robin. It is an open-source, pure-Java implementation of the OpenGL and GLUT calls. Just google it, download the jar and save it to java/jre/lib/ext or your classpath, then compile this code and see transparent 3D graphics on your desktop.
    This code is one of the examples in jgl; I just modified it to use JFrame and JNA calls to add transparency.
    Bilal El Uneis
    /*
     *  movelight.java
     *  This program demonstrates when to issue lighting and
     *  transformation commands to render a model with a light
     *  which is moved by a modeling transformation (rotate or
     *  translate).  The light position is reset after the modeling
     *  transformation is called.  The eye position does not change.
     *  A sphere is drawn using a grey material characteristic.
     *  A single light source illuminates the object.
     *  Interaction:  pressing the left mouse button alters
     *  the modeling transformation (x rotation) by 30 degrees.
     *  The scene is then redrawn with the light in a new position.
     */
    import java.awt.Frame;
    import javax.swing.*;
    import java.io.IOException;
    import java.lang.String;
    import java.lang.System;
    import jgl.GL;
    import jgl.GLUT;
    import jgl.GLCanvas;
    import com.sun.jna.examples.WindowUtils;
    public class movelight extends GLCanvas {
        private int spin = 0;
         *  Initialize material property, light source, lighting model,
         *  and depth buffer.
        private void myinit () {
             myGL.glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
         myGL.glShadeModel (GL.GL_SMOOTH);
         myGL.glEnable (GL.GL_LIGHTING);
         myGL.glEnable (GL.GL_LIGHT0);
         myGL.glEnable (GL.GL_DEPTH_TEST);
         *  Here is where the light position is reset after the modeling
         *  transformation (glRotated) is called.  This places the
         *  light at a new position in world coordinates.  The cube
         *  represents the position of the light.
        public void display () {
             float position [] = {0.0f, 0.0f, 1.5f, 1.0f};
         myGL.glClear (GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
         myGL.glPushMatrix ();
             myGLU.gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
             myGL.glPushMatrix ();
                  myGL.glRotated ((double)spin, 1.0, 0.0, 0.0);
              myGL.glLightfv (GL.GL_LIGHT0, GL.GL_POSITION, position);
              myGL.glTranslated (0.0, 0.0, 1.5);
              myGL.glDisable (GL.GL_LIGHTING);
              myGL.glColor3f (0.0f, 1.0f, 1.0f);
              myUT.glutWireCube (0.1);
              myGL.glEnable (GL.GL_LIGHTING);
             myGL.glPopMatrix ();
             myUT.glutSolidTorus (0.275, 0.85, 8, 15);
         myGL.glPopMatrix ();
         myGL.glFlush ();
        public void myReshape (int w, int h) {
            myGL.glViewport (0, 0, w, h);
            myGL.glMatrixMode (GL.GL_PROJECTION);
            myGL.glLoadIdentity ();
         myGLU.gluPerspective (40.0, (float)w/(float)h, 1.0, 20.0);
            myGL.glMatrixMode (GL.GL_MODELVIEW);
            myGL.glLoadIdentity ();
        /* ARGSUSED2 */
        public void mouse (int button, int state, int x, int y) {
         switch (button) {
             case GLUT.GLUT_LEFT_BUTTON:
              if (state == GLUT.GLUT_DOWN) {
                  spin = (spin + 30) % 360;
                  myUT.glutPostRedisplay ();
              break;
             default:
              break;
        /* ARGSUSED1 */
        public void keyboard (char key, int x, int y) {
         switch (key) {
             case 27:
              System.exit(0);
             default:
              break;
        public void init () {
         myUT.glutInitWindowSize (500, 500);
         myUT.glutInitWindowPosition (0, 0);
         myUT.glutCreateWindow (this);
         myinit ();
         myUT.glutDisplayFunc ("display");
         myUT.glutReshapeFunc ("myReshape");
         myUT.glutMouseFunc ("mouse");
         myUT.glutKeyboardFunc ("keyboard");
         myUT.glutMainLoop ();
        static public void main (String args[]) throws IOException
             try
                   System.setProperty("sun.java2d.noddraw", "true");
            }catch(Exception e){}
         final JFrame mainFrame = new JFrame ();
         mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
         mainFrame.setSize (508, 527);
         movelight mainCanvas = new movelight ();
         mainCanvas.init();
         mainFrame.add (mainCanvas);
         WindowUtils.setWindowAlpha(mainFrame, .7f);
         mainFrame.setVisible (true);
    }

  • Does LabVIEW take over keyboard presses?

    I'm using a C api in my LabVIEW application through a call-library-function-node. I'd like to be able to gather keypress info in one of the functions in the DLL, but it seems like LabVIEW is hijacking the values gathered by the C function from the keyboard. Any info on this? Thanks!

    Thanks for all the replies! I'll respond in order:
    "Instead of a DLL, can you compile your code into an executable, and then call it with System Exec?"
    Haven't tried this yet. The reason I'm using a DLL is that I'm using an API for an out-of-the-box device (an eye-position tracker), and I've written several functions that I need to access, so it would be kind of a pain to write code for multiple executables, but I can do this if it comes down to it.
    "Is there a reason you don't want to monitor the keyboard in LabVIEW? Look at the Connectivity>Input Device Control palette."
    "Another way to monitor the keyboard or mouse in LV is to use an event structure. There are events and filter events for keyboard and mouse."
    Putting these two together since they have the same answer... Long story short, I can't: once the function I'm calling is triggered correctly, it waits for user input on the keyboard, and thus the input read by LabVIEW goes unnoticed (I tried this several different ways).
    "IIRC, CLNs execute in the UI thread.  It seems to me that if you use a CLN to call something that also is trying to access the UI thread, they could deadlock (but I have never tried this).  Could that be the issue?"
    I'm actually a novice C coder and a self-taught LabVIEW user; this sounds super intriguing, EXPLAIN PLEASE!
    Finally, I'd like to mention that I have made a small amount of progress on my own with this issue. I believe that I need to create a window using something like the CreateWindowEx function, part of the native Windows user-interface functions, because when one creates a window, one also creates a process to handle events like user input. I suspect that whatever process is created by LabVIEW to handle user input in the VI is rightly in charge of handling key presses (I also suspect that creating an executable might get around this as well, since the exe would create its own process to handle UI), so it's more like my DLL is actually the one trying to hijack the key presses, not the other way around. If this is so, I should be able to simply create a dummy window (drawing it off screen so I can still see all my front-panel stuff in LabVIEW) to handle the UI and then kill it when I'm done. I'm trying to do this at present, but my novice C coder status is making it a bit slow going. I was hoping some LabVIEW engineer or somebody else might have dealt with this type of issue in the past and could simply send me some code to make it all a moot point...
    Again, thanks for all the replies and suggestions!

  • Translating viewplatform causes trouble

    My scene is set up like this:
    * I have a 2D textured plane at z=0, from x,y=(1,1) to x,y=(-1,-1).
    * There are a couple of objects located on the plane, basically pyramids pointing upwards (towards z+), with their bases at z=0, on the plane. During the course of the application they move across the plane, being translated along the x/y axes; they are scaled down when the scene is set up.
    After that is set up, I add some behaviors to the view platform transform (universe.getViewingPlatform().getViewPlatformTransform()), behaviors like MouseTranslate.
    When I use the behaviors to move the eye position, the 2D plane seems to turn and the pyramids sink through the surface or (when on the other side) rise above the surface. So, instead of just the camera moving (which is what I want), the different objects move through each other while the camera is moving.
    The same thing happens when I attach the behaviors to the rootTransform object.
    Can anybody tell me how to turn this behavior off, i.e. move the camera without changing the rest of the scene? Thanks.
    The pyramid geometry:
      Geometry createGeometry(int face, Color3f color, Color3f topColor) {
        TriangleArray geomList = new TriangleArray(3, TriangleArray.COORDINATES | TriangleArray.COLOR_3);
        Point3f point = new Point3f();
        switch (face) {
        case 0:
          point.set(-1.0f, 1.0f, 0.0f);
          geomList.setCoordinate(0, point);
          geomList.setColor(0, color);
          point.set(-1.0f, -1.0f, 0.0f);
          geomList.setCoordinate(1, point);
          geomList.setColor(1, color);
          point.set(0.0f, 0.0f, 2.0f);
          geomList.setCoordinate(2, point);
          geomList.setColor(2, topColor);
          break;
        case 1:
          point.set(1.0f, -1.0f, 0.0f);
          geomList.setCoordinate(0, point);
          geomList.setColor(0, color);
          point.set(1.0f, 1.0f, 0.0f);
          geomList.setCoordinate(1, point);
          geomList.setColor(1, color);
          point.set(0.0f, 0.0f, 2.0f);
          geomList.setCoordinate(2, point);
          geomList.setColor(2, topColor);
          break;
        case 2:
          point.set(-1.0f, -1.0f, 0.0f);
          geomList.setCoordinate(0, point);
          geomList.setColor(0, color);
          point.set(1.0f, -1.0f, 0.0f);
          geomList.setCoordinate(1, point);
          geomList.setColor(1, color);
          point.set(0.0f, 0.0f, 2.0f);
          geomList.setCoordinate(2, point);
          geomList.setColor(2, topColor);
          break;
        case 3:
          point.set(1.0f, 1.0f, 0.0f);
          geomList.setCoordinate(0, point);
          geomList.setColor(0, color);
          point.set(-1.0f, 1.0f, 0.0f);
          geomList.setCoordinate(1, point);
          geomList.setColor(1, color);
          point.set(0.0f, 0.0f, 2.0f);
          geomList.setCoordinate(2, point);
          geomList.setColor(2, topColor);
          break;
        }
        return geomList;
      }
    //called like this
        removeGeometry(0);
        addGeometry(createGeometry(0, glColor, glTopColor));
        addGeometry(createGeometry(1, glColor, glTopColor));
        addGeometry(createGeometry(2, glColor, glTopColor));
        addGeometry(createGeometry(3, glColor, glTopColor));
    The 2D plane geometry:
            QuadArray plane = new QuadArray(4, GeometryArray.COORDINATES
                                     | GeometryArray.TEXTURE_COORDINATE_2 );
            Point3f p = new Point3f(-1.0f,  1.0f,  0.0f);
            plane.setCoordinate( 0, p);
            p.set(-1.0f, -1.0f,  0.0f);
            plane.setCoordinate( 1, p);
            p.set(1.0f, -1.0f,  0.0f);
            plane.setCoordinate( 2, p);
            p.set(1.0f,  1.0f,  0.0f);
            plane.setCoordinate( 3, p);
            TexCoord2f q = new TexCoord2f( 0.0f,  1.0f);
            plane.setTextureCoordinate(0, 0, q);
            q.set(0.0f, 0.0f);
            plane.setTextureCoordinate(0, 1, q);
            q.set(1.0f, 0.0f);
            plane.setTextureCoordinate(0, 2, q);
            q.set(1.0f, 1.0f);
            plane.setTextureCoordinate(0, 3, q);

    I copied your code into my working 3D scene. Zoomed out a little and saw a nice pyramid pointing up at me, its base on a square surface. I have buttons that allow me to move around the scene. I believe they give the same results as your "moving camera". Everything looks OK to me.
    One thing you may not realize. The 3D scene is in perspective. The pyramid changes appearance as the position of the camera moves. Could this be the problem you are seeing? If not I can send you the code for translating the view platform transform3D. But I'm thinking you already have that part well in hand.

  • AAE: how to track eyes to position photos of my face

    Hi,
    I have been taking photos of my face for already more than two years. Now I am trying to create a short movie with those pictures.
    My problem: although I tried to take identical photos, I am always positioned a little bit differently than the day before.
    Until today I used some really basic programs: a foil on my monitor and Paint to move the picture to the right place, Irfanview to trim the pictures and finally Movie Maker to create the movie. You can imagine how much effort that is when you have to deal with more than 800 photos.
    That's why I thought I could try it with Adobe After Effects using the tracking function. I have already watched several tutorials and read some instructions, but because I want to create a movie composed of photos and not movie clips, I didn't find a way to deal with the problem. My idea was that AE finds both my eyes, moves the picture to the right position and resizes it if necessary. Finally it should trim all pictures to get a proper shape.
    Do you know if that's possible with After Effects or is there another program I should use?
    Thanks for your help,
    Andy

    Won't work; you have to do it manually. AE has no way to detect features across temporally and spatially inconsistent data, which is what a time-lapse sequence is. There are a few experimental programs out there, but they are usually not publicly available and research-only. AE can help you with the scaling and warping by using blending modes like Difference or by reducing opacity, just as effects like Mesh Warp or Optics Compensation may be useful for correcting intra-image warping and lens distortion, but it would still require a lot of manual work. Likewise, the panorama merge and Merge to HDR in Photoshop could potentially be used to create some baseline layered PSDs with aligned layers, but you will still have to fine-tune everything later.
    Mylenium

  • Opening eyes and closing lips in photos

    I have a number of images that I want to use for 3D texturing, but to use them, I need to do a few things:
    1)  open the eyes wider (whenever the subject is squinting);
    2) Flatten the lips into a neutral position if smiling or frowning
    3) Close the lips so teeth are not showing at all if the subject had mouth open (talking or smiling)
    Any advice on the best ways to do this in After Effects?
    I've experimented a little, trying to figure out if effects like Bulge or Mesh Warp or others will let me accomplish what I need.
    I'd like to keep things as 'photorealistic' as possible, though it doesn't really have to be perfect - I just don't want jagged edges or a cartoony appearance.
    Thanks for any recommendations. 

    Thanks for the replies.
    In this situation, I'm just talking about stills.
    After some experimenting, I've found that Mesh Warp and a tiny bit of Bulge can work for some photos in After Effects, while Puppet Warp and Liquify are useful in Photoshop. None of these (at least in my hands) is as simple or consistent a solution as I'd like, but in combination they're good enough to give me better results than using unprocessed stills.
    Thanks for the suggestions.

  • How do I find all possible positions?

    If I have 3 positions and 3 numbers, I know there are 6 possible solutions (with a different number in each position).
    If I have 4 positions and 4 numbers, there are 24 possible solutions (with a different number in each position).
    I know the formula for the number of different arrangements is n!, where n is the number of positions (and is the ! called "shriek"?).
    5! = 5 * 4 * 3 * 2 * 1 = 120 different arrangements
    But how do I find out what all the different arrangements are (i.e. to print out a list)?
    I just can't work out how to write an algorithm for this.
    Maybe there is somewhere on the web I can find out, but Google won't let me search for "!" and I don't know the real name for it.

    I dashed this off. I can't read the one posted above without my eyes bleeding. This one uses recursion.
    import java.util.*;
    public class Test
    {
        public static void main(String[] args)
        {
            for (int[] array : permutations(5)) print(array);
        }
        static void print(int[] array)
        {
            for (int i = 0; i < array.length; i++) {
                System.out.print(array[i]);
                System.out.print(' ');
            }
            System.out.println();
        }
        static List<int[]> permutations(int number)
        {
            int[] input = new int[number];
            for (int i = 1; i <= number; i++) input[i - 1] = i;
            return permutations(input, 0);
        }
        static List<int[]> permutations(int[] input, int from)
        {
            List<int[]> bigPerm = new ArrayList<int[]>();
            for (int i = from; i < input.length; i++) {
                int[] next = new int[input.length];
                System.arraycopy(input, 0, next, 0, from);
                next[from] = input[i];
                for (int j = from + 1; j < input.length; j++) {
                    next[j] = input[(j <= i) ? j - 1 : j];
                }
                if (from < input.length - 1) {
                    bigPerm.addAll(permutations(next, from + 1));
                } else {
                    bigPerm.add(next);
                }
            }
            return bigPerm;
        }
    }
