Kinect Skeleton Engine

Hey guys,
I recently got an XBox Kinect and downloaded the Microsoft SDK for Win7 with the intention of using it with LabVIEW.  After a little experimentation I got stuck until I saw the presentation at NI Week by the folks from LabVIEWHacker.com.  Thanks to that sharp group I got a whole lot further.  I am very impressed with this cheap, little gadget!
I've attached two examples that use the Skeleton Engine of the Kinect.  They were tricky to get working.  They are simple examples but, hopefully, they show how to build more sophisticated applications.
Good luck!
Attachments:
Kinect Expt.zip 63 KB
Kinect QuickDrop Daemon.zip 78 KB

davylee,
I haven't looked at this example in a while, and I haven't tried what you're asking.  But try the following:
1.  Look inside SkelFrmRdy_EvntHdlr.vi.
2.  Notice that the property node for "SkeletonFrame" outputs an array of Skeleton References ("Skeletons"), so it should already contain as many references as Skeletons.  If you have two people in the frame the array of Skeleton References should be two elements long.
3.  Notice that I index through all the Skeleton References.
4.  For each Skeleton Reference I read a property called "Joints".
5.  For each "Joints" (same as "JointsCollection") reference I'm invoking a method called "get_Item".
6.  You'll notice the "get_Item" method takes an argument "i" which is a Ring constant currently set to "HandRight".
7.  Try changing "HandRight" to "HandLeft" and see what that does.
Hope that helps.  I'm on the road and currently don't have access to a Kinect to try this out.
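For anyone following the same steps in C# rather than LabVIEW, this is roughly the call sequence the block diagram performs.  A sketch only, using the beta-era managed API names (SkeletonData, JointID); newer SDK releases rename some of these (e.g. JointType):
// Sketch: index through every tracked skeleton and read one joint,
// exactly as steps 3-7 describe.  Swap HandRight for HandLeft to move the point.
foreach (SkeletonData skeleton in skeletonFrame.Skeletons)
{
    if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
        continue;

    Joint hand = skeleton.Joints[JointID.HandRight];   // the "get_Item" call
    Console.WriteLine("x={0} y={1} z={2}", hand.Position.X, hand.Position.Y, hand.Position.Z);
}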

Similar Messages

  • Relationship between angle (Stepper motor) and distance (Kinect)?

Hi, I am doing a project consisting of a stepper motor and a Kinect. I want the stepper motor to rotate according to the distance of a
body joint recognised by the Kinect. I want to cast a laser pointer on the left hand of a person at all times. If the person moves further away from the camera, the stepper motor moves so that the laser dot stays on the hand.

The Kinect will have no knowledge of your external system; you have to develop how that would work. The Kinect skeleton tracking APIs will provide the joint distances from the sensor in "world space", in meters. Each skeleton object has a Position
property that gives you the center of the person (up to 6). If you want specific joint detail, you can get full joint information for up to 2. See the skeleton tracking samples in the SDK Toolkit Browser (Skeleton Basics, Kinect Explorer, etc.)
    https://msdn.microsoft.com/en-us/library/hh973074.aspx
    Carmine Sirignano - MSFT
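    As a rough illustration of what the answer describes (Kinect v1 SDK, names taken from the Skeleton Basics sample; the pan-angle part is an assumption about how the stepper might be driven), the left-hand joint already comes back in meters relative to the sensor:
    // Sketch: read the left hand in sensor ("world") space and derive a pan angle for the stepper.
    foreach (Skeleton skeleton in skeletons)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            continue;

        SkeletonPoint hand = skeleton.Joints[JointType.HandLeft].Position;   // meters from the sensor
        double panRadians = Math.Atan2(hand.X, hand.Z);    // angle left/right of the sensor axis
        double panDegrees = panRadians * 180.0 / Math.PI;  // feed this to the stepper controller
    }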

  • The type or namespace name 'Skeleton' could not be found in V2.0 kinect

    I am using Kinect v2.0; I installed and added the toolkit and used this code.
    It then shows an error on the "Skeleton" class, but the same code already works with v1.8.
    public interface ICommonProperty
    {
        Skeleton currentSkeleton { get; set; }
    }

    Hi Achal,
    the Skeleton class from v1.8 was replaced with the Body class in v2.0. Just have a look at the Body example in the SDK Browser to see how to use it.
    There is also a tutorial covering all the sources introduced in v2.0 on Channel9:
    http://channel9.msdn.com/Series/Programming-Kinect-for-Windows-v2/02
    Greets!
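    For reference, a minimal v2 sketch of the replacement, based on the Body Basics sample (adapt the field names to your project):
    // v2: bodies replace skeletons; joint positions are CameraSpacePoints in meters.
    private Body[] bodies;
    private BodyFrameReader reader;

    void StartTracking(KinectSensor sensor)
    {
        this.reader = sensor.BodyFrameSource.OpenReader();
        this.reader.FrameArrived += Reader_FrameArrived;
    }

    void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;
            if (this.bodies == null) this.bodies = new Body[frame.BodyCount];
            frame.GetAndRefreshBodyData(this.bodies);

            foreach (Body body in this.bodies)
            {
                if (!body.IsTracked) continue;
                CameraSpacePoint head = body.Joints[JointType.Head].Position;   // meters
            }
        }
    }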

  • Skeleton tracking issue with kinect studio v2

    Hello,
    We have an issue related to skeleton tracking using Kinect Studio v2. The skeleton does not show when we play back a stream in connected mode. Note that when we recorded the stream, the Nui Body Frame, Nui Body Index, Nui Depth, and Nui IR streams were selected.
    Is there a way to show the skeleton in connected mode during playback with Kinect Studio?
    Thanks for your help,

    Connected mode will regenerate body data based on those data streams. Recording body data in the capture is only used with VGB. You need to ensure that you have captured the "lock-on" state for body tracking. Best practice is to cover the
    sensor and then start your recording. When you see the body lock on, you have captured the lock-on state; then perform the action you need and stop recording.
    Carmine Sirignano - MSFT

  • Kinect Explorer WPF C# read depth data from the file and conversion to skeleton

    Hello,
    I am using a Kinect sensor to do some research on biometrics. Since it is impossible to find people to collect samples from, I would like to use samples from the TUM-GAID database, which contains Color View and Depth View samples (in .raw format). Therefore,
    I need to read these samples from file and do batch processing, instead of performing real-time collection of samples (where I can automatically obtain the skeleton view using the Kinect SDK). So, I need help on:
    1. Reading the data from file (instead of activating the Kinect sensor) and giving this data as input for conversion to a skeleton
    2. Reading the .raw files
    Thank you very much
    Kind regards

    Hiya,
     I doubt this forum, C#, will provide much help. You are asking about the Kinect system, not the C#
    language. However, you can try this link: http://kinectforwindows.codeplex.com/
    Hope this helps. Thanks :)

  • How to control Get3DShape() in Microsoft Kinect face tracking SDK?

    I'm working on a lip-reading project and I'm using the Microsoft Kinect face tracking SDK in order to extract lip features into a text file generated from C#, to be processed in MATLAB. The features are 29 angles obtained by calculating the distances between
    the 18 feature points related to the lips, using the Get3DShape() function to get the coordinates in each frame. My code, which I added in FaceTrackingViewer.cs, looks like this:
    public void getangles(FaceTrackFrame fr)
    {
    var shape = fr.Get3DShape();
    var X_7 = shape[FeaturePoint.MiddleTopDipUpperLip].X;
    var Y_7 = shape[FeaturePoint.MiddleTopDipUpperLip].Y;
    var Z_7 = shape[FeaturePoint.MiddleTopDipUpperLip].Z;
    var X_31 = shape[FeaturePoint.OutsideRightCornerMouth].X;
    var Y_31 = shape[FeaturePoint.OutsideRightCornerMouth].Y;
    var Z_31 = shape[FeaturePoint.OutsideRightCornerMouth].Z;
    var X_33 = shape[FeaturePoint.RightTopDipUpperLip].X;
    var Y_33 = shape[FeaturePoint.RightTopDipUpperLip].Y;
    var Z_33 = shape[FeaturePoint.RightTopDipUpperLip].Z;
    var X_40 = shape[FeaturePoint.MiddleTopLowerLip].X;
    var Y_40 = shape[FeaturePoint.MiddleTopLowerLip].Y;
    var Z_40 = shape[FeaturePoint.MiddleTopLowerLip].Z;
    var X_41 = shape[FeaturePoint.MiddleBottomLowerLip].X;
    var Y_41 = shape[FeaturePoint.MiddleBottomLowerLip].Y;
    var Z_41 = shape[FeaturePoint.MiddleBottomLowerLip].Z;
    var X_64 = shape[FeaturePoint.OutsideLeftCornerMouth].X;
    var Y_64 = shape[FeaturePoint.OutsideLeftCornerMouth].Y;
    var Z_64 = shape[FeaturePoint.OutsideLeftCornerMouth].Z;
    var X_66 = shape[FeaturePoint.LeftTopDipUpperLip].X;
    var Y_66 = shape[FeaturePoint.LeftTopDipUpperLip].Y;
    var Z_66 = shape[FeaturePoint.LeftTopDipUpperLip].Z;
    //some stuff
    float[] point1 = new float[3] { X_7, Y_7, Z_7 };
    float[] point2 = new float[3] { X_64, Y_64, Z_64 };
    float[] point3 = new float[3] { X_41, Y_41, Z_41 };
    float[] point4 = new float[3] { X_80, Y_80, Z_80 };
    float[] point5 = new float[3] { X_88, Y_88, Z_88 };
    float[] point6 = new float[3] { X_31, Y_31, Z_31 };
    float[] point7 = new float[3] { X_87, Y_87, Z_87 };
    float[] point8 = new float[3] { X_40, Y_40, Z_40 };
    float[] point9 = new float[3] { X_89, Y_89, Z_89 };
    float[] point10 = new float[3] { X_86, Y_86, Z_86 };
    float[] point11 = new float[3] { X_85, Y_85, Z_85 };
    float[] point12 = new float[3] { X_82, Y_82, Z_82 };
    float[] point13 = new float[3] { X_84, Y_84, Z_84 };
    float[] point14 = new float[3] { X_66, Y_66, Z_66 };
    float deltax = point2[0] - point1[0];
    float deltay = point2[1] - point1[1];
    float deltaz = point2[2] - point1[2];
    float cetax = point3[0] - point2[0];
    float cetay = point3[1] - point2[1];
    float cetaz = point3[2] - point2[2];
    float notex = point3[0] - point1[0];
    float notey = point3[1] - point1[1];
    float notez = point3[2] - point1[2];
    float deltax1 = point5[0] - point4[0];
    float deltay1 = point5[1] - point4[1];
    float deltaz1 = point5[2] - point4[2];
    float deltax2 = point5[0] - point3[0];
    float deltay2 = point5[1] - point3[1];
    float deltaz2 = point5[2] - point3[2];
    //some stuff
    float distance = (float)
    Math.Sqrt((deltax * deltax) + (deltay * deltay) + (deltaz * deltaz));
    float distance1 = (float)
    Math.Sqrt((cetax * cetax) + (cetay * cetay) + (cetaz * cetaz));
    float distance2 = (float)
    Math.Sqrt((notex * notex) + (notey * notey) + (notez * notez));
    float distance3 = (float)
    Math.Sqrt((deltax1 * deltax1) + (deltay1 * deltay1) + (deltaz1 * deltaz1));
    float distance4 = (float)
    Math.Sqrt((deltax2 * deltax2) + (deltay2 * deltay2) + (deltaz2 * deltaz2));
    float distance5 = (float)
    Math.Sqrt((deltax3 * deltax3) + (deltay3 * deltay3) + (deltaz3 * deltaz3));
    float distance6 = (float)
    Math.Sqrt((deltax4 * deltax4) + (deltay4 * deltay4) + (deltaz4 * deltaz4));
    float distance7 = (float)
    Math.Sqrt((deltax5 * deltax5) + (deltay5 * deltay5) + (deltaz5 * deltaz5));
    float distance9 = (float)
    Math.Sqrt((deltax6 * deltax6) + (deltay6 * deltay6) + (deltaz6 * deltaz6));
    float distance8 = (float)
    Math.Sqrt((deltax7 * deltax7) + (deltay7 * deltay7) + (deltaz7 * deltaz7));
    //some stuff
    double feature = CalcAngleB(distance, distance2, distance1);
    double feature2 = CalcAngleB(distance2, distance, distance);
    double feature3 = CalcAngleB(distance, distance1, distance2);
    double feature4 = CalcAngleB(distance3, distance5, distance4);
    double feature5 = CalcAngleB(distance5, distance4, distance3);
    double feature6 = CalcAngleB(distance4, distance3, distance5);
    double feature7 = CalcAngleB(distance6, distance, distance7);
    double feature8 = CalcAngleB(distance10, distance8, distance9);
    double feature9 = CalcAngleB(distance9, distance10, distance8);
    double feature10 = CalcAngleB(distance8, distance9, distance10);
    //some stuff
    using (System.IO.StreamWriter file = new System.IO.StreamWriter(@"D:\angles.txt", true))
    {
        file.WriteLine("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20},{21},{22},{23},{24},{25},{26},{27},{28}", feature, feature2, feature3, feature4, feature5, feature6, feature7, feature8, feature9, feature10, feature11, feature12, feature13, feature14, feature15, feature16, feature17, feature18, feature19, feature20, feature21, feature22, feature23, feature24, feature25, feature26, feature27, feature28, feature29);
    }
    }
    Where CalcAngleB is a function declared above (the law of cosines, returning the angle opposite side b, in radians):
    public double CalcAngleB(double a, double b, double c)
    {
        return Math.Acos((a * a + c * c - b * b) / (2 * a * c));
    }
    and I call getangles(fr) in:
    internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage, Skeleton skeletonOfInterest)
    {
        this.skeletonTrackingState = skeletonOfInterest.TrackingState;

        if (this.skeletonTrackingState != SkeletonTrackingState.Tracked)
        {
            // nothing to do with an untracked skeleton.
            return;
        }

        if (this.faceTracker == null)
        {
            try
            {
                this.faceTracker = new FaceTracker(kinectSensor);
            }
            catch (InvalidOperationException)
            {
                // During some shutdown scenarios the FaceTracker
                // is unable to be instantiated. Catch that exception
                // and don't track a face.
                Debug.WriteLine("AllFramesReady - creating a new FaceTracker threw an InvalidOperationException");
                this.faceTracker = null;
            }
        }

        if (this.faceTracker != null)
        {
            FaceTrackFrame frame = this.faceTracker.Track(
                colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);

            getangles(frame);

            this.lastFaceTrackSucceeded = frame.TrackSuccessful;
            if (this.lastFaceTrackSucceeded)
            {
                if (faceTriangles == null)
                {
                    // only need to get this once. It doesn't change.
                    faceTriangles = frame.GetTriangles();
                }
                //MessageBox.Show("face has been detected");
                this.facePoints = frame.GetProjected3DShape();
                //this.FaceRect = new System.Windows.Rect(frame.FaceRect.Left, frame.FaceRect.Top, frame.FaceRect.Width, frame.FaceRect.Height);
            }
        }
    }
    Now, whenever the mask appears (i.e. when this.faceTracker != null and the track succeeds), the code starts to calculate the angles and generate the text file. My question is: how can I add a control (e.g. a timer or button) to decide exactly when to start calculating features or generating
    the file? For data collection I need the features to be recorded exactly when the speaker starts to pronounce the phrases, not when the mask appears. I have tried to add a DispatcherTimer so that, once the timer elapses after the mask appears, it starts to calculate
    the angles, like this:
    private static void dispatcherTimer_Tick(object sender, EventArgs e)
    {
        Microsoft.Kinect.Toolkit.FaceTracking.FaceTrackFrame fr;
        fr = facetracker.Track(sensor.ColorStream.Format, colorPixelData,
            sensor.DepthStream.Format, depthPixelData, skeleton);
        //getangles(fr) and stream writer
    }
    and added this to getangles(fr)
    System.Windows.Threading.DispatcherTimer dispatcherTimer = new System.Windows.Threading.DispatcherTimer();
    dispatcherTimer.Tick += dispatcherTimer_Tick;
    dispatcherTimer.Interval = new TimeSpan(0, 0, 10);
    dispatcherTimer.Start();
    but this gives an error in: var shape = frame.Get3DShape(); about using the unassigned variable frame, and I tried to declare facetracker inside the timer method but that gives me an error too. So, if you have an idea how I can add a timer or a button to control when the
    angles are calculated, I would be grateful.
    Thanks in advance.

    Hi Mesbah15,
    I am not sure if I’ve understood the question correctly. You can try using an event to communicate between your classes: define an event, and when the mask appears just raise it and do the work in the event handler.
    https://msdn.microsoft.com/en-us/library/17sde2xt(v=VS.100).aspx
    If the event handler approach does not help, please post more information about your scenario for better understanding.
    Regards,
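    One concrete way to do what the reply suggests, sketched with made-up names (recordFeatures, StartButton_Click): keep face tracking running, but gate the angle calculation and file writing behind a flag that a button (or an event raised elsewhere) toggles, so logging starts exactly when the speaker starts the phrase:
    private volatile bool recordFeatures = false;   // hypothetical flag

    private void StartButton_Click(object sender, RoutedEventArgs e)
    {
        recordFeatures = true;    // speaker has started pronouncing the phrase
    }

    private void StopButton_Click(object sender, RoutedEventArgs e)
    {
        recordFeatures = false;   // stop logging angles
    }

    // first lines inside getangles(FaceTrackFrame fr):
    if (!recordFeatures)
    {
        return;   // the mask may be tracked, but we are not logging yet
    }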

  • Limit the number of skeletons

    I have an application based on the Body Basics example from the SDK.
    As you know, the Kinect v2 tracks many skeletons simultaneously. My application uses the joint points to do some calculations, but if more than one skeleton appears at the same time, I get the wrong result.
    So I need a solution; I have thought of two:
    - Limit the number of skeletons to 1, so that only the first skeleton that appears stays tracked.
    or
    - Limit the capture distance, so that only the closest skeleton is captured.
    I prefer the first one, but if someone has another idea it will be welcome.
    Sorry for my bad English, and thank you.

    Hi
    You can try this step.
    After you run SE16, you need to change your output list to ALV Grid Display.
    It's in the menu under Settings -> User Parameters.
    Then you choose which fields you want to display. After that, do Save Variant -> there will be a checkbox to indicate that this will be the default setting. Check this box.
    Try running SE16 in the background.
    Please note that you need to remove the check from the default setting later, after you finish, because this layout applies to all SAP tables.
    Hope this helps
    Best Regards
    Caroline
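    For reference, a minimal sketch of the first option the original poster describes (lock onto one body and ignore the rest), assuming a Body Basics style frame handler; the name lockedTrackingId is made up for illustration:
    private ulong lockedTrackingId = 0;   // 0 = no body locked yet

    private Body SelectBody(Body[] bodies)
    {
        // keep following the body we locked onto first, if it is still tracked
        foreach (Body body in bodies)
        {
            if (body.IsTracked && body.TrackingId == lockedTrackingId)
                return body;
        }

        // otherwise lock onto the first tracked body that shows up
        foreach (Body body in bodies)
        {
            if (body.IsTracked)
            {
                lockedTrackingId = body.TrackingId;
                return body;
            }
        }

        lockedTrackingId = 0;   // nobody in view
        return null;
    }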

  • Engineering simulations and quadtree

    Hi all, I want to make an engineering simulation about heat transfer and I need to learn how to build this kind of application. I have studied computer graphics, game development, and BSP trees, but that is not enough to model solid shapes so that I can map a temperature scale onto a shape by colouring it. I think this is related to quadtrees. What I want to do is
    create a skeleton algorithm.
    Am I on the right track? If so, how can I find tutorials, ebooks, or code examples about quadtrees? Or is there another way to build this sort of application? Where should I start? What steps should I follow?

    Please, do not create double posts. That is not appreciated at all.

  • Maximum number of Kinect for Windows v2 Cameras on one PC?

    There is a question about the previous iteration of the Microsoft Kinect for Windows (no longer on sale) here: https://social.msdn.microsoft.com/Forums/en-US/63c7c52e-1290-452e-8e04-74330822d16f/maximum-number-of-kinect-cameras-on-one-pc?forum=kinectsdk.
    How many Kinect for Windows v2 sensors can be used on a single PC? And if this is limited to 1, is this a software, or a hardware restriction?

    Are there plans for this to change, or be lifted?
    No.  No product roadmap anyway.
    The reasoning requires you to look at it from an engineering standpoint.  The Kinect uses a lot (more than half) of the available bus bandwidth to operate normally.  So even though you could physically connect two or more sensors, there is no feasible
    way to have them both sustain enough of a data rate for them to operate normally using the existing hardware.
    I'm no hardware expert, but from what I have been told, the solution is harder than just having multiple USB 3.0 controllers in the same PC.
    As for "plans for this to change", rest assured that people want this, and that there are clever engineers thinking about solutions, but due to technical limitations, there is no product roadmap yet that allows for multiple kinects sensors on a
    single PC.  And because of the technological limitations, it will never just be "lifted".  The hardware needs to change for this to work.
    Another thing to consider is that, from a physics standpoint, Kinects
    do interfere with each other (slightly) when operating in the same space.  It's not at all dissimilar from two people trying to have a conversation in the same room -- some words of the conversation will be missed.  The more people in the
    room, the more difficult it is to hear one another.  For the Kinect, the quality of the depth results, for example, is diminished by having two Kinects operating in the same space, but the visible light cameras continue to operate unimpeded.  The
    sensor is cleverly designed to "tolerate" the light patterns from other Kinects, but they can't completely ignore those patterns and conflicts do occur -- so it
    does degrade slightly with noise from each additional sensor.  It's using invisible light, so the problem tends to be "out of sight, out of mind" for us as users.  But at some (modest) limit, it becomes mathematically impossible
    to multiplex the light patterns from additional Kinects using the current specs (somewhere around 16 Kinects based on the patterns, IIRC).
    So there are other physical and technological hurdles to overcome with operating multiple Kinects in the same space, even if they were all connected to different PCs connected in a network cluster or something like that.

  • Distance Between user and The kinect Device

    Hey all :)
    how do I get the distance between the user and the Kinect?
    I found a function, but it is for Kinect v1:
    public static double GetSkeletonDistance(this Skeleton skeleton)
    {
        return (skeleton.Position.X * skeleton.Position.X) +
               (skeleton.Position.Y * skeleton.Position.Y) +
               (skeleton.Position.Z * skeleton.Position.Z);
    }
    I don't know how to do the same under Kinect v2.

    Joint head = body.Joints[JointType.hipCentre];
    return head.Position.Z;
    it always returns 0 :/
    There is no hipCentre in the Kinect v2 SDK. Have a look at the
    JointType enumeration; there you can see all the available joint types. I am using
    SpineBase, because users may lean from time to time. So
    return body.Joints[JointType.SpineBase].Position.Z;
    should give you a nice distance. Keep in mind that even the smallest changes will affect this value, so if you want to check for a distance you might want to use a threshold or even joint accumulation.
    Best regards,
    Alex
    Edit: Please mark Brekel's answer as the answer. Thanks.
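    If you want the full Euclidean distance rather than just the Z component, the v1 extension method above translates roughly to this v2 sketch:
    public static double GetBodyDistance(this Body body)
    {
        CameraSpacePoint p = body.Joints[JointType.SpineBase].Position;   // meters
        return Math.Sqrt((p.X * p.X) + (p.Y * p.Y) + (p.Z * p.Z));
    }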

  • Recording actions and gestures using Kinect v2

    Hello everyone, 
    I bought a Kinect v2 for recording different gestures and actions. I am very new to all the SDK stuff. I downloaded the Kinect SDK from "http://www.microsoft.com/en-us/download/details.aspx?id=44561".  Can you please guide me on how I can record
    RGB, depth, and skeleton data together for different actions and gestures for my project?
    Thank you

    Hi,
    Have a look at Visual Gesture Builder (an app installed with the SDK). There are also video tutorials:
    http://channel9.msdn.com/Blogs/k4wdev/Custom-Gestures-End-to-End-with-Kinect-and-Visual-Gesture-Builder
    http://channel9.msdn.com/Blogs/k4wdev/Custom-Gestures-End-to-End-with-Kinect-and-Visual-Gesture-Builder-part-2-
    If you want to record "video streams" for another purpose (e.g. video files), you will have to write your own code capturing data from the sensor. You can find some info in this
    post.
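    A minimal sketch of "your own code capturing data from the sensor" with the v2 SDK (MultiSourceFrameReader); writing the frames to disk is left to you:
    KinectSensor sensor = KinectSensor.GetDefault();
    sensor.Open();

    MultiSourceFrameReader reader = sensor.OpenMultiSourceFrameReader(
        FrameSourceTypes.Color | FrameSourceTypes.Depth | FrameSourceTypes.Body);

    reader.MultiSourceFrameArrived += (s, e) =>
    {
        MultiSourceFrame frame = e.FrameReference.AcquireFrame();
        if (frame == null) return;

        using (ColorFrame color = frame.ColorFrameReference.AcquireFrame())
        using (DepthFrame depth = frame.DepthFrameReference.AcquireFrame())
        using (BodyFrame body = frame.BodyFrameReference.AcquireFrame())
        {
            // copy colour pixels, depth values and joint positions here,
            // e.g. into your own files, one record per frame
        }
    };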

  • Skeleton missing in replay

    I have a problem trying to replay a recording I did with Kinect Studio (version 2.0.1409.10000).
    I recorded all streams (except the audio stream) and when I open the .xef file and just hit play then I can see all the data, especially the skeleton data in Kinect Studio.
    But when I then try to pipe this recording into a program I have written by pressing the "Connect to service" button in the "PLAY" tab, the skeleton data will vanish and I only get the color stream and the depth data.
    (If I then disconnect from the service, the missing data will show up in Kinect Studio again)
    Now the odd thing is that this happens only for some recordings I have done (all of them with the same settings: all streams recorded except audio).
    Could anyone provide some insights as to why this happens?

    Did you record both depth and IR in the recording? Kinect Studio regenerates body data when in connected mode. Recording the body data in the clip is only used while using VGB.
    Additionally, you should be following the best practice of covering the sensor before starting to record your clip to ensure you always have a deterministic lock-on to that body since you are always feeding it the same frames over and over.
    Carmine Sirignano - MSFT

  • Effects of height in kinect setup

    hi.
    If you place the Kinect at half the height of the same table, what exactly happens to skeleton tracking and the field of view?
    For example, normally I place the Kinect on an 80 cm table. What happens to the skeleton measurements if I place the Kinect on a 40 cm table?

    Placement of the sensor will affect what the values are, but you are still getting the data. Skeleton tracking gives a world position in relation to the camera. If my hand is vertically level with the sensor at 80 cm, then when the sensor drops to 40 cm, my hand location
    would now read 40 cm higher.
    Keep in mind that the field of view of the v1 sensor may restrict where you put it, and may require that you use the elevation APIs to help move the sensor to "get a better view".
    Carmine Sirignano - MSFT

  • Kinect with labview

    I downloaded the 'Kinesthesia Toolkit for Microsoft Kinect' and tried example_Kinect.vi. I wonder whether there is any way to save the video the Kinect acquires. Thanks.

    Hello,
    There is actually a site that supports that product more closely and I think you might find better luck posting this question there.
    https://decibel.ni.com/content/groups/labview-leeds-mechanical-engineering
    Karen M
    Applications Engineer
    National Instruments

  • Find the depth of head from the kinect sensor

    Hi,
    I'm trying to get the actual depth of the head from a Kinect sensor positioned 82 inches from the floor and tilted to a 40 degree angle.  I used the skeleton joints' head point and right-angled triangle trigonometry, but am unable to get an accurate measurement.
    I considered Joint[JointType.Head].Position.Z as the distance, but the manually measured value is not the same. Why is it different from the manual measurement?
    Any help would be appreciated!
    Mani

    Body tracking provides estimates based on the body tracking algorithms. These include some offset into the head, and depending on angles, noise, and the tracking state of the joint you may need to average the value out. You can also take the joint point
    as a starting point, then analyze the actual depth data and calculate your own value if you need better accuracy.
    Carmine Sirignano - MSFT
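    For reference, a rough sketch of the trigonometry, assuming the sensor is pitched down by exactly 40 degrees with no roll (camera-space Y is up, Z is forward along the optical axis):
    const double tilt = 40.0 * Math.PI / 180.0;   // sensor pitch below horizontal
    const double sensorHeight = 82.0 * 0.0254;    // 82 inches, in meters

    // head = the head joint position in camera space (X/Y/Z in meters)
    double horizontalDistance   = head.Z * Math.Cos(tilt) + head.Y * Math.Sin(tilt);
    double headHeightAboveFloor = sensorHeight + head.Y * Math.Cos(tilt) - head.Z * Math.Sin(tilt);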
