Modern smartphones have helped shed light on the power of user interfaces that are driven by gesture and touch. It's increasingly clear that touch will play a prominent role in the future of computing, but there are still challenges that make it difficult to bring the advantages of touch-enablement to conventional desktop form factors.

Depth cameras and 3D position trackers seem to offer a particularly promising route to ubiquitous touch interaction. The latest generation of the technology is still far from delivering the Minority Report ideal, but it shows a lot of potential. Last year, Ars reviewed the Leap Motion controller, a device that uses high-precision motion capture and finger tracking to provide gesture input on any standard desktop computer. The Leap Motion controller, which plugs into a standard USB port, relies on built-in cameras and infrared LEDs to capture and analyze finger and hand movement. The Leap Motion software processes the image data, translating it into gestures and touch events.

Leap Motion provides an extensive SDK that makes it easy for developers to incorporate support for the device into their own applications. It supports several different programming languages across a number of platforms. In addition to native desktop software, Leap Motion also offers a JavaScript library that can be used to build Leap-compatible websites that work in conventional Web browsers.

Working with the Leap Motion SDK is very easy and rewarding. It abstracts away much of the controller's complexity, exposing the hand and finger tracking data through high-level APIs.

In this tutorial, we'll describe how to build front-end Web applications that take advantage of Leap Motion tracking. We'll start by showing how to render finger points on an HTML Canvas element before demonstrating how to use the Pixi.js graphics library to make a simple 2D game with Leap Motion controls.

The Leap Motion JavaScript library relies on WebSockets to expose data from the controller to the user's Web browser. The WebSockets standard was designed to allow JavaScript code running in a webpage to establish a persistent connection to a remote server, a feature that's typically used to build browser-based chat clients and other real-time Web applications.

When a user sets up his or her Leap Motion device and installs the accompanying software and drivers, one of the built-in software components included in the installation is a lightweight WebSocket server that runs in the background on the user's computer. The data captured by the Leap Motion controller is pumped through the WebSocket server, making it easy to consume within a Web browser without requiring special browser plugins. The Leap Motion JavaScript library connects to the local WebSocket server, captures the data, and wraps it with some simple APIs that are very easy to use.

To get started, let's create a webpage that loads the Leap Motion JavaScript library, obtains data from the device, and logs some of that data in the browser's debug console. In the head element, there is a script tag that downloads the Leap Motion JavaScript library from the company's CDN. Leap Motion provides a minified version for production use and a non-minified version for development purposes. We're using the latter in this case because it will make it easier to step through the code if we need to use the browser's JavaScript debugger. You can visit the Leap Motion website to find the URL for both versions.

In the second script tag, which is beneath the page body, we use the Leap.loop method to capture data from the device. The Leap Motion drivers emit "frames" of data, which are processed snapshots of the controller's video stream. The software produces roughly 30 frames every second, providing applications with a constant stream of information. The anonymous function that is passed into the loop will execute every time a new frame is available.

The Leap Motion APIs make it easy to determine the position of hands, fingers, and tools. A "tool" is a long implement, such as a pencil, that the user holds in the air. In Leap Motion parlance, the generic term "pointable" is used to describe something that is either a tool or a finger. The frame object has a property called pointables that exposes an array of pointable objects that are visible in the frame.

The example outputs the array of pointables to the console in each frame.

*Examining arrays of pointable objects in the Firefox developer console.*

If you examine the pointable objects that it pushes to the console, you can see the various properties that are used to expose information about the pointable length and width, the spatial coordinates of the pointable tip, and the rate at which the pointable tip is moving.
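The frame-logging approach walked through above can be sketched in a few lines. This is a minimal sketch, not the article's original listing: it assumes the leapjs library has been loaded via a script tag in the page head, and while `Leap.loop` and the pointable properties (`length`, `width`, `tipPosition`, `tipVelocity`) follow the documented JavaScript API, the `describePointables` helper name is our own.

```javascript
// Hypothetical helper: summarize the pointables visible in one frame.
// Property names follow the Leap Motion JavaScript API.
function describePointables(frame) {
  // frame.pointables is an array of the fingers and tools in the frame
  return frame.pointables.map(function (p) {
    return {
      length: p.length,      // physical length of the pointable, in mm
      width: p.width,        // physical width of the pointable, in mm
      tip: p.tipPosition,    // [x, y, z] coordinates of the pointable tip
      speed: p.tipVelocity   // [x, y, z] velocity of the tip, in mm/s
    };
  });
}

// Wire the helper up to the device when the library is present.
if (typeof Leap !== "undefined") {
  Leap.loop(function (frame) {
    // The callback runs once per processed frame, roughly 30 times a second
    console.log(describePointables(frame));
  });
}
```

In a real page, this script would sit in the second script tag beneath the page body, as described above.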
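For illustration, the local WebSocket stream described above can also be consumed directly, without the library. This is a sketch under assumptions: the background service has historically listened on `ws://127.0.0.1:6437` and emitted JSON messages in which frame messages carry a `pointables` array, but both details should be treated as unverified here, and the official library is normally preferable because it wraps them.

```javascript
// Hypothetical helper: keep only frame messages, dropping other traffic
// (such as the initial version handshake) that the service may send.
function parseFrameMessage(text) {
  var msg = JSON.parse(text);
  return Array.isArray(msg.pointables) ? msg : null;
}

// Connect directly to the local Leap service (call this from a page).
// The official leapjs library does the equivalent internally.
function connectToLeapService(onFrame) {
  var ws = new WebSocket("ws://127.0.0.1:6437"); // assumed default port
  ws.onmessage = function (event) {
    var frame = parseFrameMessage(event.data);
    if (frame) onFrame(frame); // deliver only genuine frame messages
  };
  return ws;
}
```

Because the service runs on localhost, no browser plugin is involved; any page that can open a WebSocket can receive the tracking stream.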