SixthSense is a gesture-based wearable computer system originally developed at the MIT Media Lab by Steve Mann in 1994 and further developed there by Pranav Mistry in 2009. Both researchers built hardware and software for head-worn and neck-worn versions of the system.
SixthSense technology builds on the concept of augmented reality: it is about perceiving information beyond what our natural senses provide. The technology aims to let the user interact with the digital world in the most direct and efficient way possible, so it can reasonably be described as a gateway between the digital and physical worlds.
A camera acts as a digital eye, seeing everything the user sees. It captures and recognizes objects in its view and tracks the user's hand gestures using computer-vision techniques, following the movements of the thumb and index finger of each hand.
The SixthSense setup includes an internet-enabled smartphone that processes the data sent from the camera. The smartphone sends and receives data and voice information from anywhere, to anyone, over the mobile internet. Software running on the smartphone supports the technology and handles the data connection; it is responsible for searching the web and interpreting hand gestures.
The smartphone interprets the data, and the resulting visual information is projected onto a surface, typically a wall or the user's body or hands. A tiny LED projector, powered by an internal battery with about three hours of life, projects this information onto surfaces and other physical objects, which then serve as interfaces. Because the projector hangs from the neck pointing downward, the image is first projected onto a mirror, which reflects it onto the desired surface. This final step frees digital information from its confines and places it in the physical world.
Color markers in red, green, blue, and yellow are placed on the fingertips to help the camera recognize hand gestures. The movements and spatial arrangements of these markers are interpreted as gestures, which in turn act as instructions for the projected application interfaces.
The methodology shown in the algorithm below is based on SixthSense technology: the user makes gestures with the finger-worn color markers to perform real-time actions whose reference images are preloaded in the program. Our aim is to move the mouse cursor as the user moves his or her fingers, to zoom images, and to capture photos using finger gestures. For this purpose, three components of SixthSense are used: a camera, colored caps, and MATLAB installed on a laptop.
The approach works continuously: the camera captures live video and sends it to the laptop, where MATLAB processes the input and recognizes the colors at the user's fingertips. The following figure shows the algorithm we used to move the mouse cursor on screen, capture images using gestures, and manipulate images (zoom in/out and rotate).
In our proposed methodology, the first interaction with the physical world is through the camera. The camera records live video and streams it to MATLAB, which is installed on the laptop connected to the camera. MATLAB code converts the incoming live video into frames of images; in other words, the video is sliced into individual images.
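The frame-slicing step can be sketched as follows. This is a minimal illustration, not the paper's MATLAB code; the frame source here is a simulated camera, and in practice frames would come from a camera driver.

```python
def slice_video(frame_source, max_frames=None):
    """Pull frames one at a time from a live source, as in video slicing.

    frame_source: any iterable yielding frames (here, a simulated camera).
    Returns the list of captured frames for downstream processing.
    """
    frames = []
    for i, frame in enumerate(frame_source):
        if max_frames is not None and i >= max_frames:
            break
        frames.append(frame)
    return frames

# Simulated camera: each "frame" is a small 2-D grid of (R, G, B) pixels.
fake_camera = iter([[[(0, 0, 0)] * 4 for _ in range(3)] for _ in range(10)])
frames = slice_video(fake_camera, max_frames=5)
print(len(frames))  # 5
```

Each captured frame is then passed to the color-recognition stage described next.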
These images obtained by slicing the video are then processed for color recognition. The output of this step is a set of images containing only the colors of the caps worn on the user's fingertips, together with the background and any shadow present; the fingers themselves do not appear in the output images. To achieve this, the RGB values of the cap colors are set in advance in the code, so that after color recognition no color other than the cap colors and the background is detected in the image.
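The color-recognition step can be sketched as simple per-pixel RGB thresholding. The ranges below are illustrative assumptions, not the values used in the paper, and real code would typically use a vision library rather than nested lists.

```python
# Illustrative RGB ranges for the four cap colors (assumed, not from the paper).
CAP_COLORS = {
    "red":    ((150, 255), (0, 80),    (0, 80)),
    "green":  ((0, 80),    (150, 255), (0, 80)),
    "blue":   ((0, 80),    (0, 80),    (150, 255)),
    "yellow": ((150, 255), (150, 255), (0, 80)),
}

def classify_pixel(pixel):
    """Return the cap-color name for a pixel, or None if it matches no cap."""
    r, g, b = pixel
    for name, ((rlo, rhi), (glo, ghi), (blo, bhi)) in CAP_COLORS.items():
        if rlo <= r <= rhi and glo <= g <= ghi and blo <= b <= bhi:
            return name
    return None

def mask_frame(frame):
    """Black out every pixel that is not one of the marker colors."""
    return [[px if classify_pixel(px) else (0, 0, 0) for px in row]
            for row in frame]

frame = [
    [(200, 30, 30), (90, 90, 90)],    # red cap, grey skin/background
    [(20, 200, 20), (220, 210, 40)],  # green cap, yellow cap
]
masked = mask_frame(frame)
print(masked[0][1])  # (0, 0, 0): non-cap pixel removed
```

Only the cap colors survive the masking, which is why the fingers and other scene content disappear from the output images.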
The output images are displayed continuously and at the same rate at which the video is sliced, so the result looks like a continuous movie whose input is the physical world and whose output shows only the colors at the user's fingertips. Each color is then associated with the mouse cursor in the code, so that whenever a color moves from one position to another in the output image, the cursor moves to the corresponding position. In the same way, the combination of yellow, green, blue, and red is detected, and by this action we can capture images. Various other finger-marker gestures are processed similarly, allowing the user to manipulate the image.
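The cursor-tracking and click-detection logic can be sketched as follows. The helper names and the frame-to-screen scaling are assumptions for illustration; the paper's actual MATLAB implementation is not reproduced here.

```python
def centroid(frame, target):
    """Mean (row, col) of all pixels matching the target color, or None."""
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, px in enumerate(row) if px == target]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

def to_screen(point, frame_size, screen_size):
    """Scale a frame coordinate to screen coordinates for the cursor."""
    (fr, fc), (sr, sc) = frame_size, screen_size
    return (point[0] * sr / fr, point[1] * sc / fc)

def is_click(frame, colors):
    """A 'click' gesture: all required marker colors are visible at once."""
    return all(centroid(frame, color) is not None for color in colors)

R, G, B, Y, BG = (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0), (0, 0, 0)
frame = [
    [BG, Y,  BG, BG],
    [G,  BG, B,  R],
    [BG, BG, BG, BG],
]
print(centroid(frame, Y))             # (0.0, 1.0)
print(is_click(frame, [Y, G, B, R]))  # True
```

Each new frame, the marker's centroid is recomputed and the cursor is moved to the scaled position, so the cursor follows the fingertip; when all four cap colors appear together, the capture action fires.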
SixthSense has numerous applications that can make our interaction with the digital world easier, faster, and more efficient.