Faceium – Face Tracking
Jufin P. A1, Amrutha N2
1Jufin P A, Department of Computer Science, St. Albert’s College (Autonomous), Ernakulam, India.
2Amrutha N, Department of Computer Science, St. Albert’s College (Autonomous), Ernakulam, India.
Manuscript received on 18 July 2022 | Revised Manuscript received on 05 August 2022 | Manuscript Accepted on 15 August 2022 | Manuscript published on 30 August 2022 | PP: 1-4 | Volume-2 Issue-5 August 2022 | Retrieval Number: 100.1/ijdcn.B39231212222 | DOI: 10.54105/ijdcn.B3923.082522
© The Authors. Published by Lattice Science Publication (LSP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Human faces are identified and localized via facial detection, which ignores background objects such as curtains, windows, and trees. Each frame of the video is run through the multiple stages of classifiers in OpenCV's Haar cascade; if the frame passes every stage, a face is considered present, otherwise the frame is rejected by the classifier, meaning no face is detected. When a face is detected, OpenCV also returns the height and width of the detected region along with its Cartesian coordinates, from which the center coordinates of the face can be determined. Once the face is detected, these coordinates are sent to the Arduino UNO via the PySerial library. The camera is mounted on one of the servos connected to the Arduino, forming a pan/tilt mechanism. Whenever the face's coordinates are off-center, the servos adjust to bring the face toward the center of the frame.
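A minimal sketch of the detection-and-serial pipeline described above, assuming a Python environment with OpenCV and PySerial installed; the serial port name, baud rate, and the "cx,cy" message format are illustrative assumptions, not details taken from the paper.

import cv2
import serial

# Assumed serial port and baud rate for the Arduino UNO; adjust to the actual setup.
arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)

# Load OpenCV's bundled frontal-face Haar cascade classifier.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each frame is passed through the cascade's stages; detections survive all stages.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Center of the detected face bounding box, derived from x, y, width, height.
        cx, cy = x + w // 2, y + h // 2
        # Send the center coordinates to the Arduino, which drives the pan/tilt servos.
        arduino.write(f"{cx},{cy}\n".encode())
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('Faceium', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
arduino.close()

On the Arduino side, a sketch would parse each "cx,cy" line and nudge the pan and tilt servos in the direction that moves the face toward the center of the frame.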
Keywords: OpenCV, Haar Cascade, IoT, PySerial Library
Scope of the Article: Internet of Things