Face Emoji Week 3

    By AI Club on 2/24/2025

    # Week 3: Facial Landmarks with MediaPipe


    Hello AI Club members! Welcome to Week 3 of our Face-Emoji project. Last week, we detected faces using MediaPipe's Face Detection. This week, we're diving deeper by exploring facial landmarks - the foundation for our future emotion detection!

    You will need this file: week3_code.py

    What We'll Learn Today


    - Understanding facial landmarks and their importance

    - Using MediaPipe Face Mesh to detect 468 facial landmarks

    - Visualizing the full face mesh and facial features

    - Exploring the landmark data structure


    Introduction to Facial Landmarks


    While last week we detected faces as simple rectangles, this week we're exploring the detailed structure of the face. MediaPipe's Face Mesh provides 468 precise points that map to specific facial features.


    These landmarks are the key to emotion detection because they allow us to:

    - Track subtle facial movements

    - Measure changes in facial expressions

    - Detect key facial features like eyes, nose, and mouth

    - Create a foundation for mapping expressions to emojis


    Understanding the Code


    The provided code extends Week 2's face detection by adding facial landmark detection. Let's understand what's new:


    1. Face Mesh Setup


    mp_face_mesh = mp.solutions.face_mesh
    face_mesh = mp_face_mesh.FaceMesh(
        max_num_faces=1,
        refine_landmarks=True,
        min_detection_confidence=0.5,
        min_tracking_confidence=0.5
    )


    We're initializing MediaPipe's Face Mesh, which will detect those 468 landmarks. The refine_landmarks=True parameter provides extra precision around the eyes and lips - important areas for emotion detection!
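    The drawing helpers used later in this post (mp_drawing and mp_drawing_styles) also come from MediaPipe's solutions module. If your starter file doesn't already define them, the setup looks like this:

```python
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils          # draw_landmarks() lives here
mp_drawing_styles = mp.solutions.drawing_styles  # default mesh/contour styles
```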


    2. Processing the Face Mesh

    mesh_results = face_mesh.process(rgb_frame)

    This processes our image to find facial landmarks, similar to how we detected faces last week. The results are stored in mesh_results.multi_face_landmarks.
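    One thing to watch out for: multi_face_landmarks is None whenever no face is in frame, so always check it before accessing landmarks. As a sketch, here's a small helper that safely pulls out one landmark (landmark index 1 sits at the tip of the nose in the 468-point mesh):

```python
def nose_tip_coords(mesh_results):
    """Return (x, y, z) of the nose-tip landmark, or None if no face was found.

    mesh_results is the value returned by face_mesh.process(rgb_frame).
    """
    if not mesh_results.multi_face_landmarks:
        return None  # no face in this frame
    face_landmarks = mesh_results.multi_face_landmarks[0]
    tip = face_landmarks.landmark[1]  # index 1 sits at the tip of the nose
    return (tip.x, tip.y, tip.z)
```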


    3. Your Task: Drawing the Landmarks


    We've left two important TODOs for you:

    # TODO 1: Draw the face mesh annotations on the image
    # TODO 2: Draw the face contours (eyes, lips, etc.)


    You'll need to complete these TODOs to visualize the facial landmarks.

    How to Complete the TODOs

    Here's exactly what you need to do:

    TODO 1: Draw the Face Mesh

    To draw the full face mesh (all 468 landmarks and their connections):

    mp_drawing.draw_landmarks(
        # Fill this out
    )


    The draw_landmarks() function takes several important parameters:

    - image: The image to draw on (your frame)

    - landmark_list: The detected landmarks for a face (your face_landmarks variable)

    - connections: Which connections to draw between landmarks - for the mesh, you'll want to use mp_face_mesh.FACEMESH_TESSELATION

    - landmark_drawing_spec: How to style the landmark points - you can set this to None to use defaults

    - connection_drawing_spec: How to style the connections between landmarks - use mp_drawing_styles.get_default_face_mesh_tesselation_style() for the mesh


    Take a look at the MediaPipe documentation to understand these parameters better. The tessellation draws connections between landmarks to create a "wireframe" of the face.
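    If you get stuck, here's one way the call could be filled in using the parameters described above. This snippet goes inside your loop over mesh_results.multi_face_landmarks, and it assumes your frame variable is called frame, as in the starter file:

```python
# TODO 1 (one possible completion): draw the full 468-point mesh
mp_drawing.draw_landmarks(
    image=frame,                   # the webcam frame to draw on
    landmark_list=face_landmarks,  # one face's detected landmarks
    connections=mp_face_mesh.FACEMESH_TESSELATION,
    landmark_drawing_spec=None,    # use the default point styling
    connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style()
)
```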


    TODO 2: Draw the Face Contours


    To draw the facial feature contours (outlines of eyes, eyebrows, nose, lips, etc.):

    mp_drawing.draw_landmarks(
        # Fill this out too
    )


    This function call is very similar to the first one, with the same parameters, but with two key differences:

    - For the connections parameter, you'll use mp_face_mesh.FACEMESH_CONTOURS instead

    - For the connection_drawing_spec, you'll use mp_drawing_styles.get_default_face_mesh_contours_style() to get a different style for the contours


    The FACEMESH_CONTOURS specifically focuses on the outlines of facial features like eyes, lips, eyebrows, and the face perimeter, making them stand out with a different color. This helps visualize the specific facial features we'll use for emotion detection.
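    Again, if you need a hint, a possible completion for TODO 2 follows the same pattern as TODO 1, swapping in the contour connections and style (variable names assumed to match the starter file):

```python
# TODO 2 (one possible completion): draw the facial feature contours
mp_drawing.draw_landmarks(
    image=frame,
    landmark_list=face_landmarks,
    connections=mp_face_mesh.FACEMESH_CONTOURS,  # eyes, lips, brows, face outline
    landmark_drawing_spec=None,
    connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style()
)
```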

    After making your changes, MAKE SURE your conda environment is activated before running the script (for Windows users), or run python3.12 week3_code.py if you're on a Mac.

    Documentation Resources

    For more information about MediaPipe Face Mesh and these functions, check out:


    1. MediaPipe Face Mesh Documentation:

       https://developers.google.com/mediapipe/solutions/vision/face_landmarker

    2. MediaPipe Drawing Utilities:

       https://github.com/google/mediapipe/blob/master/mediapipe/python/solutions/drawing_utils.py


    Understanding the Output


    Once you've implemented the TODOs and run the code, you'll see:

    - A mesh covering the entire face

    - Highlighted contours around facial features

    - Landmark data printed in the console every 3 seconds


    These landmarks are normalized coordinates, meaning they're relative to the image dimensions:

    - X ranges from 0 (left) to 1 (right)

    - Y ranges from 0 (top) to 1 (bottom)

    - Z represents relative depth (smaller values are closer to the camera)
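    Because the coordinates are normalized, you'll usually want to convert them back into pixels before drawing on a frame with OpenCV. A minimal helper could look like this (the 640x480 frame size below is just an example):

```python
def to_pixel_coords(x_norm, y_norm, image_width, image_height):
    """Convert normalized (0-1) landmark coordinates to pixel coordinates."""
    return int(x_norm * image_width), int(y_norm * image_height)

# On a 640x480 webcam frame, the normalized point (0.5, 0.5)
# maps to the center pixel:
print(to_pixel_coords(0.5, 0.5, 640, 480))  # (320, 240)
```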

    Looking Ahead

    These facial landmarks are the foundation for our emotion detection system. Next week, we'll start using these landmarks to detect different facial expressions, bringing us one step closer to our emoji mapping goal!


    Happy coding! 😊
