Dual-Camera Human Detection and Face Recognition System with Simultaneous Video Recording

------------------------------

This project implements a dual-camera human detection and face recognition system using a laptop's built-in camera and an external USB camera. It captures and records video from both cameras simultaneously, saving the recordings in separate files. The system detects human faces, captures images when new faces are identified, and logs each detection with a time-stamped image. Designed for use on an Ubuntu server, it operates via a local network, allowing users to access live camera feeds and saved data.

------------------------------

Windows 10/11 Setup for Dual-Camera Human Detection Project

Steps to Set Up AI + Camera Projects on Windows:

Step-by-Step Guide for Webcam Integration in VS Code

1. Install Python 3.10

Since you’re using Python 3.13, which is not yet supported by several of the libraries this project needs (most importantly dlib and face_recognition), you’ll need to downgrade to Python 3.10 first. Once you have Python 3.10 installed, proceed with the steps below.


2. Set Up VS Code for Python Development

Step 1: Install VS Code

  • If you haven’t already installed VS Code, download it from the official site.
  • During installation, ensure you check the box to add VS Code to your system PATH.

Step 2: Install Python Extension for VS Code

  • Open VS Code.
  • Go to the Extensions view by clicking on the Extensions icon on the left sidebar or pressing Ctrl+Shift+X.
  • In the search bar, type Python and install the Python extension by Microsoft.

Step 3: Select Python Interpreter

  • Open VS Code and press Ctrl+Shift+P to open the command palette.
  • Type Python: Select Interpreter and choose the Python 3.10 interpreter.

3. Install Necessary Libraries

Step 1: Install OpenCV

  • Open the VS Code terminal (Ctrl+`) or, from the top menu bar, go to Terminal > New Terminal.
  • In the terminal, install OpenCV with:
    bash
    pip install opencv-python
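  • To confirm the install worked, you can run a quick check that simply imports OpenCV and prints its version (any version number printed means OpenCV is available):
    python
    import cv2
    print(cv2.__version__)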

4. Test Webcam Integration

Step 1: Create a Python File

  • In VS Code, go to File > New File and save the file as webcam_test.py.

Step 2: Write the Python Code

  • Copy and paste the following code into your webcam_test.py file:
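    A minimal script along these lines is enough (a basic OpenCV capture-and-display loop; it opens the default webcam at index 0, shows the live feed, and stops when you press q, matching the behavior described in step 5):
    python
    import cv2

    # Open the default webcam (index 0)
    cap = cv2.VideoCapture(0)

    while True:
        ret, frame = cap.read()
        if not ret:
            break  # Stop if the camera returns no frame

        # Show the live feed in a window
        cv2.imshow('Webcam Test', frame)

        # Press 'q' to close the window and stop the webcam
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()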

5. Run the Python Script

Step 1: Run in VS Code

  • Save the webcam_test.py file.
  • Open the integrated terminal (Ctrl+`) in VS Code.
  • Run the script by typing:
    bash
    python webcam_test.py

Step 2: View the Webcam Feed

  • Once the script runs, a window should open showing the live webcam feed.
  • Press q to close the window and stop the webcam.

-------------------------Face Recognition------------------------------

Steps to Install dlib and face_recognition on Windows:

  1. Install Visual Studio Build Tools:

    • Download and install the Visual Studio Build Tools from Microsoft.
    • During installation, make sure to select "Desktop development with C++" to install the necessary components for building dlib.
  2. Install cmake:

    • Download and install cmake from its official site.
    • After installing, make sure to add cmake to your system path. You can check if cmake is installed correctly by running cmake --version in your command prompt.
  3. Download Pre-built dlib Binary (Wheel File):

    • You can download a pre-built dlib wheel for Windows based on your Python version from this site: https://pypi.org/project/dlib/#files
    • Look for the right file based on your Python version (in your case, Python 3.13 isn't officially supported by dlib yet, so you may need to use Python 3.11 or an earlier version to proceed with dlib and face_recognition).

    Example: For Python 3.11, you might find a file like dlib-19.22.0-cp311-cp311-win_amd64.whl.

  4. Install dlib Using the Wheel File:

    • Once downloaded, open the command prompt and navigate to the folder where the .whl file is stored.
    • Install the wheel using:
      bash
      pip install dlib-19.22.0-cp311-cp311-win_amd64.whl
  5. Install face_recognition:

    • Once dlib is successfully installed, proceed with installing face_recognition:
      bash
      pip install face_recognition

After following these steps, you should be able to install dlib and face_recognition successfully on your system.
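
A quick way to confirm both packages are importable from your environment (this simply imports them and prints the dlib version; if no ImportError appears, the installation worked):

python
import dlib
import face_recognition

print("dlib version:", dlib.__version__)
print("face_recognition imported successfully")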

-----------------------------------------

Summary of What the Code Does (Algorithm):

  1. Webcam Activation: The webcam is activated, and video frames are continuously captured.

  2. Video Recording:

    • Each frame is recorded and saved in a folder named with the current date.
    • The video filename includes the date and start time.
  3. Human Detection:

    • The program detects human faces in each video frame using face detection algorithms.
  4. Image Capture:

    • When a human is detected, the whole frame is captured and saved in a folder named with the date.
    • If the same person is detected, an image is captured after a 2-minute delay.
    • For a new person, an image is captured immediately.
  5. No Redundant Storage:

    • If no human is detected, no image or folder is created.
  6. Data Storage:

    • Videos are continuously saved in the respective date folder.
    • Images of detected humans are stored with filenames containing the exact detection timestamp.

This ensures organized saving of video and image data, while minimizing redundant captures.
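
In sketch form, the 2-minute rule from steps 4 and 5 is just a timestamp comparison per known face (the names below are illustrative, not the ones used in the final code):

python
import time

capture_interval = 120   # 2 minutes in seconds
last_captured = {}       # face id -> time of the last saved image

def should_capture(face_id):
    """Return True if this face is new or was last captured over 2 minutes ago."""
    now = time.time()
    if face_id not in last_captured or now - last_captured[face_id] >= capture_interval:
        last_captured[face_id] = now
        return True
    return False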

-------------------------------------------

If you are still encountering issues installing face_recognition_models, let’s try a more detailed approach to troubleshoot the problem. Follow these steps:

Step 1: Ensure Git is Installed

First, make sure you have Git installed on your system, as it's required to clone repositories.

bash
sudo apt update
sudo apt install git

Step 2: Clone the Repository Manually

Instead of installing directly via pip, let’s clone the repository and install it manually.

  1. Navigate to your project directory:

    bash
    cd ~/human_detection_project
  2. Clone the face_recognition_models repository:

    bash
    git clone https://github.com/ageitgey/face_recognition_models.git

Step 3: Navigate to the Cloned Directory

Change into the directory you just cloned:

bash
cd face_recognition_models

Step 4: Install the Package

Now, install the package using pip:

bash
pip install .

Step 5: Check for Errors

If you receive an error during installation, please take note of it and share the complete output here. It can provide valuable information for diagnosing the issue.

Alternative: Install Dependencies Manually

If the installation still fails, you can install the dependencies manually by checking the repository for a requirements.txt file or similar.

  1. Check for a requirements.txt in the cloned directory:

    bash
    ls
  2. If requirements.txt exists, install the dependencies:

    bash
    pip install -r requirements.txt
  3. Then try installing the package again:

    bash
    pip install .

Step 6: Check CMake Installation

Since face_recognition depends on dlib, ensure you have CMake installed, as it is required for building dlib.

bash
sudo apt install cmake

Step 7: Check for dlib Installation

If dlib is not installed, you can try installing it separately:

bash
pip install dlib

Step 8: Retry Installing face_recognition

Once dlib is installed, retry installing face_recognition again:

bash
pip install face_recognition

Final Note

If you continue to face issues, consider creating a new virtual environment and starting fresh:

  1. Deactivate the current virtual environment:

    bash
    deactivate
  2. Create a new virtual environment:

    bash
    python3 -m venv ~/human_detection_project/venv
  3. Activate the new virtual environment:

    bash
    source ~/human_detection_project/venv/bin/activate
  4. Retry the installation steps above.

----------------------------------------
---------------------------------------- FINAL
----------------------------------------

Here’s a detailed explanation of the final code, which involves capturing video from both the laptop's primary camera and a USB camera simultaneously, performing face detection and recognition, and saving both video and face-detected images in separate folders for each camera.


1. Library Imports

python
import cv2
import datetime
import os
import time
import face_recognition
  • cv2 (OpenCV): Used for capturing video, detecting faces, and saving video files and images.
  • datetime: Helps generate timestamps to create unique filenames for the videos and images.
  • os: Used to create directories and manage file paths dynamically.
  • time: Provides functions to work with time intervals (for controlling the frequency of image capture).
  • face_recognition: Allows for face encoding and comparing faces for recognition.

2. Loading Face Detection Model

python
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
  • The Haar cascade model is pre-trained to detect faces. It is used here to quickly identify faces in frames from both cameras.

3. Initialize Video Capture

python
cap_primary = cv2.VideoCapture(0)  # Primary laptop camera
cap_usb = cv2.VideoCapture(1)      # USB camera
  • cv2.VideoCapture(0): Initializes the primary camera (typically, the laptop's built-in camera).
  • cv2.VideoCapture(1): Initializes the USB camera (external webcam).

Both cameras will capture video frames at the same time.
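
Note that camera index 1 is not guaranteed to map to the USB camera on every machine. A small probe like the one below (illustrative only, not part of the final code) helps confirm which indices are live before running the full script:

python
import cv2

# Report which of the first few camera indices can be opened
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        print(f"Camera found at index {index}")
    cap.release()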


4. Creating Directories for Videos and Photos

python
now = datetime.datetime.now()
date_str = now.strftime("%Y%m%d")  # Format: YYYYMMDD

video_folder_primary = os.path.join(date_str, 'videos_primary_camera')
video_folder_usb = os.path.join(date_str, 'videos_usb_camera')
photo_folder_primary = os.path.join(date_str, 'photos_primary_camera')
photo_folder_usb = os.path.join(date_str, 'photos_usb_camera')

os.makedirs(video_folder_primary, exist_ok=True)
os.makedirs(video_folder_usb, exist_ok=True)
os.makedirs(photo_folder_primary, exist_ok=True)
os.makedirs(photo_folder_usb, exist_ok=True)
  • The code creates separate directories for:

    • Videos and photos from the primary camera.
    • Videos and photos from the USB camera.
  • Each of these directories is created inside a parent folder named with the current date (YYYYMMDD), so all recordings and images are saved in a structured manner.


5. Setting Up Video Recording

python
fourcc = cv2.VideoWriter_fourcc(*'XVID')

# For primary camera
video_filename_primary = f'primary_video_{now.strftime("%Y%m%d_%H%M%S")}.avi'
video_path_primary = os.path.join(video_folder_primary, video_filename_primary)
out_primary = cv2.VideoWriter(video_path_primary, fourcc, 20.0, (640, 480))

# For USB camera
video_filename_usb = f'usb_video_{now.strftime("%Y%m%d_%H%M%S")}.avi'
video_path_usb = os.path.join(video_folder_usb, video_filename_usb)
out_usb = cv2.VideoWriter(video_path_usb, fourcc, 20.0, (640, 480))
  • VideoWriter Objects are created for each camera:

    • out_primary: Writes the video captured from the primary camera to a .avi file.
    • out_usb: Writes the video captured from the USB camera to a different .avi file.
  • Each video file is named with a timestamp (primary_video_YYYYMMDD_HHMMSS.avi and usb_video_YYYYMMDD_HHMMSS.avi) and saved in their respective directories.


6. Face Detection and Recognition

python
detected_faces_primary = {}
detected_faces_usb = {}
capture_interval = 120  # 2 minutes in seconds
  • detected_faces_primary and detected_faces_usb: Dictionaries that store the faces detected by each camera.
  • capture_interval: Ensures that images of the same face are saved only if 2 minutes have passed since the last capture.

7. Processing Frames for Face Detection and Saving Images

python
def process_frame(frame, detected_faces, photo_folder):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    current_time = time.time()

    for (x, y, w, h) in faces:
        face_img = frame[y:y + h, x:x + w]
        face_enc = face_recognition.face_encodings(face_img)

        if len(face_enc) > 0:
            face_encoding = face_enc[0]

            matched_face = None
            for detected_face in detected_faces.keys():
                if face_recognition.compare_faces([detected_faces[detected_face]['encoding']], face_encoding)[0]:
                    matched_face = detected_face
                    break

            if matched_face:
                last_captured = detected_faces[matched_face]['last_captured']
                if current_time - last_captured >= capture_interval:
                    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
                    image_filename = f'image_{timestamp}.jpg'
                    image_path = os.path.join(photo_folder, image_filename)
                    cv2.imwrite(image_path, frame)
                    detected_faces[matched_face]['last_captured'] = current_time
            else:
                timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
                image_filename = f'image_{timestamp}.jpg'
                image_path = os.path.join(photo_folder, image_filename)
                cv2.imwrite(image_path, frame)
                detected_faces[timestamp] = {
                    'encoding': face_encoding,
                    'last_captured': current_time
                }

    return detected_faces
  • This function processes each frame from either the primary or USB camera:
    • It converts the frame to grayscale for face detection.
    • If a face is detected, the face encoding is computed using the face_recognition library.
    • The face encoding is compared with previously detected faces:
      • If the face has been seen before, it checks if 2 minutes have passed since the last capture. If so, a new image is saved.
      • If it’s a new face, the image is saved immediately, and the encoding is stored in the detected_faces dictionary for future comparisons.
  • The full frame containing the detected face is saved in the corresponding photo folder (photos_primary_camera or photos_usb_camera).

8. Main Loop to Capture and Process Video

python
while True:
    ret_primary, frame_primary = cap_primary.read()
    ret_usb, frame_usb = cap_usb.read()

    if ret_primary:
        out_primary.write(frame_primary)  # Save frame from primary camera
        detected_faces_primary = process_frame(frame_primary, detected_faces_primary, photo_folder_primary)
        cv2.imshow('Primary Camera', frame_primary)

    if ret_usb:
        out_usb.write(frame_usb)  # Save frame from USB camera
        detected_faces_usb = process_frame(frame_usb, detected_faces_usb, photo_folder_usb)
        cv2.imshow('USB Camera', frame_usb)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
  • In the main loop, frames are captured simultaneously from both the primary and USB cameras.
  • Each frame is written to its respective video file (out_primary for the primary camera and out_usb for the USB camera).
  • Face detection and processing are performed on each frame using the process_frame function.
  • The imshow function displays the video streams in separate windows for each camera.
  • The loop runs continuously until the user presses the 'q' key to quit.

9. Cleanup

python
cap_primary.release()
cap_usb.release()
out_primary.release()
out_usb.release()
cv2.destroyAllWindows()
  • Once the loop ends (on pressing 'q'), the cameras and video writers are released, and all windows are closed to free up resources.

Key Features of the Code:

  • Dual Camera Support: Records from both the laptop's built-in camera and a USB camera at the same time.
  • Face Detection and Recognition: Detects faces and saves images only if a new face is detected or if a sufficient time interval has passed since the last capture.
  • Organized Storage: Videos and images are saved in separate directories for each camera, with timestamped filenames.

----------------------------------------
---------------------------------------- FINAL Code
----------------------------------------
import cv2
import datetime
import os
import time
import face_recognition

# Load the pre-trained Haar cascade for face detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Open the primary laptop camera (index 0) and the USB camera (index 1)
cap_primary = cv2.VideoCapture(0)
cap_usb = cv2.VideoCapture(1)

# Get current date for folder names
now = datetime.datetime.now()
date_str = now.strftime("%Y%m%d")  # Format: YYYYMMDD

# Create directories for storing videos and photos
video_folder_primary = os.path.join(date_str, 'videos_primary_camera')
video_folder_usb = os.path.join(date_str, 'videos_usb_camera')
photo_folder_primary = os.path.join(date_str, 'photos_primary_camera')
photo_folder_usb = os.path.join(date_str, 'photos_usb_camera')
os.makedirs(video_folder_primary, exist_ok=True)
os.makedirs(video_folder_usb, exist_ok=True)
os.makedirs(photo_folder_primary, exist_ok=True)
os.makedirs(photo_folder_usb, exist_ok=True)

# Define the codec and create VideoWriter objects for video saving (for both cameras)
fourcc = cv2.VideoWriter_fourcc(*'XVID')

# Video output for primary camera
video_filename_primary = f'primary_video_{now.strftime("%Y%m%d_%H%M%S")}.avi'  # Format: primary_video_YYYYMMDD_HHMMSS.avi
video_path_primary = os.path.join(video_folder_primary, video_filename_primary)
out_primary = cv2.VideoWriter(video_path_primary, fourcc, 20.0, (640, 480))

# Video output for USB camera
video_filename_usb = f'usb_video_{now.strftime("%Y%m%d_%H%M%S")}.avi'  # Format: usb_video_YYYYMMDD_HHMMSS.avi
video_path_usb = os.path.join(video_folder_usb, video_filename_usb)
out_usb = cv2.VideoWriter(video_path_usb, fourcc, 20.0, (640, 480))

# Initialize variables
detected_faces_primary = {}
detected_faces_usb = {}
capture_interval = 120  # 2 minutes in seconds

def process_frame(frame, detected_faces, photo_folder):
    # Convert the frame to grayscale for face detection
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)

    # Get the current timestamp
    current_time = time.time()

    for (x, y, w, h) in faces:
        # Extract the face from the frame
        face_img = frame[y:y + h, x:x + w]
        face_enc = face_recognition.face_encodings(face_img)

        if len(face_enc) > 0:  # If face encoding is found
            face_encoding = face_enc[0]

            # Compare with existing detected faces
            matched_face = None
            for detected_face in detected_faces.keys():
                if face_recognition.compare_faces([detected_faces[detected_face]['encoding']], face_encoding)[0]:
                    matched_face = detected_face
                    break

            if matched_face:
                # Update the last captured time for the matched face
                last_captured = detected_faces[matched_face]['last_captured']

                if current_time - last_captured >= capture_interval:
                    # Save the entire frame with the detected face
                    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")  # Format: YYYYMMDD_HHMMSS
                    image_filename = f'image_{timestamp}.jpg'
                    image_path = os.path.join(photo_folder, image_filename)
                    cv2.imwrite(image_path, frame)

                    # Update last captured time
                    detected_faces[matched_face]['last_captured'] = current_time

            else:
                # New face detected
                timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")  # Format: YYYYMMDD_HHMMSS
                image_filename = f'image_{timestamp}.jpg'
                image_path = os.path.join(photo_folder, image_filename)
                cv2.imwrite(image_path, frame)

                # Store new detected face with current time
                detected_faces[timestamp] = {
                    'encoding': face_encoding,
                    'last_captured': current_time
                }

    return detected_faces

# Main loop for both cameras
while True:
    ret_primary, frame_primary = cap_primary.read()
    ret_usb, frame_usb = cap_usb.read()

    if ret_primary:
        out_primary.write(frame_primary)  # Write frame to primary camera video file
        detected_faces_primary = process_frame(frame_primary, detected_faces_primary, photo_folder_primary)
        cv2.imshow('Primary Camera', frame_primary)

    if ret_usb:
        out_usb.write(frame_usb)  # Write frame to USB camera video file
        detected_faces_usb = process_frame(frame_usb, detected_faces_usb, photo_folder_usb)
        cv2.imshow('USB Camera', frame_usb)

    # Press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release everything after the job is finished
cap_primary.release()
cap_usb.release()
out_primary.release()
out_usb.release()
cv2.destroyAllWindows()
-----------------------------------------------------------

Requirements to Install on Windows 10 for Face Detection and Recognition

To run the provided code for dual-camera human detection and face recognition on Windows 10, you'll need the following:

  1. Python Installation:

    • Download and install Python 3.10 or 3.11 from the official Python site (Python 3.13 is not yet supported by dlib and face_recognition).
  2. Install Required Libraries:

    • Open a command prompt and install the required Python libraries using pip. Run the following commands:
      bash
      pip install opencv-python
      pip install opencv-python-headless
      pip install face_recognition
    • Make sure to have pip installed. If it's not available, you can use the Python installer to add it.
  3. Additional Dependencies:

    • Haar Cascade Classifier: This is included with OpenCV, so you don't need to install it separately. However, ensure that OpenCV is correctly set up so that it can access the haarcascade files (a quick check is sketched after this list).
    • CMake and dlib: The face_recognition library requires dlib. You may need CMake and the Visual Studio Build Tools to compile it; these can be installed from the CMake website and the Visual Studio installer, respectively.
  4. Camera Setup:

    • Ensure that both the primary laptop camera and the USB camera are connected and properly recognized by Windows. You can check this in the Device Manager.
  5. IDE or Code Editor:

    • Install an Integrated Development Environment (IDE) like Visual Studio Code or PyCharm to write and run your Python code easily.
  6. System Permissions:

    • Make sure that your system permissions allow access to the camera for the Python application.
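
As a quick sanity check before running the full program, a short snippet like this (just a sketch) confirms that OpenCV can locate and load the bundled Haar cascade file mentioned in point 3:

python
import cv2

# Build the path to the bundled frontal-face Haar cascade and try to load it
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

if face_cascade.empty():
    print("Haar cascade failed to load - check your OpenCV installation")
else:
    print("Haar cascade loaded from:", cascade_path)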

Summary of Steps to Run the Code

  1. Install Python and Required Libraries:

    • Download and install Python.
    • Use pip to install opencv-python, opencv-python-headless, and face_recognition.
  2. Set Up the Cameras:

    • Connect the USB camera and ensure that the laptop's built-in camera is operational.
  3. Write and Save the Code:

    • Copy the provided code into a new Python file (e.g., dual_camera_face_recognition.py).
  4. Run the Code:

    • Open a command prompt in the directory where your Python file is saved and run the command:
      bash
      python dual_camera_face_recognition.py
  5. Monitor the Output:

    • The video feed from both cameras will display, and images of detected faces will be saved in the corresponding folders based on the camera used.
  6. Stop the Program:

    • Press 'q' to terminate the program and release the camera resources.
