Social Distancing using YOLOv3 – Object Detection – with source code – fun project – 2024

So guys, here comes one of the most awaited projects, Social Distancing using YOLOv3 and OpenCV. In this project, we will perform object detection on a video feed to check whether Social Distancing is being maintained or not. So without any further ado, let's do it…

Create a conda environment and install the required libraries

conda create -n sd python=3.9
conda activate sd
pip install opencv-python numpy
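
The code below expects the pre-trained YOLOv3 weights and config files (yolov3.weights and yolov3.cfg, linked at the end of this post) to sit next to the script. A quick sanity check like the one below (purely optional, and assuming those exact file names) catches a missing or half-downloaded weights file before OpenCV raises a parsing error:

import os

# Optional sanity check: both YOLOv3 files must exist, and the weights file
# should be roughly 240 MB; a much smaller file usually means a broken download.
for f in ("yolov3.weights", "yolov3.cfg"):
    assert os.path.isfile(f), f + " not found next to the script"
print("yolov3.weights size (MB):", os.path.getsize("yolov3.weights") // (1024 * 1024))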

Code for Social Distancing project…

import cv2
import numpy as np

net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
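# Note: the pip-installed opencv-python build does not include CUDA, so the two
# lines above will simply fall back to the CPU. To be explicit you can use the
# default backend instead (an alternative, not part of the original listing):
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)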

distance_thres = 50

cap = cv2.VideoCapture('data/humans.mp4')

def dist(pt1, pt2):
    # Euclidean distance between two points on the image plane
    return ((pt1[0] - pt2[0]) ** 2 + (pt1[1] - pt2[1]) ** 2) ** 0.5

layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
print('Output layers',output_layers)

_,frame = cap.read()

fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter('output.avi', fourcc, 30,(frame.shape[1], frame.shape[0]), True)
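# Note: the output video is hard-coded to 30 fps; if your input video has a
# different frame rate you could read it from the capture instead, e.g.
# fps = cap.get(cv2.CAP_PROP_FPS) or 30, and pass that to cv2.VideoWriter.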


ret = True
while ret:

    ret, img = cap.read()
    if ret:
        height, width = img.shape[:2]

        blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)

        net.setInput(blob)
        outs = net.forward(output_layers)

        confidences = []
        boxes = []
        
        for out in outs:
            for detection in out:
                scores = detection[5:]
                class_id = np.argmax(scores)
                if class_id!=0:
                    continue
                confidence = scores[class_id]
                if confidence > 0.3:
                    center_x = int(detection[0] * width)
                    center_y = int(detection[1] * height)

                    w = int(detection[2] * width)
                    h = int(detection[3] * height)
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)

                    boxes.append([x, y, w, h])
                    confidences.append(float(confidence))

        indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
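        # Note: recent OpenCV versions (>= 4.5.4) return a flat array of indices
        # from NMSBoxes, so "i in indexes" below works directly; older versions
        # return an (N, 1) array and may need indexes.flatten() first.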

        persons = []
        person_centres = []
        violate = set()

        for i in range(len(boxes)):
            if i in indexes:
                x,y,w,h = boxes[i]
                persons.append(boxes[i])
                person_centres.append([x+w//2,y+h//2])

        for i in range(len(persons)):
            for j in range(i+1,len(persons)):
                if dist(person_centres[i],person_centres[j]) <= distance_thres:
                    violate.add(tuple(persons[i]))
                    violate.add(tuple(persons[j]))
        
        v = 0
        for (x,y,w,h) in persons:
            if (x,y,w,h) in violate:
                color = (0,0,255)
                v+=1
            else:
                color = (0,255,0)
            cv2.rectangle(img,(x,y),(x+w,y+h),color,2)
            cv2.circle(img,(x+w//2,y+h//2),2,(0,0,255),2)

        cv2.putText(img,'No of Violations : '+str(v),(15,frame.shape[0]-10),cv2.FONT_HERSHEY_SIMPLEX,1,(0,126,255),2)
        writer.write(img)
        cv2.imshow("Image", img)
    
    if cv2.waitKey(1) == 27:
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
  • Line 1-6 – Importing required libraries.
  • Line 8-10 – Read the YOLO weights and config files and enable the CUDA backend.
  • Line 12 – Set the distance threshold to 50 pixels.
  • Line 14 – Instantiate the VideoCapture object which will help us in reading frames from the video file.
  • Line 16-20 – A simple distance function that calculates the distance between two coordinates on the plane.
  • Line 22 – Get a list of all layer names in the network.
  • Line 23 – Get output layers.
  • Line 26 – Read a frame from the video just to get the height and width of it.
  • Line 28-29 – We will also save our results to an output video using VideoWriter, as shown below.
  • Line 32-33 – Let’s start the loop.
  • Line 35 – Start reading from the input video.
  • Line 36 – If the capture object returns a frame, ret will be True.
  • Line 37 – Extract image height and width.
  • Line 38 – Create a 416×416 blob from the image using cv2.dnn.blobFromImage.
  • Line 41 – Give this blob as input to the network using net.setInput.
  • Line 42 – Get output from output layers.
  • Line 47 – Iterate over all the outputs from that frame.
  • Line 48 – Now iterate over all the detections.
  • Line 49 – Each detection is an array of 85 values: the first four are the bounding-box coordinates, the fifth is the objectness score, and the remaining 80 are the class confidences (the sketch after this list unpacks one such vector).
  • Line 50 – Get the class id by getting the index of that element that has the highest score.
  • Line 51-52 – If the class_id is not 0 (person), skip the detection, because our only goal in this use case is to detect persons.
  • Line 53 – Get the confidence score.
  • Line 54 – If the confidence is greater than 30%, proceed further.
  • Line 55-56 – Calculate the center x and center y points.
  • Line 58-61 – Calculate the x,y,w, and h of the bounding box.
  • Line 63 – Append this Bounding Box in our boxes list.
  • Line 64 – Append confidences in the confidences list.
  • Line 66 – Here we perform non-maximum suppression on the bounding boxes using cv2.dnn.NMSBoxes. It returns the indexes of the boxes we should keep.
  • Line 72-76 – Now iterate over all the boxes and keep only those whose index is in the indexes list. Append these boxes to the persons list and their centres to the person_centres list.
  • Line 79-83 – Now compare every pair of person centres with the dist function and flag any pair closer than the 50-pixel threshold. The boxes of both violating persons are added to the violate set.
  • Line 85-93 – Draw a red box around the persons who are violating the social distancing norms and a green box around those who are not violating them.
  • Line 95 – Show the number of violations on the screen.
  • Line 96 – Save the output in video form.
  • Line 97 – Showing the output.
  • Line 99-100 – If the ESC key is pressed, break out of the loop.
  • Line 102-103 – Release the VideoCapture and VideoWriter objects and destroy all open windows.
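
As mentioned in the notes above, each YOLOv3 detection is a vector of 85 values. Here is a small sketch of how one such vector is unpacked, using a made-up detection purely for illustration (in the real code these values come from net.forward):

import numpy as np

# A made-up 85-value detection vector: 4 box values, 1 objectness score,
# and 80 class scores. The box values are normalised to the range [0, 1].
detection = np.zeros(85)
detection[:4] = [0.5, 0.5, 0.2, 0.6]  # center_x, center_y, width, height
detection[4] = 0.9                    # objectness score (ignored in the code above)
detection[5] = 0.8                    # score for class 0, which is "person" in COCO

scores = detection[5:]                # the 80 class confidences
class_id = int(np.argmax(scores))     # index of the best-scoring class
confidence = scores[class_id]         # its confidence

width, height = 1280, 720             # example frame size
center_x = int(detection[0] * width)
center_y = int(detection[1] * height)
w = int(detection[2] * width)
h = int(detection[3] * height)
x = int(center_x - w / 2)             # top-left corner of the bounding box
y = int(center_y - h / 2)
print(class_id, confidence, (x, y, w, h))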

Download yolov3.weights

Download the source code…

If you have any queries regarding Social Distancing or object detection, feel free to contact me by email or on LinkedIn.

So this is all for this blog, folks. Thanks for reading, and I hope you are taking something with you after reading this. Till the next time!

Read my previous post: DOCUMENT SCANNER USING OPENCV 

Check out my other machine learning projects, deep learning projects, computer vision projects, NLP projects, and Flask projects at machinelearningprojects.net.
