Webcam Surveillance with Python and OpenCV


Background:

My pitcher plant is being chewed by some unknown insect. The newly grown leaves keep getting eaten before they can develop pitchers, which I cannot stand! The plant has been with me for almost a year, so I decided to catch the culprit.


Analysis: I'm out on the balcony a lot during the day and have never seen anything unusual besides ants, so I figured the pest must come out at night, and the bite marks on the leaves look like they were made by an insect. I went and asked the Taobao seller, hoping he had seen something like this before; he said it might be a black caterpillar, but I'm not sure what it is either, and I've never seen a caterpillar on the balcony.

So I had to take action. Since I'm not next to the pitcher plant all the time, I decided to set up a surveillance camera so I could watch every movement around the plant in real time. Sooner or later the culprit would show itself!

I didn't have much material on hand, just a webcam. I originally planned to buy an infrared camera for night-time monitoring, but ordering one online takes time, so I decided to use the webcam first and think about swapping it out once the system was working. I went searching for a "frog's eye"-style approach, i.e. something that reacts only to motion, and found the two articles referenced at the top of the code below (really a single article: the Chinese version is a translation of the English one).

I made a few changes to the author's code to suit my needs:

1. Refresh the reference (first) frame every so often, so the system quickly adapts to small static changes in the scene (a minimal sketch of this follows the list).

2. Record video and take a snapshot of any segment with an intruder in it, for later inspection and evidence (recording video all the time would take too much space, so only the abnormal segments are saved).
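To show the first change on its own, here is a minimal sketch of the periodic reference-frame refresh. The names (REFRESH_EVERY, reference, counter) are mine, and this is only an illustration of the idea, not the actual script; the full listing follows below.

import cv2

REFRESH_EVERY = 2000  # adopt a new reference frame every 2000 frames
camera = cv2.VideoCapture(0)
reference = None
counter = 0
while True:
  grabbed, frame = camera.read()
  if not grabbed:
    continue
  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  gray = cv2.GaussianBlur(gray, (21, 21), 0)
  if counter % REFRESH_EVERY == 0:
    # periodically take the current frame as the new baseline, so slow
    # static changes (lighting, a slightly moved pot) get absorbed
    reference = gray
  counter += 1
  # anything that moved relative to the baseline shows up in the delta
  delta = cv2.absdiff(reference, gray)
  cv2.imshow("delta", delta)
  if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
    break
camera.release()
cv2.destroyAllWindows()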

As usual, the code is below (the articles cited above already explain it very clearly, and I'm lazy, so I won't repeat the explanation here):

# http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
# http://python.jobbole.com/81593/
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2
import cv2.cv as cv
import numpy as np
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=300, help="minimum area size")
args = vars(ap.parse_args())
# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
  camera = cv2.VideoCapture(0)
  time.sleep(0.25)
# otherwise, we are reading from a video file
else:
  camera = cv2.VideoCapture(args["video"])
# initialize the first frame in the video stream
firstFrame = None
# Define the codec
fourcc = cv.CV_FOURCC('X', 'V', 'I', 'D')
framecount = 0
frame = np.zeros((640,480))
out = cv2.VideoWriter('calm_down_video_'+datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p")+'.avi',fourcc, 5.0, np.shape(frame))
# at start-up the exposure/lighting is not stable yet, so grab (and save) some warm-up frames first
tc = 40
while tc:
  ret, frame = camera.read()
  out.write(frame)
  #cv2.imshow("vw",frame)
  cv2.waitKey(10)
  tc -= 1
totalc = 2000
tc = totalc
out.release()
# loop over the frames of the video
while True:
  # grab the current frame and initialize the occupied/unoccupied
  # text
  (grabbed, frame) = camera.read()
  text = "Unoccupied"
  # if the frame could not be grabbed, then we have reached the end
  # of the video
  if not grabbed:
    time.sleep(0.25)
    continue
  # resize the frame, convert it to grayscale, and blur it
  frame = imutils.resize(frame, width=500)
  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  gray = cv2.GaussianBlur(gray, (21, 21), 0)
  # refresh firstFrame every totalc frames
  if tc%totalc == 0:
    firstFrame = gray
    tc = (tc+1) % totalc
    continue
  else:
    tc = (tc+1) % totalc
  #print tc
  # compute the absolute difference between the current frame and
  # first frame
  frameDelta = cv2.absdiff(firstFrame, gray)
  thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
  # dilate the thresholded image to fill in holes, then find contours
  # on thresholded image
  thresh = cv2.dilate(thresh, None, iterations=2)
  (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  # loop over the contours
  for c in cnts:
    # if the contour is too small, ignore it
    if cv2.contourArea(c) < args["min_area"]:
      continue
    # compute the bounding box for the contour, draw it on the frame,
    # and update the text
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    text = "Occupied"
  # draw the text and timestamp on the frame
  cv2.putText(frame, "Room Status: {}".format(text), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
  cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"), (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
  # show the frame and record if the user presses a key
  cv2.imshow("Security Feed", frame)
  cv2.imshow("Thresh", thresh)
  cv2.imshow("Frame Delta", frameDelta)
  # save the detection result
  if text == "Occupied":
    if framecount == 0:
      # create VideoWriter object
      out = cv2.VideoWriter(datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p")+'.avi',fourcc, 10.0, np.shape(gray)[::-1])
      cv2.imwrite(datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p")+'.jpg',frame)
      # write the current frame to the new clip
      out.write(frame)
      framecount += 1
    else:
      # keep appending frames while motion continues
      out.write(frame)
      framecount += 1
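  # no motion in this frame: finish the current clip if it already has more
  # than 20 frames, or if it has fewer than 2 (a one-off false trigger);
  # otherwise keep the writer open so a short pause does not split the clip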
  elif framecount > 20 or framecount<2:
    out.release()
    framecount = 0
  key = cv2.waitKey(1) & 0xFF
  # if the `ESC` key is pressed, break from the loop
  if key == 27:
    break
# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
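
One caveat if you try to run this on a newer OpenCV: the listing above targets OpenCV 2.x (it imports cv2.cv, which was removed in OpenCV 3.0). A rough sketch of what the two version-sensitive calls look like on OpenCV 3/4 (the dummy image is only there to make the snippet self-contained):

import cv2
import numpy as np

# cv.CV_FOURCC was replaced by cv2.VideoWriter_fourcc in OpenCV 3+
fourcc = cv2.VideoWriter_fourcc(*'XVID')

# findContours returns (image, contours, hierarchy) in OpenCV 3.x but
# (contours, hierarchy) in 2.x and 4.x; indexing with [-2] picks the
# contour list on any of these versions
thresh = np.zeros((100, 100), dtype=np.uint8)  # dummy binary image
cv2.rectangle(thresh, (20, 20), (60, 60), 255, -1)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(cnts))  # -> 1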
