[I started this project back in November 2015. The project started nicely but never reached full speed and I didn’t find a natural time to write about it… until now]
In Fall 2015 I discovered Placemeter [Washington Post, TechCrunch, WebSite], a startup in NYC that used Computer Vision to track pedestrian and vehicular traffic. At that point our local neighborhood was seeing a lot of cut-through car traffic, so I started a project: use Placemeter services to track traffic in several key locations in the neighborhood, correlate the traffic to infer flow, and then use this to have a more educated conversation with our city council.
Placemeter has two different operating modes. In both, different areas of the field of view are marked for analysis: either as a turnstile, to track objects going through it in both directions, or as a polygon, to track objects going in and out of the area it delimits. Placemeter applies CV algorithms to count the objects, classify them (pedestrians, cars, trucks, bicycles), and analyze their speed.
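The turnstile idea can be pictured as a directed line segment: a crossing is counted whenever a tracked object moves from one side of the line to the other. Here is a minimal sketch of that bidirectional counting, purely illustrative and not Placemeter's actual algorithm:

```python
# Illustrative sketch of turnstile counting: count the crossings of a
# line segment by an object track, in each direction. This is NOT
# Placemeter's actual implementation -- just the geometric idea.

def side(a, b, p):
    """Sign of the cross product: which side of line a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_crossings(line, track):
    """Count crossings of `line` by `track`, a list of (x, y) positions."""
    a, b = line
    forward = backward = 0
    prev = side(a, b, track[0])
    for p in track[1:]:
        cur = side(a, b, p)
        if prev < 0 < cur:       # negative side -> positive side
            forward += 1
        elif cur < 0 < prev:     # positive side -> negative side
            backward += 1
        if cur != 0:             # ignore points exactly on the line
            prev = cur
    return forward, backward

# A track that crosses a vertical turnstile at x=5 once each way:
line = ((5, 0), (5, 10))
track = [(0, 5), (4, 5), (6, 5), (8, 5), (3, 5)]
print(count_crossings(line, track))  # -> (1, 1)
```

A real system would of course first need to produce the tracks themselves (detection plus frame-to-frame association), which is where the heavy CV work lies.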
In one mode the IP camera pushes video clips to the back-end services which perform CV analysis. In the other, the Placemeter Sensor performs the CV analysis at the edge and only sends processed data to the cloud.
The IP Camera solution will work with any IP Camera. This is the one we used:
Because it “pushes” through to the Placemeter end-point, it works through our gateway arrangement. On the other hand, it uses a fair amount of bandwidth. Since the IP Camera is “dumb”, the configuration is relatively simple (just set the WiFi/wired network, and set the appropriate end-point).
The Placemeter Sensor is a much better solution, computationally speaking. The sensor needs a capable enough computing engine to run the CV algorithms, but the data pushed through the connection is minimal, and the sensor can help set up the network, etc. The main challenges with a sensor are that you need to update the software on-premise to deploy new algorithms, and that the hardware is fixed.
Here is a picture of the sensor in its box (from a post by Ted Eytan, MD – I loaned out my own sensor before taking a proper photo :().
And here it is installed in my front window, while testing.
The result should be the same with both approaches (modulo bugs, software version drift, etc.). In our case we used the cameras, as the sensors were backordered and we only got them fairly late in the cycle.
Both approaches use the browser to interact with the system. Here is a screenshot showing the selection of different turnstiles:
And here is traffic data over a week where there was a particularly nasty traffic jam:
The data can also be exported as CSV for processing; here it is in Excel with minimal processing:
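The exported CSV is also easy to summarize with a few lines of standard-library Python. The column names below ("time", "direction", "count") are assumptions for illustration; match them to the actual export headers:

```python
# Sketch of summarizing an exported counts CSV with the Python standard
# library. The column names ("time", "direction", "count") are assumed,
# not taken from the actual Placemeter export format.
import csv
from collections import defaultdict

def hourly_totals(path):
    """Sum counts per (hour of day, direction) across the export."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = row["time"][11:13]  # e.g. "2016-03-14T08:15:00" -> "08"
            totals[(hour, row["direction"])] += int(row["count"])
    return dict(totals)
```

This kind of hourly roll-up is what made the rush-hour cut-through pattern visible at a glance.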
Finally, the data can also be accessed via a Web API.
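For periodic pulls, the Web API route avoids the export step entirely. The sketch below uses only the standard library; the base URL, path, and parameter names are placeholders, NOT Placemeter's actual API, so consult the real documentation for the specifics:

```python
# Hypothetical sketch of pulling counts over a Web API. The URL and
# parameter names are placeholders, not Placemeter's actual endpoints.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_BASE = "https://api.example.com/v1/measurements"  # placeholder URL

def build_request(api_key, point, start, end):
    """Build an authenticated GET request for one measurement point."""
    qs = urlencode({"point": point, "from": start, "to": end})
    return Request(f"{API_BASE}?{qs}",
                   headers={"Authorization": f"Bearer {api_key}"})

def fetch_counts(req):
    """Execute the request and decode the JSON payload."""
    with urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Splitting request construction from execution also makes the code easy to test without network access.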
At the end I only found two households willing to host the cameras so we did not do any significant correlation analysis, but even two data points helped understand a bit better the traffic.
The project is currently “on hold”, as NetGear recently acquired Placemeter to use its technology in their Arlo consumer cameras.
The NetGear Arlo combines battery-operated cameras connected wirelessly to a powered hub. That seems an ideal arrangement for “computing at the edge”: placing the cameras exactly where you want them while running the compute-intensive CV algorithms on a powered platform. I’m looking forward to the next instantiation of this technology, for projects and for our own personal use.
I also used the Placemeter data in one of the CSUMB Capstone projects, Quantifying the BIT building, where we also used other sensors and the AMTech IoT platform. One of the “fun” parts of the BIT project was finding power outlets close enough to where we wanted to put the cameras!
Ah, and here is the Flickr Album for this project.