(this is a cross-post of a post from the Progress Blog)
For their 2016 Capstone Project, students at CSUMB used key Progress technologies like NativeScript and Modulus to create an app with powerful capabilities. Progress VP of Technology, Eduardo Pelegri-Llopart, mentored the team and dives into the technology stack—learn how he and his team developed their app.
May 20th was this year’s Capstone Festival at CSUMB. CSUMB is one of the newest campuses in the Cal State University system. It is located on the site of old Fort Ord, and most of the student body is from the surrounding communities. The CS program has done some innovative things, like the new cohort programs, which include the CS-in-3 program, a combined project with Hartnell College that received a five million dollar award last year from the State of California.
All CSUMB students have to participate in a Capstone project, and this is my second year mentoring a team at the school (see Connected Merchant for my take on last year). My first contact with the school was back in the Fall of 2014 at YHacks with some students from the CS-in-3 program; it was fun to reconnect with some of them, now in their senior year.
Here is the obligatory selfie photo:
From left to right:
- Luis Montejano
- Alcides Sorto (bottom)
- Brayanne Reyes Ron (top)
- Cesar Galvan
And yours truly.
The goal for this year’s project was to quantify activity in the brand-new BIT Building (Business and Information Technology). The technology stack included:
- IP cameras, processed with Computer Vision from Placemeter, to measure people flow
- iBeacons from Estimote to provide indoor location
- Mobile apps written using NativeScript to talk to the beacons (and to GPS)
- Modulus to run Node.js glue code, and to store Immutable Data on a MongoDB instance
- The AMTech IoT platform to provide the Speed Layer for the IoT information
- Simple analytics and reporting on the MongoDB layer through Grafana and Google’s heatmap layer.
Lambda Architecture—The Big Picture
The system follows the Lambda Architecture approach: sensor data is written to both an Immutable Store and a Speed Layer. The Immutable Store is used for analytics and presentation, while applications usually interact with the Speed Layer. Here is a sketch of the architecture:
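The fan-out to the two layers is easy to sketch in Node.js. The following is illustrative only; the store and speed-layer clients are stand-ins, not the project's actual code:

```javascript
// Sketch of the Lambda-style fan-out: every incoming sensor event is
// appended to the immutable store (insert-only, never updated) and
// forwarded to the speed layer for real-time processing.
// Names and payload shapes here are illustrative, not project code.
function makeFanOut(immutableStore, speedLayer) {
  return async function handleEvent(event) {
    const record = {
      ...event,
      receivedAt: new Date().toISOString(), // stamp arrival time
    };
    // 1. Append to the immutable store (e.g. a MongoDB collection).
    await immutableStore.insert(record);
    // 2. Push to the speed layer, where rules run on the event stream.
    await speedLayer.publish(record);
    return record;
  };
}
```

Because every event goes through this single function, keeping the two layers consistent is trivial: there is exactly one write path.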
For the project we used two main sensor types, cameras and beacons, but the stretch goal also included GPS data. The IP camera feeds are processed by Placemeter, which applies Computer Vision to count traffic crossing virtual markers and makes the counts available via an API. Placemeter supports two modes: in one the CV algorithm runs in the cloud, in the other it runs locally on a sensor they build.
The sensor was not available when we started our project, so we used cameras connected via the CSUMB network. Getting this working took more effort than expected, in part because it was difficult to find power outlets close to the locations where we wanted to install the cameras, and later because the IP cameras couldn’t provide the credentials needed to connect to the CSUMB network. We solved the first problem by carefully choosing a few good locations, and the second through an ad-hoc bridge that handled the connection.
The beacons are from Estimote. We configured them to broadcast as iBeacons, and we used an iBeacon NativeScript plugin (thanks, Mehfuz!) in a mobile app to detect them and then convey the data to a Node.js application on Modulus. That app then pushed the readings to the Immutable Data Store and to the IoT Speed Layer.
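As a small sketch of the mobile side, here is roughly how a raw iBeacon reading gets shaped before being sent to the aggregator. The field names are illustrative, not the project's actual schema, and the POST shown in the comment is only an assumed endpoint:

```javascript
// Sketch: turn a raw iBeacon reading (as surfaced by the NativeScript
// iBeacon plugin) into the payload the Node.js aggregator accepts.
// Field names are illustrative, not the actual project schema.
function beaconPayload(reading, deviceId) {
  return {
    source: 'ibeacon',
    deviceId: deviceId,           // which phone saw the beacon
    uuid: reading.uuid,           // Estimote beacon UUID
    major: reading.major,
    minor: reading.minor,
    proximity: reading.proximity, // e.g. 'immediate' | 'near' | 'far'
    rssi: reading.rssi,           // signal strength, for rough distance
    seenAt: new Date().toISOString(),
  };
}

// In the app this payload would be POSTed to the Modulus-hosted Node.js
// service, e.g. (URL is a placeholder):
//   fetch('https://aggregator.example.com/beacons', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(beaconPayload(reading, deviceId)),
//   });
```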
That same Node.js application also pulled the data from Placemeter. And if we had GPS data, that app would have done the aggregation too. Arguably, the mobile app (the source of the beacon and GPS data) could also push the data directly to the IoT Speed Layer, but having a single aggregation point made things simpler for us.
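The pull side then only needs to normalize each Placemeter count into the same common event shape before handing it to the fan-out. The input field names below are guesses at what a count record looks like, not Placemeter's documented schema:

```javascript
// Sketch: normalize a Placemeter-style count record into the common
// event shape the aggregator uses for every sensor. The raw field
// names are illustrative guesses, not Placemeter's documented API.
function normalizeCount(raw, markerName) {
  return {
    source: 'placemeter',
    marker: markerName,      // the virtual marker/turnstile being counted
    in: raw.in_count || 0,   // pedestrians crossing one way
    out: raw.out_count || 0, // ...and the other way
    windowStart: raw.start,  // start of the counting interval
    windowEnd: raw.end,      // end of the counting interval
  };
}
```

With every sensor reduced to one shape, the rest of the pipeline (store, speed layer, reports) does not care where an event came from.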
Immutable Store, Reports and Analytics
The Immutable Store is a MongoDB instance. We didn’t really have a lot of data, and Modulus includes MongoDB, so that was the simplest approach.
The data from MongoDB was exported for two types of presentations. One was via Grafana dashboards, in this case backed by an InfluxDB temporary store. The other was a heatmap presented via the Google APIs in Chrome (we also had to use the KML layer, as the new BIT building is not yet in maps.google.com). Both of these worked well.
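As a rough sketch of the heatmap step, the stored readings need to be turned into the weighted points that Google's heatmap layer consumes. This is illustrative only; in the browser, `toLatLng` would wrap `google.maps.LatLng`, and the cell size is an arbitrary choice:

```javascript
// Sketch: aggregate stored readings into weighted heatmap points of the
// {location, weight} form Google's heatmap layer expects. Readings are
// bucketed into ~11 m cells (4 decimal places of lat/lng) so nearby
// readings accumulate into one hotter point. Illustrative only.
function toHeatmapPoints(readings, toLatLng) {
  const cells = new Map();
  for (const r of readings) {
    const key = r.lat.toFixed(4) + ',' + r.lng.toFixed(4);
    cells.set(key, (cells.get(key) || 0) + (r.count || 1));
  }
  return Array.from(cells, ([key, weight]) => {
    const [lat, lng] = key.split(',').map(Number);
    return { location: toLatLng(lat, lng), weight };
  });
}
```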
The server side of the application relies on Modulus and the Telerik Push Notification System (the push part was not implemented). The server-side code is Node.js that manages presenter, team, project, and attendee data stored in MongoDB. The data is presented through a NativeScript application, the same app that reports beacon data as part of the sensor layer. Putting it together, the mobile app can determine where the wearer is inside the building and then provide information on the event happening in that room, navigation instructions, and so on. This was only partially implemented.
Speed Layer: AMTech Platform
The Speed Layer uses the IoT Platform from AMTech Solutions. This is a multi-tenant IoT platform designed for Digital Asset Management. It is both a PaaS (to create solutions) and a SaaS (to run solutions), with a sophisticated resource and security model.
From a developer perspective, applications are defined as rules running on events that flow through queues. The events can be posted directly via REST calls, or generated through a gateway that can, for example, connect a collection of sensors to the platform. An application running on the platform can interact with other services through REST calls (e.g. to query the state of some objects) or via notifications (e.g. to request action when some situation arises).
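The rules-on-events model is worth a tiny sketch. The miniature engine below mimics the idea in a few lines; it is in no way AMTech's implementation, just the shape of the programming model:

```javascript
// Sketch of the rules-on-events model: a rule pairs a predicate (when)
// with an action (then), and every event coming off the queue is run
// past every rule. A toy mimic of the model, not AMTech code.
function makeEngine(rules) {
  return function dispatch(event) {
    const fired = [];
    for (const rule of rules) {
      if (rule.when(event)) {
        rule.then(event);          // e.g. send a notification, call REST
        fired.push(rule.name);
      }
    }
    return fired; // names of the rules that matched this event
  };
}
```

A rule like "notify when a room's occupancy exceeds its capacity" then becomes one small `{ name, when, then }` object registered with the engine.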
Our team had used the AMTech platform in a previous project, the ProgressNEXT Conference App. In the ProgressNEXT case, the gateway connected a set of RFID readers and antennas to the platform. It tracked attendees as their RFID tags were detected by the antennas, and the platform tracked individuals as they moved through the conference and applied different rules, for example tracking real-time occupancy of a conference room, or notifying when an attendee enters a room.
In our capstone project the AMTech platform tracks three types of events: traffic counts from Placemeter, beacon detection from smartphones, and GPS readings from smartphones. The platform then tracks where smartphone users are, and the occupancy of different areas. Below is a screenshot of how the different areas look in the AMTech platform:
The picture shows GeoFences, turnstiles and areas that are counted, beacons providing proximity information, rooms in which occupancy is being counted, and some individuals being tracked.
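Occupancy counting itself is the simplest of these rules: increment on an enter event, decrement on an exit. A hedged sketch, with names of my own choosing rather than the platform's:

```javascript
// Sketch: per-area occupancy maintained from enter/exit events, the
// kind of stateful rule the speed layer runs. Event shape and names
// are illustrative, not the platform's API.
function makeOccupancy() {
  const byArea = new Map();
  return {
    apply(event) {
      const n = byArea.get(event.area) || 0;
      const delta = event.type === 'enter' ? 1 : -1;
      byArea.set(event.area, Math.max(0, n + delta)); // never go negative
    },
    count(area) {
      return byArea.get(area) || 0;
    },
  };
}
```

The `Math.max(0, ...)` guard matters in practice: with beacons and cameras, exits are occasionally detected without a matching enter.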
The computing model for AMTech is essentially based on event computing. At run-time the system relies on a cluster of machines that run services (like geolocation) as well as Cassandra and Storm clusters, and Kafka queues.
AMTech is very interesting and I plan to write about it in future posts.
Our first team email exchange was on Feb 3rd, and we started meeting the week after. The students were carrying a full load of classes, and in one case also working full-time outside of school. Most of the pieces described above were completed by the students in time for the Capstone Festival (May 20th), with the exception of connecting the Node.js aggregation service to the AMTech backend. We did that after graduation, as we all wanted to see the final result.
This was a very ambitious project, especially considering all the constraints. I learned a lot, and so did the students. Big kudos to all. Additional pictures are in this Flickr album.
PS. An aside to anyone looking to get involved in their community: helping a community-grounded college like CSUMB is a great experience. Plenty of work, but rewarding. Contact me if you want additional insights.