Atlanta Map Room – Week 8

Muniba – I continued working on the iPad Controller, primarily focusing on writing methods to calculate the geographical coordinates to send to the Projector so it knows which area of the map to display. These points were found by first finding the pixel coordinates on the HTML page and then unprojecting them to geographical coordinates using Mapbox GL functionality. The points I calculated were the four bounding points, as well as the “center” of the leftmost and rightmost projections of the selected map. In addition, I worked with Melanie to get our application onto an Ubuntu virtual machine so we no longer have to run it locally.
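
For readers curious about the mechanics, here is a minimal sketch of the kind of pixel-to-coordinate conversion described above, using Mapbox GL JS’s `unproject`. The `SelectionRect` shape, the function name, and the interpretation of the left/right “centers” as half-way points of each half are illustrative assumptions, not the actual Controller code.

```typescript
import mapboxgl from "mapbox-gl";

// Hypothetical stand-in for the on-screen rectangle marking the selected area.
interface SelectionRect {
  left: number;   // pixel x of the rectangle's left edge (container coords)
  top: number;    // pixel y of the rectangle's top edge
  width: number;
  height: number;
}

function selectionToGeoPoints(map: mapboxgl.Map, rect: SelectionRect) {
  const { left, top, width, height } = rect;

  // The four bounding corners, converted from container pixels to lng/lat.
  const corners = {
    topLeft: map.unproject([left, top]),
    topRight: map.unproject([left + width, top]),
    bottomLeft: map.unproject([left, top + height]),
    bottomRight: map.unproject([left + width, top + height]),
  };

  // Assumed meaning of the "centers": the midpoints of the left and right
  // halves of the selection, e.g. one per projector position.
  const midY = top + height / 2;
  const leftCenter = map.unproject([left + width / 4, midY]);
  const rightCenter = map.unproject([left + (3 * width) / 4, midY]);

  return { corners, leftCenter, rightCenter };
}
```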

Annabel – I spent a good chunk of this week working with Muniba and Chris on the interface of the Map Room. We want the user to be able to highlight specific points in the data for discussion, so I created a page that lets you select from the currently visible points and have that point highlighted on the map, accompanied by its available information on the panel. Besides that, I’ve been spending some time trying to figure out the irregularities (or, rather, unexpected numbers) in the tax assessment data; Amanda looked over the dataset with me and narrowed the unexpected numbers down to a shorter time span, and Dr. Dan Immergluck of GSU was then able to enlighten us as to why that change was happening.

The screen to select a point from those currently on view:

The dynamic panel on the projector screen (featuring a highlighted point)

Seeing Like A Bike: Week 8

Over the last week we have made a large amount of progress! Last Tuesday we did 8 test runs around Piedmont Park and got full coverage from one of our sensors on every run, and coverage on 5 out of 8 runs with the other sensor. Since then, the sensors have only gotten more reliable, and we have stopped getting empty readings at points throughout the data. With the sensors acting more predictably and the GRIMM back, we have focused this week on collecting and analyzing data. In addition, we have added a second route, this time around Georgia Tech, to increase the speed at which we can complete runs. The wealth of data has allowed us to begin seeing trends in how the sensors relate to each other, and has given us lots of ideas for test runs to make over the coming week.

The above graph shows one of our runs at Piedmont Park, where we had one sensor, represented by the red line, placed near the GRIMM at the front of the steel bike, and one sensor on the front of the pink bike, represented by the blue line. The GRIMM readings are the grey line. We were able to confirm that the spikes in the GRIMM readings are real data, and not artifacts caused by the turbulence of a bike ride. While neither sensor shows such large spikes, they are generally close to the GRIMM’s readings.

Next week will take us even closer to the end of the program, and we intend to finish data collection before Monday so we can devote the week to data analysis and preparation for presentations!

RatWatch: Week 8

Yesterday, we had a meeting with community partners from the Westside where we presented our work on RatWatch thus far. The focus was on the usage of the app, from both an individual and a community perspective. Users generally did not have a problem using the app, although some preferred to send images first. However, the bigger concern was actually engaging people to text in when they did see a rat, especially on the Westside, where there were significantly fewer reports than on the Eastside of Atlanta.

As RatWatch is meant to be a tool for data collection that eventually leads to advocacy, a lack of usage, and thus a lack of data, presents a challenge. The focus on the modeling side is now to analyze specific code violation and building permit data in a concentrated area to see whether targeted action on a certain type of code violation affects rat prevalence. This is difficult, however, given the sparsity of the rat sighting data we currently have, so extrapolation may be necessary.

ATL Map Room: Week 7

Muniba – For the mapping interface, we’re currently working with Melanie from support to switch over to running on an Ubuntu virtual machine rather than locally on our laptops. I’ve created a toggle for the rectangle in the Controller, so users can choose between a full-size, 16-foot map and a half-sized map, and integrated the toggle with Socket.io so that information is sent across the server to the Projector. Also, for the Controller on the iPad, I’m working on creating a faded square that shows the user which portion of the map is currently being projected. Moving into next week, we plan to complete the prototype of our Atlanta Map Room so that, in the week after, we can host our first set of participants from Dr. Loukissas’s class.
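
As a rough illustration of how a toggle like this can be relayed over Socket.io, here is a minimal sketch; the event name, payload fields, and server URL are assumptions for illustration, not the project’s actual protocol.

```typescript
import { io } from "socket.io-client";

// Hypothetical server address for illustration.
const socket = io("http://localhost:3000");

type MapSize = "full" | "half";

// Controller side: emit the new size whenever the toggle changes.
function onToggleChange(size: MapSize): void {
  socket.emit("mapSizeChanged", { size });
}

// Projector side: resize the projected "window" when the event arrives.
socket.on("mapSizeChanged", ({ size }: { size: MapSize }) => {
  const widthFeet = size === "full" ? 16 : 8;
  console.log(`Resizing projected map to ${widthFeet} ft`);
  // ...update the projected rectangle / viewport here...
});
```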

The image above shows the updated setup for the Atlanta Map Room – we have a long, 16-foot platform for participants to draw their maps on. You can also see a sample map of Atlanta traced by our project manager, Chris, using our projector interface. The drawing robot was sent to us from the St. Louis Map Room; however, we are not yet sure what role it will play.

 

Annabel – At the beginning of this week I finished up the final map for the tax assessment data, as well as a version 2.0 of the panel. After assessing the two in combination, Dr. Loukissas pointed out that it might be more helpful to see the raw data, rather than an explanation of it, next to the mapped data. For most of this week, I’ve subsequently been working on a dynamic table that shows all the attributes for a given address, which helps to put a point on the map in context. Right now I’ve got the table dynamically updating to the current bounds of the map – via Node.js and the DataTables jQuery plug-in – but it needs a hefty bit of stylistic overhaul before it can be seen in the light of day!
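
As a sketch of the general idea (not the actual table), here is how a DataTable can be kept in sync with a map’s current bounds. The `/parcels` endpoint, its query parameters, and the column names are assumptions for illustration.

```typescript
import $ from "jquery";
import "datatables.net";
import mapboxgl from "mapbox-gl";

declare const map: mapboxgl.Map; // the existing tax-assessment map instance

// Hypothetical endpoint and column names for illustration only.
const table = $("#tax-table").DataTable({
  ajax: {
    url: "/parcels",
    data: (d: Record<string, unknown>) => {
      // Attach the map's current viewport so the server can filter rows.
      const b = map.getBounds();
      d.west = b.getWest();
      d.south = b.getSouth();
      d.east = b.getEast();
      d.north = b.getNorth();
    },
  },
  columns: [
    { data: "address" },
    { data: "assessment2010" },
    { data: "assessment2018" },
  ],
});

// Re-query whenever the user stops panning or zooming the map.
map.on("moveend", () => table.ajax.reload());
```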

Some snapshots of the tax assessment map, below:

Most of Fulton County

On a smaller scale

RatWatch: Week 7

We have finished our data collection period! We managed to gather about 76 reports in total, 7 of which are evidence reports, with the rest being sightings. In addition to the reports themselves, we also gathered some really interesting insights from the data collected, including observing how users respond, what times they tend to make reports, and how they interact with the overall application. Through this, we’ve been able to make some pretty substantial changes to the app that we hope will improve the user experience, make data collection more efficient, and enhance its overall effectiveness. We are still working diligently to develop and implement the new features and factors into the app and statistical model. It’s an arduous process, but we are making great progress. We’ll have more to share next week! Until then, happy 4th of July!

Seeing Like a Bike: Week 7

Our progress over the last week was fairly straightforward, and we are currently on track to have great results by the end of the program. We spent most of the last week without the GRIMM, so we didn’t go out to collect more data. However, we started preliminary analysis of the data we collected last Monday, testing to see if our method of mobile air quality sensing is feasible.

When the real-time clocks arrived, we connected them to the Raspberry Pi and updated our code to take advantage of the new hardware. This was a very important step: the Pis don’t have an on-board clock, so when a Pi is powered off it doesn’t actually track the passage of time, which added a significant layer of confusion to our data collection.

In the meantime, Urvi has been designing a 3D-printed box to house the Arduino and air quality sensor, which should hopefully mitigate the many problems we have been having with wiring. The biggest issue is simply that our current setup is only temporary, a few weeks at most, so we don’t want to take the step of soldering the wires to the Arduino. Instead, we are using hot glue and electrical tape, which are holding up decently, but not as reliably as we would like.

On Friday, I went out for the first time with two sensors simultaneously. Instead of going on our typical 2.5-mile route starting from Piedmont, I simply took a short loop around Georgia Tech’s campus. The results were promising, but one of our two sensors was giving us very inconsistent data: around 10% of the time it would simply not return any readings, and we couldn’t find a specific cause. Also on Friday, we got the GRIMM back, and to make up for lost time, I decided to go out again on Saturday to do more tests.

On Saturday, I spent the beginning of the day setting up the entire system and finalizing the hardware for the bike and the software for the Pi. After going for another quick ride around campus, I discovered that some of our wiring had failed on that run. For the next few hours, I focused on rewiring the system and ensuring it would be reliable and resistant to stress. On the second run that day, the setup worked perfectly! Additionally, the data we collected exceeded our expectations: the two sensors seemed to be aligned perfectly with each other and seemed to differentiate the various types of streets on the route very well. On the first three segments, the values mimicked each other; on the fourth segment the variance in values was significant, but easily explainable by the positioning of the sensors on the bike (see above image) and the construction taking place on the route. In fact, we would be more worried if the data did match on the last segment!

After analyzing the data from Saturday, we went on another run today to collect more data along our standard Piedmont route, this time also adding GPS data collected from a log on my phone. Tomorrow, we plan to have our final major data-collection day, as we will run experiments not just to test the feasibility of the system, but to collect actual, usable data! The rest of the week will be spent analyzing the data from tomorrow and deciding our path for the last three weeks of the program.

Electric Vehicle Infrastructure: Week 6

This week was incredibly productive for Team EV. While we wait for our IRB to be approved, we are happy to set survey design aside for a few days and focus on wrapping up our sentiment analysis.

The final model, a convolutional neural network, is complete, thanks to Kevin’s hard work. We are now adding new features to see if they further increase the model’s accuracy. Once our best model is complete, we will use it to get the most accurate results possible.

Meanwhile, Arielle and Emerson have worked on analyzing the sentiment results using the classifications from the (slightly less accurate) support vector machine. Once the CNN is completely done, their code will be rerun with the more accurately classified data. Emerson created an interactive map in Leaflet and D3 that allows users to visualize and inspect any charger location in North America. The map will help identify possible trends in the data that can later be investigated for statistical significance. Arielle has been working on creating models in R to figure out what factors are associated with a location having more positive or negative sentiment. While results are still preliminary, it seems as though the day of the week is a predictor for whether or not a location will have more negative reviews!
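
As an illustration of the general approach (not Emerson’s actual code), here is a Leaflet sketch that sizes circles by review count and colors them with a D3 scale from negative to positive sentiment. The `Station` fields, the sentiment range, and the tile source are assumptions.

```typescript
import * as L from "leaflet";
import * as d3 from "d3";

interface Station {
  lat: number;
  lng: number;
  reviewCount: number;
  meanSentiment: number; // assumed to lie in [-1, 1]
}

const map = L.map("map").setView([33.749, -84.388], 10); // centered on Atlanta
L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png").addTo(map);

// Interpolate from red (negative) to green (positive) sentiment.
const color = d3.scaleLinear<string>().domain([-1, 1]).range(["red", "green"]);

function addStations(stations: Station[]): void {
  for (const s of stations) {
    L.circleMarker([s.lat, s.lng], {
      radius: 3 + Math.sqrt(s.reviewCount), // larger circles = more reviews
      color: color(s.meanSentiment),
      fillColor: color(s.meanSentiment),
      fillOpacity: 0.7,
    }).addTo(map);
  }
}
```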

Lastly, we are preparing a working paper for the Bloomberg Data for Good Exchange! Our abstract is due this Sunday and the paper next week so we are writing away to submit our best work.

By next week, we hope to have our IRB approved so that we can put up our surveys as soon as possible!

A mapping of EV station reviews in the LA metro. Larger circles signify more reviews, and greener circles signify more positive sentiment in reviews for the station

Atlanta Map Room: Week 6

Muniba – This week, I primarily worked on continuing to develop the mapping application for the Atlanta Map Room. Currently, users are able to zoom, rotate, and toggle layers for our map through our Controller interface on the iPad, which shows a static, rectangular “window” of the area which will be displayed. Then, using Socket.io, these events are emitted to our local server, which in turn pushes that information to the Projector interface to display.
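
For context, here is a minimal sketch of what a relay server for this Controller-to-Projector flow might look like with Socket.io; the event names, port, and CORS setting are assumptions, not the project’s actual implementation.

```typescript
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // Forward map interactions (zoom, rotate, layer toggles) from the
  // Controller to every other connected client, i.e. the Projector.
  for (const event of ["zoom", "rotate", "toggleLayer"]) {
    socket.on(event, (payload) => socket.broadcast.emit(event, payload));
  }
});

httpServer.listen(3000);
```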

The image above shows our project manager, Chris, drawing a map projected from our interface. You can also see the rail along which the projector slides.

Annabel: I spent the majority of this week finishing up geocoding the tax assessment data, which I’ve found really interesting as a case study in civic data. There are a lot of irregularities that, in my opinion, make the set difficult to handle. For example, Sidney Marcus Boulevard is alternately referred to as “Sidney Marcus Blvd” and “Sidney Marcus Blv”, which creates a bit of an issue when you need to extract the core portion of the street name, but my regex skills are getting a good workout! I’ve also been finalizing the visualization for the tax assessment data; I’m currently working on making the color intensity proportional to the percent change in assessment from 2010 to 2017/18. I’ve included a small sneak peek, with more to come next week.
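
As an aside on the geocoding clean-up mentioned above, here is a minimal sketch of one way to normalize street-suffix variants before matching; the suffix table and helper function are illustrative assumptions, not the actual cleaning script.

```typescript
// Map assorted suffix spellings to one canonical form (illustrative list).
const SUFFIXES: Record<string, string> = {
  BOULEVARD: "BLVD",
  BLV: "BLVD",
  BLVD: "BLVD",
  STREET: "ST",
  AVENUE: "AVE",
  AVE: "AVE",
  DRIVE: "DR",
};

function normalizeStreet(raw: string): string {
  // Uppercase and collapse whitespace so variants compare cleanly.
  const cleaned = raw.trim().toUpperCase().replace(/\s+/g, " ");
  // Replace a trailing suffix token with its canonical form, if known.
  return cleaned.replace(/\b([A-Z]+)$/, (token) => SUFFIXES[token] ?? token);
}

// Both variants collapse to the same key:
// normalizeStreet("Sidney Marcus Blv")  -> "SIDNEY MARCUS BLVD"
// normalizeStreet("Sidney Marcus Blvd") -> "SIDNEY MARCUS BLVD"
```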

RatWatch: Week 6

We are nearing the end of our data collection period and now have a total of about 60 reports. Our plan for this week and next is to analyze the usage of the app and how people are interacting with the questions. An initial look at the database and at text message exchanges with users shows some inputs that we were not necessarily expecting, such as free-form descriptions of the rats and typing the name of an option instead of the number associated with it. Once we take a deeper look at the reports we have gathered so far, we will be able to make the necessary improvements to the app. In addition, we are also close to finishing our webpage! We are currently prototyping some designs, but we are very close to deployment and we cannot wait to show you what we’ve made!

Work on the modeling side is currently focused on improving the model that predicts the baseline probability of seeing rats. The city of Atlanta is divided into a grid of smaller squares, and for each square we compute the total count of rat sightings over time, the intersection areas of different environmental layers, and the count of restaurants. We are currently testing different models, such as a Poisson regression, a zero-inflated Poisson regression, and a generalized boosted regression model, to see which provides the most reasonable and accurate predictions.
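
A small sketch of the grid-binning step described above (counting sightings per cell); the cell size and field names are assumptions, and the environmental-layer and restaurant features are omitted for brevity.

```typescript
interface Sighting {
  lat: number;
  lng: number;
}

const CELL_DEG = 0.01; // roughly 1 km cells; the actual grid size is a modeling choice

function cellKey(p: Sighting): string {
  const row = Math.floor(p.lat / CELL_DEG);
  const col = Math.floor(p.lng / CELL_DEG);
  return `${row},${col}`;
}

// Count sightings per grid cell; counts like these become the response
// variable in the Poisson-family models mentioned above.
function countByCell(sightings: Sighting[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const s of sightings) {
    const key = cellKey(s);
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```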

Seeing Like a Bike: Week 6

Last week, our tasks were very straightforward: all we needed to do was start collecting data on our designated route in Piedmont Park. On Monday, we got the air quality sensors set up with the Raspberry Pi and Arduino, which allowed us to collect data from the sensors every three seconds. On Tuesday, we spent most of our time testing our data collection process by doing test runs with both the PMS air quality sensors and the GRIMM simultaneously and comparing the two sets of values. The PMS sensors give us both PM values and particle counts. However, the PM values are heavily rounded, so they are not as useful for our purposes, and we decided to stick with particle counts for now.

For particle counts, both sensors report data at size thresholds of 0.3µm, 0.5µm, 1µm, 5µm, etc. However, our task was made much more difficult by the fact that the PMS gives us counts less than each value, while the GRIMM gives us counts greater than each value. For example, at 0.5µm, the PMS gives us the number of particles smaller than 0.5µm, but the GRIMM gives us the number of particles greater than 0.5µm. This makes our comparison harder and less straightforward. Further complicating the issue, the PMS sensors do not give very consistent values from second to second, meaning that we will likely need to aggregate data depending on the type of route. Additionally, the two PMS sensors we have are not very consistent with each other, with values being off by as much as 25%. Overall, these factors will require more careful analysis after our initial data collection and calibration.
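
To make the two conventions comparable, the cumulative counts can be differenced into per-bin counts. The sketch below illustrates that idea under assumed thresholds; it is not our calibration code, and note that the resulting bins end up offset by one between the two instruments (the PMS side includes a below-0.3µm bin, the GRIMM side a beyond-largest-threshold bin).

```typescript
const thresholds = [0.3, 0.5, 1.0, 2.5, 5.0, 10.0]; // µm, illustrative values

// PMS-style input: count of particles SMALLER than each threshold (non-decreasing).
// Bin i covers [thresholds[i-1], thresholds[i]); bin 0 is everything below 0.3µm.
function binsFromCumulativeBelow(cumBelow: number[]): number[] {
  return cumBelow.map((v, i) => (i === 0 ? v : v - cumBelow[i - 1]));
}

// GRIMM-style input: count of particles LARGER than each threshold (non-increasing).
// Bin i covers [thresholds[i], thresholds[i+1]); the last bin is everything above 10µm.
function binsFromCumulativeAbove(cumAbove: number[]): number[] {
  return cumAbove.map((v, i) =>
    i === cumAbove.length - 1 ? v : v - cumAbove[i + 1]
  );
}
```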

On Wednesday, we focused on getting multiple sensors connected to a single Raspberry Pi. Initially, our goal was to start data collection that day, but we were still waiting on various bike parts from Amazon, so we decided to work on a related and useful task instead. Connecting multiple sensors would allow us to combine and aggregate double the amount of data on a single run, giving us more accurate data. We spent all day, running into the late evening, working on this. However, due to inconsistencies in the type of text the sensors send, we were never able to get multiple sensors connected to one Pi in a way that let our code collect data from both sensors at startup, i.e. running by itself without any human input. This is especially important because, when we are out in the field collecting data, we won’t have access to a keyboard or mouse to start the program manually.

Thursday was the big day: we came in bright and early at 7am and went out to Piedmont Park with a number of different types of bikes to test air quality. We made 9 bike runs of 2.5 miles each. The route we selected covers a number of diverse environments, including parks, suburban streets, urban side-roads, and a major arterial with heavy construction and traffic. It was a beautiful day outside, and we had fun going around the city to get data. We also borrowed Michael, who helped us gather eye-tracking data along with Urvi, as everyone else on our team wears glasses, which don’t work with the eye-tracking visors. Urvi and Michael each collected data on three runs. The rest of us alternated going around on the bikes collecting pollution data, so in the end each of us went on 2-4 rides. In the meantime, we got to relax and explore Piedmont Park!

However, once we got back, we discovered that we had made a huge mistake in our data collection process. Between each run, we needed to reboot the Raspberry Pi by unplugging it from our battery and plugging it back in. Because this was our first time in the field, we completely forgot about this crucial step. As a result, no pollution data was collected that day, as our script was never run correctly. Out of the six runs with the eye-tracking visors, we were able to gather data from three; the data from the other three was mysteriously corrupted. Coincidentally, all three corrupted runs were done with Michael.

On Friday, most of us took the day off for various unrelated personal reasons. Urvi was able to complete her research training to officially become a part of the research team. April and I came in for a few hours around 4pm to fix some of the mistakes we made on Thursday. I updated the code to create files in a different format, making data analysis simpler. We planned on doing additional test runs on Sunday to make up for the disappointing results from Thursday, but this plan was later scrapped.

Today, we went out again in the morning and collected three runs of pollution data. After these three runs it started to rain, and we had to end our trials early. We came back to the lab with successful data from each run! However, as we are the bike team, more issues came up: the Raspberry Pi does not contain an on-board real-time clock, meaning that when the Pi is powered off, time isn’t kept and even appears to be reset. To fix this, we have purchased real-time clocks online, and when they arrive we will connect them to the Pi.

For the next week, our plan is to integrate the real-time clocks into our setup and start preliminary analysis of the data we collected today. We will not have access to the GRIMM for the rest of the week, so we will not be able to gather any more data, but by the end of this week we will have a robust data-collection setup, as well as a solid understanding of the data we are working with. From there, it should hopefully be much easier to proceed with the project and have meaningful results by the end of the summer.