Electric Vehicle Infrastructure: Week 6

This week was incredibly productive for Team EV. While we wait for our IRB application to be approved, we are happy to set survey design aside for a few days and focus on wrapping up our sentiment analysis.

The final model, a convolutional neural network, is complete thanks to Kevin’s hard work. We are now adding new features to see whether they further increase the model’s accuracy. Once our best model is finalized, we will use it to produce the most accurate sentiment classifications possible.
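For readers curious what this kind of model looks like, below is a minimal sketch of a 1-D convolutional network for review sentiment classification in Keras. The vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the exact configuration of Kevin’s model.

```python
# Minimal sketch of a 1-D CNN for binary review sentiment classification.
# Vocabulary size, sequence length, and layer sizes are illustrative
# assumptions, not the team's actual settings.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed maximum review length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),               # integer-encoded token ids
    layers.Embedding(VOCAB_SIZE, 128),            # learn word embeddings
    layers.Conv1D(128, 5, activation="relu"),     # detect local n-gram patterns
    layers.GlobalMaxPooling1D(),                  # keep the strongest signal per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),        # probability the review is positive
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```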

Meanwhile, Arielle and Emerson have been analyzing the sentiment results using classifications from the (slightly less accurate) support vector machine. Once the CNN is finished, their code will be rerun on the more accurately classified data. Emerson created an interactive map in Leaflet and D3 that lets users visualize and inspect any charger location in North America. The map will help identify possible trends in the data that can later be tested for statistical significance. Arielle has been building models in R to identify which factors are associated with a location having more positive or negative sentiment. While results are still preliminary, it appears that the day of the week is a predictor of whether a location will have more negative reviews!
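Arielle’s models are written in R, but to give a flavor of the analysis, here is a rough Python sketch of a comparable logistic regression asking whether day of week predicts negative reviews. The file name and column names (is_negative, day_of_week, station_type) are hypothetical placeholders, not our actual variables.

```python
# Rough Python sketch of the kind of model described above: a logistic
# regression testing whether day of week predicts negative reviews.
# The file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("reviews_with_sentiment.csv")  # assumed input file

# Day of week as a categorical predictor, analogous to an R factor.
model = smf.logit("is_negative ~ C(day_of_week) + C(station_type)", data=reviews).fit()
print(model.summary())
```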

Lastly, we are preparing a working paper for the Bloomberg Data for Good Exchange! Our abstract is due this Sunday and the paper next week, so we are writing away to submit our best work.

By next week, we hope to have our IRB application approved so that we can launch our surveys as soon as possible!

A map of EV charging station reviews in the LA metro area. Larger circles indicate more reviews, and greener circles indicate more positive sentiment in the reviews for that station.

Electric Vehicle Infrastructure: Week 4

While week three was filled with quick wins, week four has been a slow slog toward progress on critical objectives. Much of this week was spent realigning with our faculty mentor, Professor Asensio, on what is needed for our review-categorization ML training set. After long discussions, we’ve decided to pivot away from MTurk toward two different tools: Qualtrics and PlugInsights. Qualtrics offers a crowdsourcing platform with higher participant demographic fidelity than MTurk. PlugInsights is a spin-off crowdsourcing platform from PlugShare, the company that provided us with the original dataset. The bonus of PlugInsights is that all of its participants are EV drivers, immediately lending them credibility in understanding the nuances of the reviews we would ask them to classify.

In terms of sentiment analysis, after additional feature augmentation and hyper-parameter tuning, we’ve reached the peak of feasible performance with SVMs. We spent time this week exploring neural-network-based learning algorithms and understanding how one could be properly implemented for our domain-specific problem.
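For context, hyper-parameter tuning for a text-classification SVM typically looks something like the scikit-learn sketch below; the TF-IDF features and parameter grid are illustrative assumptions rather than our exact pipeline.

```python
# Illustrative sketch of SVM hyper-parameter tuning for text sentiment
# classification. Feature settings and the parameter grid are assumptions
# for illustration, not the team's exact configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),   # turn raw review text into TF-IDF features
    ("svm", LinearSVC()),           # linear support vector machine
])

param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],   # unigrams vs. unigrams + bigrams
    "svm__C": [0.1, 1, 10],                   # regularization strength
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
# Assuming `texts` is a list of review strings and `labels` their sentiment:
# search.fit(texts, labels)
# print(search.best_params_, search.best_score_)
```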

We also tried an SVM for the review classification problem on a small training set of 1,300 reviews. We reached roughly 50% accuracy, which is reasonable given the difficulty of multi-label data and the size of our training set, but here again we’ve decided to look toward other methods.
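For reference, a typical scikit-learn setup for this kind of multi-label problem pairs a one-vs-rest wrapper with a linear SVM, as in the hypothetical sketch below; the toy reviews and category names are stand-ins, not our actual data.

```python
# Sketch of a multi-label review classifier: each review can carry several
# category labels at once. Reviews and category names here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

texts = ["Charger was broken again", "Fast charge, easy parking"]   # toy examples
labels = [["functionality"], ["charging_speed", "location"]]        # hypothetical categories

# Convert label lists into a binary indicator matrix for multi-label learning.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", OneVsRestClassifier(LinearSVC())),  # one binary SVM per label
])
clf.fit(texts, y)

print(mlb.inverse_transform(clf.predict(["No spots open and the screen was dead"])))
```

We’re excited to see where our new plans will lead us!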