I’m currently reading Digital Humanitarians by Patrick Meier, an excellent exploration of the citizen-driven response to emergencies in cyberspace. Today, I was reading about the integration of Artificial Intelligence (AI) into the digital response and wanted to share some thoughts. If you are unfamiliar with the concept of digital humanitarianism, see my previous blog post here.
In this chapter, Meier explains the application of AI to the Syria Crisis Map. Combining crowdsourced human intelligence with automated data mining, the map repurposes HealthMap (a data-mining platform that tracks and geo-locates disease outbreaks) to depict chemical attacks and documented killings during the crisis. At present, 1,529 reports of human rights violations, including a total of 11,147 killings, have been identified. The New Scientist describes the map as the most accurate estimate to date of the death toll in Syria.
He then addresses the issue of resource matching during disasters, i.e. matching calls for support with offers of support on social media. During the Oklahoma City tornado in 2013, they employed the PhD research of Purohit, which uses AI algorithms similar to those used by online dating websites. The approach involves collecting a sub-set of tweets, developing classifiers for categorizing this data, and teaching the system how to read and interpret tweets using these classifiers. Although effective in Oklahoma, attempts to apply this technique to subsequent disasters showed that classifiers are a) specific to context, b) specific to the disaster, and c) time- and resource-intensive to prepare for each type of disaster. In response to this need, they developed and implemented AIDR, the Artificial Intelligence for Disaster Response portal. This tool allows users to bank a series of tweets based on a series of search tags and develop their own classifiers for identifying and categorizing tweets of importance, while also filtering out garbage data. Useful tweets may include those that refer to shelter, food, supplies, donations, etc.
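To make the idea of "developing classifiers for tweets" concrete, here is a minimal sketch of the kind of supervised classification involved: a tiny naive Bayes model trained on hand-labeled tweets, which then sorts a new tweet into a category (shelter, food, or garbage). This is a toy illustration with invented training examples, not AIDR's actual pipeline, which operates at far larger scale with human labelers in the loop.

```python
import math
from collections import Counter, defaultdict

# Toy hand-labeled tweets -- hypothetical examples, not real AIDR training data.
TRAINING_DATA = [
    ("need shelter for family of four", "shelter"),
    ("tents and blankets required downtown", "shelter"),
    ("we are out of food and water", "food"),
    ("donations of canned food welcome", "food"),
    ("check out my new mixtape", "garbage"),
    ("buy cheap sunglasses now", "garbage"),
]

def tokenize(text):
    return text.lower().split()

def train(data):
    """Count word frequencies per label (the 'teaching' step)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in data:
        label_counts[label] += 1
        for word in tokenize(text):
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Return the most probable label via naive Bayes with add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts, vocab = train(TRAINING_DATA)
print(classify("urgent need for shelter and blankets", word_counts, label_counts, vocab))
# → shelter
```

Note that everything here hinges on the labeled examples, which is exactly Meier's point: the labels and vocabulary are specific to one event and one language, so each new disaster demands fresh human labeling effort.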
In reading this chapter, a couple of questions came to mind.
First, despite great leaps and bounds, developing classifiers specific to each context and hazard remains a human-resource issue. Is there a way to apply an all-hazards approach, rather than a hazard-specific one, to classifying tweets? I.e. is it possible to generate a list of classifiers that could be applied across all forms of hazards as a baseline? Thinking more meta: would this be possible with AI itself? AI can learn how to tag and compartmentalize specific contents of data; is AI also capable of analyzing the classification mechanisms used across cases to generate a streamlined categorization system? One of the issues identified is language. As Meier describes, requests for shelter in English are expressed differently in Spanish. Is there a way to leverage existing translation systems to automatically generate some of these classifiers independent of language?
Second, as we allocate time to teach AI systems how to classify tweets, perhaps we can also take the time to help people learn how to classify their own tweets. Building on the work of developing hashtags for emergencies, emergency responders could leverage existing media outlets and collaborate with response organizations to educate the population on how to further structure their reports for faster response. For example, just as we can use hashtags to identify the general nature of a tweet, i.e. #EbolaResponse vs. #EbolaNeed to indicate whether the tweet pertains to a need or a response activity related to the Ebola outbreak, perhaps we can also educate citizens to categorize the specifics of their tweets a bit further.
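One appeal of such a convention is that structured hashtags can be parsed mechanically, with no trained classifier at all. As a sketch, assuming a hypothetical `#<Event>Need` / `#<Event>Response` naming pattern like the one above (this exact convention is my illustration, not a standard):

```python
import re

# Hypothetical convention: hashtags of the form #<Event>Need or #<Event>Response.
TAG_PATTERN = re.compile(r"#(\w+?)(Need|Response)\b")

def parse_structured_tags(tweet):
    """Extract (event, category) pairs from hashtags that follow the convention."""
    return [(event, category.lower()) for event, category in TAG_PATTERN.findall(tweet)]

print(parse_structured_tags("Water running low at the clinic #EbolaNeed"))
# → [('Ebola', 'need')]
```

A rule this simple only works if citizens actually adopt the convention, which is precisely why the education effort described above would matter.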