
AI for Good: Applications in Sustainability, Humanitarian Action, and Health - Lavista Ferres, Juan M.; Weeks, William B.; & Smith, Brad


Matheus Puppe · 55 min read

“If you liked the book, you can purchase it using the links in the description below. By buying through these links, you contribute to the blog without paying any extra, as we receive a small commission. This helps us bring more quality content to you!”

BOOK LINK:

CLICK HERE

  • The author leads Microsoft’s AI for Good Lab, which aims to use AI and data science to help solve social problems and help not-for-profit organizations.

  • The Lab provides cloud credits and data science expertise to organizations working on important issues like health inequities, climate change, and humanitarian crises.

  • The goal of the Lab and this book is to share knowledge and approaches so others can apply similar methods to address world challenges.

  • As a doctor, economist, and father, the author worries about issues like health disparities, climate change, and leaving the world worse off for future generations.

  • He has hope that AI tools, when applied ethically and with expertise, can empower people and communities to solve problems and avoid harms.

  • The author discusses lessons learned: that impact requires collaboration with experts, that there are tradeoffs between simple and complex solutions, and that the focus should be on problems rather than on tools or publications.

  • The overall message is a call to action for data scientists and academics to direct their skills toward addressing real-world issues, not just clicks or citations. The societal impact should be the primary measure of success.

  • The intent of the book is to inspire readers by sharing real-world examples of how AI and data science can be applied to address pressing social and environmental problems. It aims to engage readers in a discussion about ethically directing technology for good.

  • Part I provides a primer on AI and machine learning for non-experts. It defines key terms and practices to help understand the case studies presented in later chapters.

  • Chapter 1 defines AI as the ability for machines to learn and apply knowledge. It traces the history and development of the field. Examples are given of how AI is already used by companies to analyze user data and make product/content recommendations.

  • The chapter argues that AI tools, which are effective at driving commercial behavior, could also be applied to problems of social good through objective research. It outlines three ways: 1) facilitating data collection, 2) analyzing data to gain new insights, and 3) using insights to help improve lives at scale.

The overall summary is that the book aims to inspire readers by presenting real-world examples of how AI can be responsibly used to help address important societal and environmental challenges, while engaging in a discussion about the ethical application of emerging technologies.

  • AI, specifically machine learning, can use large amounts of data to discover complex patterns and relationships that would be impossible for humans or traditional programming to handle. This capability has become highly valuable with the recent growth in available data and computing power.

  • Machine learning works by having algorithms generate rules and patterns based on example data, rather than having human programmers manually specify rules. This allows ML to be applied to problems too complex for explicit human programming.

  • Factors like Moore’s Law led to massive decreases in data storage and processing costs over recent decades, enabling the current AI revolution. Storing large datasets and running complex algorithms is now inexpensive.

  • AI can solve problems that were previously intractable, like handwriting recognition. It is also uniquely capable of handling issues like large-scale disease screening that are simply too big for human resources alone to address effectively.

  • Examples are given of how AI can assist in early detection of diseases like diabetic retinopathy, which could help address growing healthcare needs in a more scalable way than relying solely on limited doctor resources.

Artificial intelligence enables problem-solving capabilities that traditional programming cannot match and promises worldwide scalability. However, there are also important challenges and lessons to consider when applying AI to real-world problems. Models can pick up biases from flawed or incomplete data if not properly analyzed. While predictive power is useful, AI cannot determine causation from data alone. If historical data reflects past discrimination, algorithms may inadvertently discriminate as well. Models can also take shortcuts in learning or fail to generalize beyond the specific data they were trained on. Overall, the quality and representativeness of the training data is paramount for ethical and effective AI, and its limitations must be clearly understood. With care and oversight, AI holds great potential, but achieving its full benefits requires addressing these complex issues.

  • Machine learning models and AI systems can be gamed or manipulated if people understand their mechanics and incentives. An example from colonial India shows how a policy to reduce snake bites backfired when people started breeding snakes for the bounty.

  • Voice cloning technology that helps ALS patients can also be misused for financial crimes, as criminals have used AI to impersonate voices for sophisticated heists. This shows the dual nature of technology.

  • Financial models and credit ratings used in the run up to the 2008 crisis created an illusion of certainty and underestimated risks, contributing to the housing crash. Overreliance on models without understanding limitations can be dangerous.

  • Collaborating with subject matter experts is crucial for using AI to solve complex problems. Data scientists alone may lack important domain knowledge, as shown by examples in COVID modeling and premature baby survival analysis.

  • In summary, while AI has potential, its limitations must be acknowledged. Models can be gamed, data may be imperfect, and causality is difficult to determine without expert knowledge. Collaboration is key to responsibly applying AI for social good.

  • Advanced AI models like GPT have sparked huge interest in AI among the general public and governments. Searches for “AI” increased 900% after models like GPT-3 were introduced.

  • The capabilities of these large language models, especially their ability to generate human-like text, represent a major leap forward in AI and have started a new era of widespread AI development and adoption.

  • However, these models are still limited in that they do not truly understand language or have consciousness - they rely on complex mathematical computations and pattern matching based on huge amounts of training data.

  • A key limitation is an inability to distinguish fact from fiction or determine the truthfulness of information. Bias or flaws in training data can affect the model’s responses.

  • While impressive, LLMs do not actually comprehend the meaning or context of what they generate. Their human-like responses are based on statistical patterns rather than real understanding.

  • At the same time, LLMs have useful applications like assisting non-native English speakers and increasing access to knowledge. Overall they are powerful tools that could have significant positive societal impacts if developed responsibly.

  • Machine learning projects involve obtaining and evaluating datasets, then splitting the data into training, validation, and test sets.

  • The training set is used to develop a model through supervised or unsupervised learning techniques. This involves iterative experimentation to optimize the model.

  • The validation set is used to test the model’s performance and refine it, such as by removing unnecessary variables. This optimizes the tradeoff between accuracy and complexity.

  • The test set provides an objective evaluation of how well the final model can predict outcomes on unseen data.

  • Common performance measures used on the test set include accuracy, precision, recall, F1 score, and ROC AUC.

  • The goal is to develop a model that generalizes well based on the test set results, without overfitting to the training data. This validated, optimized model could then potentially be deployed for applications.

In summary, machine learning projects follow a structured process of data splitting, model training/refinement, and objective evaluation to develop models that perform well on new data.
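
To make the workflow above concrete, here is a minimal sketch using scikit-learn and a synthetic dataset; the model choice, split ratios, and dataset are illustrative assumptions, not anything prescribed by the book.

```python
# Minimal train/validation/test workflow with the metrics named above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Split once into training+validation and a held-out test set,
# then split again to carve out the validation set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

# Train on the training set; use the validation set to compare candidate models.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Evaluate the chosen model once on the unseen test set.
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("ROC AUC  :", roc_auc_score(y_test, y_prob))
```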

  • Researchers developed a deep learning library called TorchGeo to process geospatial data from satellites for remote sensing applications.

  • TorchGeo allows loading data from various satellite datasets, sampling of geospatial data, and preprocessing functions. This makes deep learning models more feasible for satellite imagery which has different collection methods/resolutions.

  • The goal is to realize the potential of deep learning for monitoring activities on Earth like agriculture, urban planning, disaster response, and climate change research using satellite data.

  • Deep learning can process satellite data at a fine-grained level, compare longitudinal data elements, and calculate relationships to recognize patterns and potentially anticipate future states. This helps policymakers address issues like climate change.

  • Examples in the full text apply AI/machine learning to satellite imagery and sensors to identify places that contribute to/mitigate climate change, understand animal behavior, and forecast solar panel degradation.

So in summary, the library enables using deep learning with geospatial/satellite data to analyze patterns and changes over time for applications in sustainability, climate monitoring, disaster response, and more.

  • Satellite imagery provides important data for monitoring climate change impacts and human activities that influence climate change, like land use, urban development, and renewable energy production.

  • There are many different satellite systems collecting multi-spectral data at varying resolutions, making it challenging to integrate and analyze the diverse data sources.

  • The TorchGeo library developed reproducible benchmarks and pre-trained models to facilitate satellite imagery research when labeled data is limited. It addressed issues like non-aligned data from different sources and resolutions.

  • Benchmark results and preprocessing functions in TorchGeo allow researchers to focus on analysis rather than redundant data processing steps. This can accelerate insights into monitoring and mitigating climate change.

  • By streamlining data analysis, TorchGeo helps reduce unnecessary computation and its associated energy/climate impacts. It also makes more datasets accessible for humanitarian and sustainability applications.

  • The key learning was that a unified, tested and multi-disciplinary approach can provide effective and reproducible solutions to common geospatial data challenges in a way that supports broader climate and social impacts.

So in summary, TorchGeo develops AI tools to help integrate diverse satellite imagery in a standardized way, supporting more efficient and accessible climate and environmental research.
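
As a rough illustration of the pattern TorchGeo enables, the sketch below follows the library's documented usage of combining a raster imagery dataset with a label dataset and sampling geospatial patches; the dataset classes, paths, and argument names are assumptions that may differ across TorchGeo versions.

```python
# Combining misaligned geospatial datasets and sampling patches with TorchGeo.
from torch.utils.data import DataLoader
from torchgeo.datasets import CDL, Sentinel2, stack_samples
from torchgeo.samplers import RandomGeoSampler

imagery = Sentinel2(paths="data/sentinel2")   # multi-spectral satellite tiles (placeholder path)
labels = CDL(paths="data/cdl")                # Cropland Data Layer land-cover labels (placeholder path)

# Intersecting the two datasets aligns imagery and labels on the fly,
# handling differing resolutions and coordinate reference systems.
dataset = imagery & labels

# Sample random 256x256-pixel geospatial patches for training.
sampler = RandomGeoSampler(dataset, size=256, length=1000)
loader = DataLoader(dataset, batch_size=8, sampler=sampler, collate_fn=stack_samples)

for batch in loader:
    images, masks = batch["image"], batch["mask"]
    # ...feed into a segmentation or classification model...
    break
```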

  • The researchers developed an AI-informed modeling approach to understand nature-dependent tourism activities in 5 small island nations in the Eastern Caribbean. They analyzed user-generated data from sites like Flickr, eBird, and TripAdvisor to identify popular locations for activities like snorkeling, beaches, wildlife viewing, etc.

  • They incorporated additional local data from tourism operators and governments to enhance the models. Maps were generated showing the intensity and value of different nature-based tourism activities.

  • Estimates showed that coral reef and beach activities accounted for 8% and 22% of tourism expenditures respectively across the countries. However, cruise visitors, who make up 70% of visitors, only accounted for 14% and 7% of time spent on reef/beach activities due to short stays.

  • The models and maps provide insights on where to focus conservation efforts to preserve natural resources that drive tourism economies. It also shows how expanding access to underutilized areas could increase revenues.

  • Considering climate change impacts, even modest environmental degradation could significantly reduce tourism receipts from nature-based activities, highlighting the importance of sustainable resource management.

  • Overall, the scalable approach demonstrates how data and AI can inform sustainable tourism planning by quantifying the value of natural assets that tourism depends on.

Here are the key points about the wildlife bioacoustics detection methods used:

  • A multi-modal contrastive learning approach called CLAP was developed that integrates audio features and text features in a single model. This allows the model to learn relationships between sounds and semantic concepts instead of relying on predefined labels.

  • Contrastive learning maximizes similarity between audio and text embeddings (features extracted by neural networks) while minimizing similarity between mismatched pairs. This aligns the representations from different modalities.

  • CLAP enables “zero-shot transfer” where categories can be defined during inference using natural language, not just predefined labels. This provides more flexibility than supervised learning.

  • Two different audio feature extractors were used - PANN (CNN-based) and HTS-AT (Transformer-based) - and compared to a supervised ResNet-18 baseline.

  • Average Precision was used to evaluate model performance on group-level recognition tasks across 9 bioacoustics datasets, including some the models were not trained on.

  • Three versions of CLAP differing in pre-training data scale were evaluated to test its ability to adapt to new bioacoustics tasks even without similar training data.

So in summary, CLAP uses multi-modal contrastive learning to correlate sounds and text, eliminating the need for predefined labels and enabling more flexible zero-shot recognition compared to supervised learning baselines.
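
The core of the contrastive objective described above can be sketched in a few lines of PyTorch. This is a generic CLIP/CLAP-style symmetric loss with stand-in embeddings, not the actual CLAP architecture, temperature, or training data.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """audio_emb, text_emb: (batch, dim) embeddings of matched audio-text pairs."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity of every audio clip with every caption in the batch.
    logits = audio_emb @ text_emb.t() / temperature

    # Matched pairs lie on the diagonal; mismatched pairs act as negatives.
    targets = torch.arange(audio_emb.size(0))
    loss_a2t = F.cross_entropy(logits, targets)        # audio -> text
    loss_t2a = F.cross_entropy(logits.t(), targets)    # text -> audio
    return (loss_a2t + loss_t2a) / 2

# Zero-shot inference: score a clip against embeddings of natural-language
# class descriptions and pick the closest one.
def zero_shot_predict(audio_emb, class_text_embs):
    sims = F.normalize(audio_emb, dim=-1) @ F.normalize(class_text_embs, dim=-1).t()
    return sims.argmax(dim=-1)
```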

  • Monitoring whales and other marine mammals is challenging due to their large migratory ranges and remote ocean habitats that are difficult for ships and planes to access.

  • The Geospatial Artificial Intelligence for Animals (GAIA) initiative aims to use advanced satellite imagery and AI to monitor whale populations from space. This could provide a more comprehensive view of whale distribution across vast ocean areas.

  • Very high-resolution commercial satellite imagery now enables identifying individual whale species. AI tools can analyze large volumes of satellite data to detect and count whales.

  • Initial challenges included lack of labeled satellite whale imagery for training AI models. Partners collaborated to manually label some images as a start. Other challenges were variable image quality and occlusion of whales.

  • Lessons learned were the need for more diverse labeled data, techniques to handle data variability, and validation of AI detections against other data sources like acoustic sensors.

  • With further research, satellite and AI monitoring could provide near real-time surveillance of whale populations globally. This could help conservation by better understanding threats from climate change, pollution, ship strikes in remote habitats.

  • The GAIA collaboration demonstrates how public-private partnerships can combine expertise to tackle biodiversity monitoring problems through innovative technological solutions.

  • The study analyzed the social networks and behaviors of 1,081 individually identified giraffes over 5 years using AI-identified datasets.

  • Giraffes form social groups but can move between groups. Males mate with multiple females. Sex and age were believed to influence social connectedness and movement between communities.

  • Researchers hypothesized adult males and younger animals of both sexes would show greater social connectedness and movement (“betweenness”) across groups compared to adult females, which form stronger relationships.

  • Using network analysis, 4 distinct mixed-sex “super-communities” were identified, along with intermediate-level female-only communities.

  • Around 70% of giraffes remained within their original super-community. Adult males had higher betweenness than adult females, supporting hypotheses.

  • Younger animals had more social connections than adults, also supporting hypotheses. This suggests climate change impacts on habitat could influence behaviors and reproduction by altering social networks.

  • The study demonstrated a reproducible way to analyze animal social behaviors and track changes over time using large AI-identified datasets, important for understanding potential climate change impacts.

Here are the key points from the summary:

  • The study examined the social networks and movements of giraffe populations in northern Tanzania using photo identification data collected over 5 years.

  • Giraffes were classified into calves, sub-adults, and adults based on age. Social network analysis was used to analyze connections between individuals.

  • Metrics like degree, closeness, and betweenness centrality were calculated to assess social connectedness within and between different age/sex classes.

  • Four distinct “super-communities” were identified that had stable structures but overlapped spatially.

  • Adult males had higher social centrality scores than females and transitioned among super-communities twice as often, reflecting their roaming reproductive strategy.

  • Young males had the most social ties and moved most frequently, attributed to social exploration prior to dispersing from their natal group.

  • The findings provide insight into how giraffe social associations and movements vary by age and sex due to differing life history strategies.

  • This has implications for conservation, as translocating giraffes without considering social structures could disrupt social groups and reduce fitness.

So in summary, the study analyzed the complex social networks of giraffes and how connectivity varies between demographic classes in relation to their life histories and reproductive strategies. This increases understanding of giraffe social dynamics, which is important for conservation planning.
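
The centrality metrics named above are standard network measures; the toy NetworkX example below shows how they, along with a community detection step analogous to finding "super-communities", are computed, using a random graph rather than the giraffe data.

```python
import networkx as nx
from networkx.algorithms import community

G = nx.erdos_renyi_graph(n=100, p=0.05, seed=42)  # stand-in for an association network

degree = nx.degree_centrality(G)            # how many direct social ties an individual has
closeness = nx.closeness_centrality(G)      # how near an individual is to everyone else
betweenness = nx.betweenness_centrality(G)  # how often it bridges otherwise-separate groups

# Community detection yields clusters analogous to the "super-communities".
communities = community.greedy_modularity_communities(G)
print(f"found {len(communities)} communities")
print("most 'between' node:", max(betweenness, key=betweenness.get))
```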

Here is a summary of the key points about the giraffe community analysis from the passage:

  • Researchers studied over 1,000 giraffes in Tanzania’s Tarangire ecosystem over 5 years to analyze their social networks and movements.

  • Both static and dynamic network clustering analyses identified 4 distinct “super-communities” of giraffes, surprisingly mirroring geographic areas despite not using location data.

  • On average, each giraffe was socially connected to 65 others. Male giraffes had more social interactions than females, especially among calves.

  • 70% of giraffes stayed within their original super-community, 27% ventured to one new one, and 3% visited all three examined super-communities. Adult male movement was higher.

  • Males had higher “closeness centrality” and connectedness than females, indicating closer network ties. Calves had higher “betweenness centrality” than adults, showing more influence over information flow.

  • The researchers discovered multiple distinct adult female-only social communities within each super-community.

  • Male movements covered a more diffuse area and were less constrained than females. Males traveled 1.5x farther than females and calves.

  • The study revealed the giraffes have a complex multi-level social organization with fluid groups nested within female communities and larger super-communities. Male roaming strategies help connect this social structure.

  • Wildlife-livestock conflicts in Kenya’s Maasai Mara are intensifying due to growing human populations, resource competition, and climate change effects. This threatens endangered species and pastoralist livelihoods.

  • When predators kill livestock, pastoralists sometimes retaliate by poisoning wildlife, perpetuating a destructive cycle. Targeted interventions are needed to safeguard both livestock and wildlife.

  • The researchers collaborated with Kenya Wildlife Trust and Smithsonian to develop a data-driven solution using satellite imagery and machine learning.

  • Convolutional neural networks were trained to identify boma settlements and detect cattle presence from high-resolution satellite images.

  • The models achieved good performance, with a Jaccard coefficient of 0.63 for semantic segmentation of bomas/buildings and an F1 score of 0.97 for classification of boma occupancy (a short Jaccard/IoU sketch follows this list).

  • Mapping settlements and detecting potential conflict hotspots based on livestock presence can help direct conservation initiatives like predator-proof bomas and anti-poisoning education.

  • This work has potential to inform wildlife conflict mitigation strategies beyond Maasai Mara and support broader sustainability efforts. Accounting for population, resources, and communities is key.
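
The Jaccard coefficient reported for the boma segmentation is simply the intersection over union of the predicted and true masks; a generic sketch (not the study's code) is below.

```python
import numpy as np

def jaccard_index(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union > 0 else 1.0

# Example: two overlapping square "boma" footprints.
pred = np.zeros((100, 100)); pred[20:60, 20:60] = 1
true = np.zeros((100, 100)); true[30:70, 30:70] = 1
print(round(jaccard_index(pred, true), 3))  # ~0.39 for this toy overlap
```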

  • The researchers developed deep learning models to detect poultry CAFOs (concentrated animal feeding operations) from aerial imagery across the US. This addressed limitations of relying solely on self-reported census data, which is collected only every 5 years and may miss unreported farms.

  • They trained a convolutional neural network model on labeled aerial imagery of the Delmarva Peninsula that identified poultry barns. This model was then applied more widely.

  • To improve generalization, the model used data augmentation techniques like random rotation and flipping of images (see the sketch after this list). Temporal augmentation also allowed identifying new CAFOs and estimating construction dates.

  • A rule-based filter removed false positives from the model predictions. Validation compared predictions to a hand-labeled California dataset.

  • The result was the first national open-source dataset of poultry CAFO locations, which can help regulators identify and prioritize farms for efficient monitoring of waste management practices and their environmental impacts. This supports efforts to curb agriculture’s disproportionate contributions to climate change and water/air pollution.
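
The rotation/flip augmentation mentioned above can be expressed with torchvision transforms; the exact recipe used for the CAFO model is an assumption here, and this shows only the general pattern.

```python
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=90),   # aerial imagery has no canonical "up"
    transforms.ToTensor(),
])

# Applied to each aerial image tile during training, so the model sees many
# orientations of the same barn and generalizes beyond one region's layouts.
```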

  • The researchers developed a deep learning model to identify and map existing solar farms across India using satellite imagery. Their goal was to better understand the land use impacts of solar energy development.

  • The model was able to identify 1,363 solar farms in India, including 1,035 that had not been previously mapped. It achieved an average accuracy of 92% in identifying solar farms.

  • Analysis of the locations found that around 7% of solar developments occurred in habitats important for biodiversity and carbon storage. Nearly two-thirds were situated on agricultural lands.

  • Mapping existing solar farms can help policymakers and developers make more informed decisions on siting future projects to balance renewable energy needs with other land use considerations like ecosystem preservation and agriculture.

  • The model could potentially be applied globally to track countries’ progress on their climate commitments and renewable energy targets based on actual solar farm deployment on the ground.

  • Spatial data on solar installations is important to proactively identify and mitigate any potential conflicts between renewable energy development and other land uses valued by people and nature.

In summary, the researchers developed a deep learning tool to better map and understand the land use impacts of India’s large-scale solar energy expansion for climate change mitigation purposes. The aim was to balance renewable energy goals with biodiversity and agricultural land needs.

  • The authors mapped glacial lakes in the Hindu Kush Himalayas, which are susceptible indicators of climate change impacts and pose flood risks downstream. Accurate mapping over time is important for risk assessment.

  • They compared different machine learning approaches (U-Net, morphological snakes) using Landsat labels to guide segmentation of higher resolution Sentinel-2 and Bing imagery.

  • The historically guided U-Net and properly initialized morphological snakes models achieved 8-10% better accuracy than existing U-Net approaches based on intersection over union scores.

  • Error analysis highlighted strengths and limitations of each method.

  • Visualizations were designed to facilitate discovery of lakes of potential concern. An interactive interface was also developed.

  • The goal was to provide automated tools to support organizations monitoring these risks more efficiently than manual methods. All code was publicly released.

  • Accurate mapping of changing glacial lakes over time due to climate impacts is critical for assessing outburst flood risks and informing mitigation efforts to protect downstream communities.

  • Existing methods for mapping glacial lakes from satellite imagery like Landsat are labor intensive and don’t capture dynamic changes well due to low resolution and infrequent updates.

  • The researchers developed new methodologies for automated glacial lake mapping by incorporating historical data into semantic segmentation models and using higher resolution Sentinel-2 and Bing Maps satellite imagery.

  • They explored several machine learning approaches, including U-Net, historically guided U-Net, morphological snakes, and deep level set evolution models.

  • Experiments evaluated the models on metrics like intersection over union, precision, recall and Fréchet distance using recent and historical labeled imagery.

  • Models using Sentinel-2 data generally outperformed those using Bing Maps data. The morphological snake model also performed relatively well compared to the others.

  • While no single model was best on all metrics, historically guided models were more robust on complex segmentation tasks.

  • The new automated methods showed improved performance over existing techniques and have practical benefits for timely risk assessment of hazardous glacial lakes.
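
As a generic illustration of the morphological snakes idea (not the paper's historically guided pipeline or its satellite preprocessing), scikit-image's morphological Chan-Vese implementation can segment a synthetic "lake" as follows; function and argument names follow scikit-image's documented API but may vary slightly by version.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

# Synthetic "lake": a bright blob on a darker, noisy background.
image = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 1.0
image += 0.1 * np.random.default_rng(0).normal(size=image.shape)

# The initial level set matters a lot; here a checkerboard is used, whereas the
# paper initializes from a historical (e.g., Landsat-derived) lake outline.
init = checkerboard_level_set(image.shape, 6)
segmentation = morphological_chan_vese(image, 100, init_level_set=init, smoothing=2)
print("segmented lake pixels:", int(segmentation.sum()))
```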

Here are the key points about the methods used:

  • DeepDeg is a machine learning model developed to both forecast and explain degradation of solar panels over time.

  • It has two main components - a forecasting model and an explanation model.

  • The forecasting model uses initial hours of current-voltage degradation data to predict future current-voltage characteristics. It incorporates linear and non-linear techniques like auto-regressive models and CNNs.

  • The explanation model correlates the forecasted degradation trends with underlying physical or chemical factors that may be driving degradation.

  • It attributes time-dependent degradation to specific physical parameters like changes in shunt resistance, based on an analytical framework using the one-diode equivalent circuit model.

  • This allows DeepDeg to not only predict degradation accurately but also provide explainable insights into the physiochemical causes of degradation for design improvements and further research.

  • The model was trained and validated on a large dataset of organic solar cell stability tests to demonstrate its ability to characterize and forecast degradation.
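
The one-diode equivalent-circuit model mentioned above has a standard closed form; the sketch below evaluates it with series resistance neglected so the equation is explicit. All parameter values are illustrative assumptions, not values from the DeepDeg work.

```python
# General one-diode model:
#   I = I_ph - I_0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# With series resistance Rs = 0 the equation becomes explicit in I.
import numpy as np

def one_diode_current(V, I_ph=5.0, I_0=1e-9, Rsh=300.0, n=1.3, Vt=0.02585):
    """Cell current (A) at voltage V (V); Vt = kT/q at roughly 300 K."""
    V = np.asarray(V, dtype=float)
    return I_ph - I_0 * (np.exp(V / (n * Vt)) - 1.0) - V / Rsh

V = np.linspace(0.0, 0.75, 6)
print(np.round(one_diode_current(V), 3))
# Degradation appears as drifting parameters: e.g. a falling shunt resistance
# Rsh bleeds current and flattens the curve, which is the kind of physical
# attribution an explanation model can report.
```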

Here are the key points about using data and machine learning for humanitarian action:

  • Natural disasters like hurricanes, floods, earthquakes have increased in frequency and severity due to climate change, creating more emergency situations.

  • Emergency response has gotten better over time but still faces challenges of obtaining accurate, timely data on things like damage assessment and food/resource needs.

  • New approaches are using various datasets like satellite imagery, household surveys, longitudinal/historical data to rapidly and precisely identify humanitarian needs after disasters.

  • Satellite data can assess damage at scale but needs calibration with ground data. Combining multiple data sources provides a more holistic picture.

  • Machine learning models are going beyond binary classifications (damaged/not damaged) to provide nuanced assessments, e.g. degree/type of damage.

  • Historical and longitudinal data help establish baselines to better determine the actual impacts and needs created by each new disaster event.

  • The goal is to speed up response times, ensure resources are directed to where they are most needed, and allow aid to be tailored to diverse on-the-ground conditions.

  • As datasets and methods improve, technology and data science have large potential to enhance humanitarian action and support people affected by climate disasters. Timely, accurate needs assessments are critical for effective emergency response.

  • The chapter discusses how AI can help assess building damage after natural disasters using satellite imagery. This helps humanitarian organizations provide timely emergency response.

  • It develops a convolutional neural network (CNN) model using a technique called Siamese U-Net. The model takes pre- and post-disaster satellite images as input and outputs building segmentation and damage classification.

  • The model classifies damage into 4 levels - no damage, minor, major, destroyed. This provides a more granular assessment than binary classifications.

  • It was tested on real disaster data from the xView2 challenge to evaluate performance under operational emergency conditions.

  • Compared to winning solutions, it works much faster (3x faster than the fastest solution and over 50x faster than some) while still achieving accurate results (F1 scores of 0.74 for building segmentation and 0.60 for damage classification).

  • A web visualizer was also created to display imagery, building outlines and damage predictions to help emergency responders on the ground.

  • The aim is to provide rapid damage assessment at scale to coordinate effective humanitarian aid after major natural disasters.

  • The paper developed a convolutional neural network (CNN) model to analyze satellite imagery and identify damaged buildings after disasters, in order to help accelerate humanitarian emergency response.

  • The model performed building segmentation to identify building locations and multi-class classification to rate damage level from 1-4.

  • It was found to be very accurate for building segmentation (F1 score of 0.74) and outperformed the previous best model from the xView2 challenge.

  • The model was also three times faster than the fastest xView2 model and over 50 times faster than the winning xView2 model. With optimizations, it could process over 600 square km per hour.

  • A web-based visualizer was developed to display the segmentation and damage predictions overlaid on pre-and post-disaster imagery to facilitate response.

  • Overall, the model provided a faster, more cost-effective solution for damage assessment using satellite imagery compared to previous methods, with the goal of improving disaster relief and resource allocation.
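
A drastically simplified sketch of the Siamese idea (one shared encoder applied to pre- and post-disaster images, with fused features decoded into per-pixel damage classes) is shown below in PyTorch. It is a toy stand-in, not the actual Siamese U-Net: the depth, absence of skip connections, and class definitions are assumptions.

```python
import torch
import torch.nn as nn

class TinySiameseSegmenter(nn.Module):
    def __init__(self, num_classes=5):  # background + 4 damage levels
        super().__init__()
        self.encoder = nn.Sequential(   # shared weights for both time points
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, pre_img, post_img):
        f_pre = self.encoder(pre_img)
        f_post = self.encoder(post_img)            # same encoder = "Siamese"
        fused = torch.cat([f_pre, f_post], dim=1)  # compare the two time points
        return self.decoder(fused)                 # (batch, classes, H, W) logits

model = TinySiameseSegmenter()
pre = torch.randn(1, 3, 256, 256)
post = torch.randn(1, 3, 256, 256)
print(model(pre, post).shape)  # torch.Size([1, 5, 256, 256])
```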

Here are the key points from the summary:

  • The study developed a machine learning model to classify dwelling types from satellite imagery in order to assess natural hazard risk at a granular, household level in India.

  • A U-Net-based neural network was trained to perform multi-class semantic segmentation of buildings and identify their types.

  • A statistical risk scoring model was also developed using dwelling type classification and flood inundation data.

  • The models showed good performance when validated with post-disaster ground truth data from natural hazard events in India.

  • Having household-level risk indicators can help disaster response efforts target vulnerable areas and communities.

  • The approach has potential for adaptation to other locations by training on local data, to generate detailed risk models globally.

  • Classifying dwelling roof types provides an indicator of structural integrity important for assessing vulnerability to floods, cyclones, earthquakes etc.

  • The study findings could help drive preemptive disaster preparedness and response by communities at risk.

So in summary, the key importance is that the work developed a machine learning approach to granular natural hazard risk assessment, which has utility for better targeting disaster response and preparedness efforts.

  • The team used AI and satellite imagery to assess earthquake damage to buildings in Turkey from February 2023. They partnered with Turkey’s disaster management agency.

  • They focused on 4 cities near the earthquake epicenter and estimated building damage using high-resolution satellite images from before and after the quake.

  • An AI model was trained to identify buildings and classify damage levels by comparing pre/post images on a pixel level. Building footprints helped attribute damage estimates.

  • Across the 4 cities, they identified 3,849 damaged buildings impacting an estimated 160,411 people.

  • Kahramanmaraş was the most heavily affected, with 7.44% of its buildings damaged potentially impacting 148,388 people (24.3% of the city population).

  • Rapid damage assessment via AI and satellites can help emergency response by providing near real-time, granular information on the scale and location of impacts following disasters.

In summary, the team demonstrated how AI can be applied to satellite imagery to quickly assess building damage from natural disasters at a large scale, which aids emergency response efforts. Kahramanmaraş appeared to suffer the worst impacts from the earthquakes.

  • The study used machine learning models to predict food insecurity at the community and household levels in near real-time, using high-frequency household survey data collected through sentinel sites in Malawi.

  • Various machine learning models were compared, including neural networks, random forests, and convolutional neural networks. A random forest model performed best.

  • Location and self-reported welfare were found to be the best predictors of food insecurity at the community level.

  • When predicting household vulnerability, including a historical food insecurity score and 20 additional variables selected via explainability frameworks improved prediction accuracy.

  • The combination of high-frequency local data and machine learning provides a way to identify vulnerable households and improve humanitarian food relief programs. Near real-time predictions can help efficiently target scarce resources.

  • Existing food insecurity prediction systems often rely on geospatial and qualitative data and have mixed accuracy. This study showed machine learning on granular household data can potentially provide better predictions.

  • The approach aims to aid policy and programming by forecasting food insecurity risks at finer scales of communities and individual households.

  • The study used machine learning models to predict household food insecurity levels in rural areas based on survey data collected over time.

  • The outcome variable was the Reduced Coping Strategies Index (rCSI) score, which measures behaviors related to food insecurity.

  • The data was split randomly into training and test sets to build and validate the models. Feature engineering was done to prepare the data.

  • Random forest and logistic regression models were compared. Random forest consistently outperformed logistic regression across evaluation metrics (a minimal random-forest sketch follows this list).

  • Key predictive features included past rCSI scores, subjective well-being ratings, seasonal factors, experienced shocks, and social assistance. Location was also important.

  • The models could forecast rCSI scores up to 4 months in the future, allowing for anticipatory humanitarian responses.

  • Maps were generated to identify vulnerable households for targeted assistance. This helps move beyond larger area analysis to a more granular level.

  • The study demonstrated the potential of machine learning and high-frequency survey data to predict food insecurity at the community/household level.
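
A minimal random-forest pipeline of the kind described above might look as follows in scikit-learn; the synthetic variables (past rCSI, welfare rating, shocks, assistance) and their relationships are invented stand-ins for the survey data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "past_rcsi": rng.gamma(2.0, 5.0, n),          # previous coping-strategy score
    "subjective_welfare": rng.integers(1, 6, n),  # self-reported welfare (1-5)
    "lean_season": rng.integers(0, 2, n),         # seasonal indicator
    "shock_experienced": rng.integers(0, 2, n),
    "received_assistance": rng.integers(0, 2, n),
})
# Synthetic target: households with high past rCSI and shocks tend to be food insecure.
prob = 1 / (1 + np.exp(-(0.15 * df.past_rcsi + 1.2 * df.shock_experienced
                         - 0.8 * df.subjective_welfare)))
y = (rng.random(n) < prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
# Feature importances give a rough view of which variables drive the predictions.
print(dict(zip(df.columns, np.round(model.feature_importances_, 3))))
```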

  • The researchers collected a large dataset of banknote images called BankNote-Net to address the lack of comprehensive public datasets for training currency recognition models. It contains 24,826 images spanning 17 currencies and 112 denominations, approximating real-world conditions faced by visually impaired users.

  • They used supervised contrastive learning to train an encoder model that learns compressed, regulation-compliant embeddings of the banknote images. These embeddings capture important features while protecting privacy.

  • The trained encoder model and embeddings are shared publicly to enable training and testing specialized models for any currency, including those not covered in the original dataset.

  • A “leave-one-group-out” validation scheme on the embeddings demonstrated their ability to pre-train models for new currencies. This helps democratize machine learning solutions for currency recognition in assistive technologies.

  • The goal is to improve such technologies and thereby help visually impaired people engage in activities like commerce more independently through improved banknote recognition.
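
The "leave-one-group-out" scheme is a standard scikit-learn cross-validation strategy in which each currency would be held out in turn; the sketch below uses random placeholder embeddings and labels rather than the BankNote-Net data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, dim = 1200, 256
X = rng.normal(size=(n, dim))    # banknote embeddings (placeholder)
y = rng.integers(0, 10, n)       # denomination class (placeholder)
groups = rng.integers(0, 6, n)   # which currency each note belongs to

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
    held_out = np.unique(groups[test_idx])[0]
    print(f"held-out currency group {held_out}: accuracy {acc:.2f}")
# With random embeddings accuracy is near chance; with real embeddings this
# measures how well features transfer to a currency never seen in training.
```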

  • The authors curated a Broadband Coverage dataset that reports broadband coverage percentages for ZIP codes in the United States, using broadband coverage estimates collected from Microsoft Services.

  • Differential privacy methods were applied to preserve the privacy of individual households while allowing publication of the aggregated ZIP code-level estimates. The Laplace Mechanism was used, adding Laplace-distributed noise calibrated to an epsilon value of 0.1 (see the sketch below).

  • An empirical methodology was developed to calculate error range estimates for the broadband coverage at each ZIP code. This provides transparency on the expected error introduced by differential privacy.

  • Importantly, the authors show that calculating and publishing these error ranges does not induce any additional privacy loss beyond what is guaranteed by the differential privacy technique.

  • Making this broadband coverage and error range data publicly available can help policymakers and organizations better understand gaps in connectivity and target resources. But privacy of individuals had to be preserved, which the differentially private methodology achieves.

In summary, the key contributions are the new Broadband Coverage dataset for the US, the use of differential privacy to anonymize the data while allowing publication, and demonstration that calculating error ranges does not compromise privacy. This work aims to improve broadband access analyses and decisions while protecting individual privacy.
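
The Laplace mechanism itself is simple: each released aggregate gets noise drawn from a Laplace distribution with scale sensitivity/epsilon. The sketch below uses invented counts and a sensitivity of 1; the actual pipeline's sensitivity analysis and aggregation details are not shown.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    scale = sensitivity / epsilon   # smaller epsilon (more privacy) = more noise
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
epsilon = 0.1
sensitivity = 1.0                   # one household changes a count by at most 1

zip_code_counts = {"98052": 4210, "59715": 380, "04856": 97}  # invented counts
private_counts = {z: laplace_mechanism(c, sensitivity, epsilon, rng)
                  for z, c in zip_code_counts.items()}
print(private_counts)

# An empirical error range can be estimated by simulating the noise many times,
# which consumes no additional privacy budget because it touches no real data.
errors = rng.laplace(scale=sensitivity / epsilon, size=100_000)
print("95% of noise within ±", round(np.quantile(np.abs(errors), 0.95), 1))
```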

  • The Carter Center monitors the ongoing Syrian civil war by collecting and analyzing reports of conflict events from ACLED data. This helps inform humanitarian and peacebuilding efforts.

  • Manual classification of events into 13 incident types (e.g. clashes, shelling) is time-consuming and difficult to scale.

  • Researchers at The Carter Center trained a language model on a sample of their Syrian data to automate event classification.

  • The model achieved 96% accuracy on test data and 90% on out-of-sample data. It was also able to identify events involving multiple incident types.

  • This automation reduced the time needed to process data and allowed The Carter Center to produce more timely reports while scaling to more data.

  • The language model made a breadth of conflict datasets more accessible by automating manual data transformations.

  • This work demonstrates how natural language processing can be incorporated into peacebuilding by helping organizations like The Carter Center monitor conflicts more efficiently and gain deeper insights. Overall, it contributes to improved humanitarian and policy responses.
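
The Carter Center's system used a fine-tuned language model; as a much simpler, hedged illustration of the same multi-label setup (one report can involve several incident types), here is a TF-IDF plus one-vs-rest baseline on a few invented reports.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

reports = [
    "Artillery shelling was reported near the northern district overnight.",
    "Armed clashes broke out between two groups close to the checkpoint.",
    "Shelling followed by clashes displaced families from the village.",
    "An airstrike hit an empty warehouse on the outskirts of town.",
]
labels = [{"shelling"}, {"clashes"}, {"shelling", "clashes"}, {"airstrike"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)   # one indicator column per incident type

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
).fit(reports, Y)

pred = clf.predict(["Clashes and heavy shelling were reported in the old city."])
print(mlb.inverse_transform(pred))
```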

  • The Carter Center collaborated with researchers from Princeton to analyze how misinformation spreads online through users browsing from unreliable news sites to other untrustworthy content (i.e. falling down “rabbit holes”).

  • Through data analysis, they identified factors that contribute to this phenomenon, like browsing patterns of users who engage more with misinformation after initially encountering it.

  • There are stark differences in how users reach reliable vs unreliable news sites based on linking structures online. Engagement with these different types of sites also differs.

  • They then used these findings to develop a machine learning model to better identify other unreliable news sites in a practical, real-world application.

  • The goal is to shed more light on how misinformation proliferates online and then use that understanding to help combat the spread of untrustworthy content through technological solutions like automated fact-checking models.

In summary, the collaboration analyzed how misinformation spreads through user browsing behaviors, identified contributing factors, and applied those insights to create an ML model to detect unreliable news sites. The aim is to understand and address the proliferation of online misinformation.

  • The study analyzed browsing data and site ratings to understand patterns of traffic between reliable and unreliable news sites. It found unreliable sites are “stickier” in keeping users within their own sites through internal links.

  • Unreliable sites also tend to refer users to other unreliable sites, contributing to potential “rabbit holes” of misinformation. Reliable sites mostly refer to other reliable sources.

  • This motivated developing a machine learning model to identify unreliable sites based on incoming and outgoing traffic patterns. Features included proportions of traffic from/to reliable, unreliable, and other types of sites.

  • The model achieved 98% precision in classifying sites, outperforming similar prior work. Importantly, real-world testing validated the practical value of the proposed framework for identifying new misinformation sites at scale.

  • The research highlights the importance of understanding how digital technologies disseminate information, and the responsibility to investigate their impact, given opportunities from anonymized browsing data.

  • GitHub is a major platform for open data collaboration, hosting over 800 million open data files totaling 142 terabytes. This makes it one of the largest hosts of open data worldwide.

  • Sharing data openly on platforms like GitHub can accelerate AI research by allowing researchers to access larger datasets and build on each other’s work. This promotes experimentation and innovation.

  • The authors analyzed patterns of open data sharing on GitHub and found it has experienced rapid growth in recent years. They looked at usage trends to understand how open data is currently being utilized.

  • As an example, they openly shared three datasets they collected on GitHub to support their analysis. Releasing data openly can help improve its discoverability and reuse.

  • By examining the open data landscape, the goal was to empower users to leverage existing open resources and contribute to advances in AI. AI has significant potential to help address societal challenges if developed responsibly using open approaches.

  • Open data collaboration platforms like GitHub unlock this potential by facilitating data sharing between researchers and encouraging the development of new applications. This accelerates progress in using AI for good.

The key message is that openly sharing more data will help advance AI innovation and its ability to create beneficial applications, so platforms like GitHub that encourage open data are important to support.

  • GitHub is one of the largest open data platforms in the world, hosting over 11 million public data repositories containing over 800 million files. It plays an indispensable role in the open data ecosystem that supports AI development.

  • Open data access on platforms like GitHub is critical for unleashing the power of AI by providing researchers with large and diverse datasets to train and improve AI models. Many important AI datasets are hosted on GitHub.

  • An analysis of GitHub data files found the majority are in JSON and CSV formats, most have no clear license, and contributions have surged in recent years. The top contributors collectively posted over 40 million files.

  • While GitHub has search functions, only a fraction of repositories and data can currently be found this way. Improving discoverability would help users and organizations leverage the available datasets.

  • GitHub is a rich source of open data fueling AI advances, but the analysis had limitations like excluded file types and inability to fully validate every file’s content. Further analysis could provide more insights into this important open data platform.

In summary, the key findings relate to the scale of open data on GitHub, its importance for AI development, trends in file types and contributors, and opportunities to enhance discoverability of datasets. GitHub plays a major role in supporting open data and AI progress.

Here are the key points about detecting middle ear disease using artificial intelligence from the summary:

  • Middle ear infection is a major preventable cause of hearing loss in children, and can be detected via otoscopy (examination of the ear with an otoscope).

  • The researchers evaluated AI models developed using deep learning to identify normal vs abnormal eardrums from otoscopic images.

  • They tested models on independent image datasets from Turkey, Chile and the US, to assess generalizability/external performance beyond the training data.

  • Models showed high accuracy (AUC 0.95) when tested on the same dataset they were trained on, but lower accuracy (AUC 0.76) when tested on external/unseen data from different locations.

  • Combining all datasets and using cross-validation yielded better pooled performance (AUC 0.96).

  • More work is needed to improve the external generalizability of AI models for otoscopy, through techniques like data augmentation and preprocessing.

  • Accurately detecting middle ear infection via AI-assisted otoscopy could help address the large burden of hearing loss, especially in children.

This study evaluated the use of deep learning algorithms to classify otoscopic images as normal or abnormal. Over 1800 images from three different cohorts were used to train and test models. Four neural network architectures (ResNet-50, VGGNet-16, DenseNet-161, Vision Transformer) were evaluated.

Models performed well when trained and tested on images from the same cohort (“internal” validation), achieving high accuracy, sensitivity, and specificity. However, models tended to perform worse when trained on one cohort and tested on another (“external” validation), indicating issues with generalizability. External validation accuracy decreased significantly, though AUC scores remained moderate.

This suggests deep learning models for otoscopic images can accurately classify images from the same data source, but have limitations generalizing to new datasets with different characteristics. Factors like image quality, devices, ear conditions represented, and deep learning methods affected model performance. While promising, these algorithms require further refinement to be reliably applied in real-world clinical settings on varied patient populations.

  • Leprosy remains a public health issue, infecting around 200,000 people annually worldwide, particularly in India, Brazil, Indonesia, and Sub-Saharan Africa. Delayed diagnosis and treatment can lead to permanent nerve damage and disabilities.

  • The researchers used artificial intelligence models to help diagnose leprosy and differentiate it from other conditions using photos of skin lesions, clinical symptoms and exam findings, and patient demographic data.

  • Three separate machine learning models were developed - one for the photos, one for clinical data, and one for demographic data. The outputs of these models were then combined into an integrated model.

  • The photographic and clinical data models processed the image and symptom features to generate probability histograms of whether findings were consistent with leprosy.

  • The integrated model combined the outputs of the individual models along with patient data to provide an overall prediction of whether a patient likely had leprosy or not.

  • The combined model was then tested on a new group of unseen patients to validate its effectiveness in diagnosing leprosy in a real-world clinical setting.

  • The goal was to help expand access to diagnosis, particularly in remote and underserved areas, through an AI-assisted virtual examination using commonly available mobile phone cameras and internet connectivity.

  • The researchers developed a convolutional neural network (CNN) model to automatically detect and segment metastatic prostate cancer lesions in whole-body PET/CT images. The images were of patients with metastatic castration-resistant prostate cancer who received an injection of the [18F]DCFPyL radiotracer targeting prostate-specific membrane antigen (PSMA).

  • They trained the model on 418 training images and validated it on 30 images. It was then tested on 77 images to evaluate detection performance.

  • Using a weighted batch-wise Dice loss approach and including the first two neighboring axial slices improved lesion detection rates compared to the baseline model.

  • When including two neighboring slices and the weighted Dice loss, the model was able to detect 80% of lesions in the test images overall, and 93% of lesions with a [18F]DCFPyL standardized uptake value greater than 5.0.

  • Automated segmentation could help quantify total tumor burden for treatment decisions and monitoring, but performance depends on factors like lesion size, intensity and location. Specialized loss functions may further improve results.

In summary, the researchers developed a deep learning model for automated segmentation of metastatic prostate cancer lesions in PET/CT images, with promising detection results that could help clinical adoption of total tumor burden quantification.

  • The positron emission tomography (PET) images were 192x192 pixels, while the computed tomography (CT) images were 512x512 pixels. The CT images were downsampled to match the PET image resolution.

  • 525 PET/CT image pairs with up to 5 manually delineated lesions each were divided into training, validation and test sets for model development and evaluation.

  • Neural network models like U-Net with different backbone architectures and MA-Net were trained on the data. Adding neighboring slice information improved performance.

  • A weighted Dice loss function improved detection of small lesions but increased false positives compared to the standard Dice loss (a generic Dice loss sketch follows this list).

  • Models performed better for larger lesions, those with higher radiotracer uptake values and lesion activity. Small and low uptake lesions were harder to detect.

  • Lesions farther from the bladder were also more difficult to identify, and incorporating bladder context may help segmentation.

  • While models showed moderate success detecting large lesions, accurately segmenting small and low uptake lesions remains a challenge. Improving detection of low uptake lesions should also aid small lesion detection.

  • Future work should focus on custom loss functions, data augmentation and reducing false positives to better separate signal from noise, especially for challenging low uptake lesions. Larger and more diverse datasets could also help develop deeper insights.
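
A generic soft Dice loss of the kind discussed above can be written in a few lines of PyTorch; the per-lesion weighting from the paper is only hinted at via an optional weight map, and the shapes are assumptions.

```python
import torch

def soft_dice_loss(logits, targets, weights=None, eps=1e-6):
    """logits: (N, 1, H, W) raw scores; targets: (N, 1, H, W) binary masks."""
    probs = torch.sigmoid(logits)
    if weights is not None:                 # e.g. up-weight small lesions
        probs, targets = probs * weights, targets * weights
    dims = (1, 2, 3)
    intersection = (probs * targets).sum(dims)
    denom = probs.sum(dims) + targets.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()

logits = torch.randn(2, 1, 64, 64)
targets = (torch.rand(2, 1, 64, 64) > 0.95).float()   # sparse "lesion" voxels
print(float(soft_dice_loss(logits, targets)))
```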

  • The authors developed an AI-assisted solution to help screen premature infants for retinopathy of prematurity (ROP) in low-resource settings where pediatric ophthalmologists are rare.

  • Using a smartphone camera and low-cost magnifiers, videos of infants’ eyes were collected. A machine learning algorithm then selected the best quality retinal image frames from each video.

  • These cropped retinal images were analyzed by an image classification model to predict the probability of ROP presence. The full video and results were then shared with a pediatric ophthalmologist.

  • This allows for ROP screening by less skilled healthcare workers where no ophthalmologist is available, expanding access. The AI aims to efficiently flag high-risk cases for specialist review, helping optimize scarce resources.

  • The key components of the solution are: 1) a retinal image frame selector, 2) an ROP classifier, and 3) a mobile app for doctor interaction. The frame selector identifies and crops retinal images, while the classifier predicts ROP probability from these images.

  • The goal was to develop a low-cost, accessible screening process using easily obtainable video inputs to help detect ROP earlier in underserved populations. This demonstrates how AI can assist diagnosis and treatment in low-resource settings.

  • Researchers developed an AI-based mobile application to help screen infants for retinopathy of prematurity (ROP) in low-resource settings where ophthalmologists are scarce.

  • The app allows trained personnel to collect retinal videos using smartphones, upload them, and get immediate feedback on image quality.

  • The videos are processed to select the best retinal frames using a frame selection model. These frames are then classified by an ROP classifier model.

  • The app displays the selected frames, the model’s ROP probability for each frame, and asks ophthalmologists for feedback to further improve the model.

  • Evaluation on test data found the frame selection model could obtain high-quality frames 82.5-87.1% of the time. The ROP classifier achieved 97.8% accuracy at the frame level.

  • Compared to ophthalmologists, the model had higher sensitivity (correctly identifying true ROP cases), though it was less accurate and less specific overall.

  • The researchers conclude the app can help increase ROP screening in low-resource settings by liberating scarce specialists to focus on diagnosis and treatment. More validation is still needed.

  • The study used a large US medical billing claims dataset to identify common long-term diagnoses and symptoms (coded as ICD-10 codes) following COVID infection.

  • It used a self-controlled cohort design to compare ICD-10 codes in a post-COVID period to a pre-COVID control period, focusing on codes that significantly increased after COVID.

  • Logistic regression analyzed relationships between long-term effects and social/medical risk factors like age, gender, race, income, education level.

  • Over 1.37 million COVID patients were analyzed. 36 ICD-10 codes and 1 code combination were significantly more common post-COVID.

  • Age and gender were most commonly associated with long-term effects. Race impacted only “other sepsis” while income impacted only “Alopecia areata.” Education impacted only “Maternal infectious diseases.”

So in summary, the study used claims data and statistical methods to identify specific long-term diagnoses after COVID and analyze their relationships with various risk factors.

Here are the key findings from the study:

  • 36 ICD-10 codes were statistically significantly more prevalent in the post-COVID period compared to pre-COVID, including codes for ongoing pulmonary issues, cardiac/thrombotic complications, malnutrition, and post-viral fatigue syndrome.

  • Many of these codes remained elevated in the 3rd, 4th and 5th months post-COVID diagnosis, though some like acute myocarditis and pneumothorax seemed to resolve over time.

  • Older age was consistently associated with higher rates of long-term diagnoses, while gender was sometimes positively and sometimes negatively associated.

  • Social determinants of health like race, education, income, etc. were generally not associated with higher rates of particular long-term diagnoses.

  • The findings confirm issues seen in other studies like prolonged respiratory symptoms, malnutrition, and post-viral fatigue. They also show complications potentially from severe disease or hospitalization.

  • Commonly reported symptoms in literature like headaches and sleep issues were not elevated in medical claims data, possibly due to under-recording in this data type.

  • In summary, the study utilized a large cohort to identify several long-term health effects of COVID-19 captured through diagnostic codes, and found older age but not social determinants predict longer term issues.

  • Pancreatic cysts are common but difficult to manage, as not all require intervention and surgery risks are high. More accurate assessment of cyst type could improve outcomes.

  • The study developed an AI model called an explainable boosting machine (EBM) to help classify pancreatic cysts and guide management decisions (surgery, monitoring, discharge).

  • The EBM used a two-step process: one model using pre-biopsy data and a second incorporating cyst fluid analysis results (a minimal sketch of an EBM workflow follows this list).

  • Compared to clinical guidelines and an existing MOCA model, the EBM showed promising results in more accurately assessing cyst type and potentially reducing unnecessary surgeries by 59% while improving surgical intervention accuracy by 7.5%.

  • EBMs are well-suited for clinical use as they can integrate complex data, provide explanations for predictions, and give calibrated probabilities to aid decision-making.

  • The study demonstrates how AI may help address the complex challenge of pancreatic cyst management and optimize treatment outcomes and resource use.
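
For readers unfamiliar with EBMs, here is a minimal sketch of a two-step workflow using the open-source `interpret` package on synthetic data. The feature names, thresholds, and decision rule are hypothetical stand-ins, not the study's actual model.

```python
"""Minimal sketch of a two-step EBM workflow (illustrative only; not the study's model).

Feature names (`cyst_size_mm`, `main_duct_dilation`, `fluid_cea`) and the
decision thresholds are hypothetical.
"""
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(1)
n = 800
data = pd.DataFrame({
    "cyst_size_mm": rng.normal(25, 10, n),
    "main_duct_dilation": rng.binomial(1, 0.2, n),
    "age": rng.integers(30, 85, n),
    "fluid_cea": rng.lognormal(4, 1, n),        # cyst fluid analysis, only available post-biopsy
})
y = rng.binomial(1, 0.3, n)                      # 1 = cyst type warranting surgery (synthetic label)

# Step 1: pre-biopsy model using only imaging/clinical features.
pre_features = ["cyst_size_mm", "main_duct_dilation", "age"]
ebm_pre = ExplainableBoostingClassifier(random_state=0).fit(data[pre_features], y)

# Step 2: second model that also incorporates cyst fluid results.
ebm_full = ExplainableBoostingClassifier(random_state=0).fit(data, y)

# Calibrated probabilities from either model can feed a surgery / monitor / discharge rule
# (thresholds here are arbitrary).
p = ebm_full.predict_proba(data)[:, 1]
decision = np.select([p > 0.7, p > 0.2], ["surgery", "monitor"], default="discharge")
print(pd.Series(decision).value_counts())

# Per-feature shape functions explain each prediction - one reason EBMs suit clinical use.
explanation = ebm_full.explain_global()
```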

Here are the key points from the summary:

  • Cigarette smoking causes millions of deaths globally each year and existing public health interventions have modest success in helping people quit.

  • Chatbots using natural language processing (NLP) are becoming common, allowing conversational interactions. However, their potential for smoking cessation compared to current approaches had not been examined.

  • The study describes the 3-year process to develop a smoking cessation chatbot called QuitBot that can answer users’ specific clinical questions about quitting smoking.

  • QuitBot’s conversational abilities were created by generating a training bank of 11,000 user questions and clinician answers on smoking cessation topics.

  • The chatbot was then trained using large language models (LLMs) to understand questions and provide appropriate conversational responses.

  • The goal was to create a more accessible and personalized intervention compared to existing standard public health approaches.

  • In summary, the study developed an NLP-powered chatbot for smoking cessation to potentially improve engagement and success rates compared to current interventions, through personalized conversational interactions. The key was generating a large training dataset to teach the chatbot how to discuss quitting smoking.

Here is a summary of the methods used to develop the question answering (QnA) capabilities of QuitBot:

  • Developed a knowledge base of over 11,000 question-answer pairs about quitting smoking from sources like counseling transcripts, call center transcripts, and clinical materials. Questions covered topics like smoking cessation medications, vaping, health effects, motivation, triggers, barriers, cravings, relapse prevention, etc.

  • Tested different NLP approaches (Azure QnA, ParlAI, DialoGPT, GPT-3) on the QnA pairs and found Azure QnA performed best for questions in the predefined library, while a fine-tuned GPT-3 did better on new questions.

  • Recruited adults trying to quit smoking to provide feedback on early prototypes. They preferred structured conversations but said free-form chats needed more contextualization.

  • Further fine-tuned the free-form chat using smoking context parameters in GPT-3 and GPT-3.5 to enhance comprehension of questions.

  • Conducted a randomized controlled trial comparing QuitBot to an existing texting program, finding QuitBot had higher engagement levels than typical clinician interventions.

The goal was to build QuitBot’s main QnA feature to allow open-ended questions about quitting smoking and provide accurate, concise, professional non-repetitive answers. An iterative process involving different data sources and NLP models was used to develop this capability.

  • The QuitBot was developed using transformer-based neural network models like DialoGPT and GPT-3 to address smoking-related Q&A scenarios.

  • Evaluation of different models found that fine-tuned GPT-3 Curie and Azure QnA provided the best answers, but the answers could be repetitive.

  • Additional training data was collected to improve model performance. User testing also provided feedback.

  • The final QuitBot architecture combines Azure QnA for structured questions matching its library with fine-tuned GPT-3.5 Turbo for open-ended questions (a minimal routing sketch follows this section).

  • A randomized controlled trial is currently testing the efficacy of QuitBot versus a text-based smoking cessation program, by examining smoking cessation outcomes at various follow-up periods for over 1,000 enrolled participants.

So in summary, it describes the development and evaluation of different NLP models to power the QuitBot’s Q&A capabilities, culminating in a combination of Azure QnA and GPT-3.5, and an ongoing RCT to test its efficacy.
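
Here is a minimal sketch of that routing idea: questions that closely match the curated library get the clinician-written answer, and everything else falls through to a generative model. TF-IDF similarity stands in for whatever retrieval Azure QnA actually performs, and the LLM call is a placeholder.

```python
"""Minimal sketch of hybrid QnA routing (illustrative only).

The two-entry dictionary stands in for the ~11,000-pair knowledge base, and
`llm_fallback` is a placeholder for a call to a fine-tuned generative model.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in for the curated question-answer library.
qna_library = {
    "What nicotine replacement therapies are available?": "Patches, gum, lozenges, ...",
    "How do I handle cravings after quitting?": "Cravings usually pass within minutes; try ...",
}
questions = list(qna_library)
vectorizer = TfidfVectorizer().fit(questions)
library_matrix = vectorizer.transform(questions)

def llm_fallback(user_question: str) -> str:
    # Placeholder for a call to a fine-tuned generative model (e.g., GPT-3.5 Turbo)
    # with smoking-cessation context included in the prompt.
    return "[generated answer]"

def answer(user_question: str, match_threshold: float = 0.6) -> str:
    similarities = cosine_similarity(vectorizer.transform([user_question]), library_matrix)[0]
    best = similarities.argmax()
    if similarities[best] >= match_threshold:
        return qna_library[questions[best]]      # curated, clinician-written answer
    return llm_fallback(user_question)           # open-ended question -> generative model

print(answer("What nicotine replacement therapies are available?"))
print(answer("My partner still smokes at home, what should I do?"))
```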

  • The researchers developed a multi-step approach combining satellite imagery, census data, and household surveys to map and estimate populations in the Sahel region of Africa. This region is experiencing effects of climate change and humanitarian crises.

  • Deep learning and model-based geostatistics were used to analyze satellite data on building density and estimate the number of people per building. Census and survey data on population were also incorporated.

  • Preliminary findings indicated the population estimates produced with this method were accurate and the approach could be replicated in other regions.

  • The human structure maps created were combined with environmental data to predict risk and migration patterns in Kenya. Analysis found people in Kenya are moving away from areas with higher heat, rainfall, and extreme heat days.

  • Ecological niches were also highly predictive of malnutrition risk in Kenya.

  • The methods showed promise for addressing population estimation challenges and for understanding impacts on sustainability, humanitarian issues, and health. Accurate population data is critical but difficult to obtain.

  • Key learning was the importance of cross-institution collaboration and leveraging diverse expertise and data sources to solve global problems. The approach integrated satellite imagery, census data, surveys, and advanced analytics.

  • Existing population maps are often outdated, coarse, or imprecise for practical use by governments and organizations. They fail to capture urbanization between census years or vulnerable groups like undocumented people or those experiencing homelessness.

  • The authors partnered to produce quarterly, 30-meter gridded population density maps using Planet satellite imagery, deep learning models, and model-based geostatistics. This provided unprecedented insights into human movement over time.

  • They focused initially on the Sahel region of Africa, which has faced displacement, food insecurity, and degraded lands. Censuses in some countries are over 10 years old.

  • A deep learning model estimated building density from Planet imagery and building footprint datasets. Regional fine-tuning improved estimates for the Sahel.

  • Census and survey data were used to estimate people per structure. A population density surface was combined with the building map to distribute population totals across structures over time and administrative levels (a minimal sketch of this distribution step follows this list).

  • The new maps provide higher resolution, temporal consistency and frequency needed to understand population dynamics and deliver services in response to issues like displacement.
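
A minimal sketch of the final step, distributing a census- or survey-derived population total across grid cells in proportion to modeled building density, might look like this (illustrative only; the real pipeline also applies model-based geostatistics):

```python
"""Minimal sketch of dasymetric population distribution (illustrative only).

Assumes a building-density grid already produced by the deep learning model
and a census/survey-based population total for one administrative unit.
"""
import numpy as np

def distribute_population(building_density: np.ndarray, admin_total: float) -> np.ndarray:
    """Return a gridded population surface whose sum equals the administrative total."""
    weights = np.clip(building_density, 0, None)
    if weights.sum() == 0:
        return np.zeros_like(weights, dtype=float)   # no detected structures in this unit
    return admin_total * weights / weights.sum()

# Example: a synthetic 4x4 building-density grid for one district of 10,000 people.
density = np.array([
    [0.0, 0.1, 0.4, 0.0],
    [0.0, 0.3, 0.9, 0.2],
    [0.0, 0.0, 0.5, 0.1],
    [0.0, 0.0, 0.0, 0.0],
])
grid_population = distribute_population(density, admin_total=10_000)
print(grid_population.round(0))
print(grid_population.sum())   # matches the census/survey total
```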

  • The chapter discusses the potential applications of generative pre-trained transformer (GPT) large language models in medicine.

  • GPT models are capable of generating natural, coherent and grammatically accurate text on various topics. This has significantly advanced the field of artificial intelligence.

  • Potential medical applications include use in radiology to generate reports, facilitate self-care management/decision-making by answering patients’ questions, and improve public health by answering questions about health topics.

  • Radiology application: GPT models could generate initial draft radiology reports to reduce physician workload. Accuracy would need to be verified by radiologists.

  • Self-care management: Patients could get personalized medical guidance, advice and support by communicating with a GPT model. This could improve health outcomes and patient experiences.

  • Public health: GPT models could help spread correct health information to improve population health by answering basic questions from the public on various topics.

  • Overall, the chapter discusses the promising possibilities of leveraging advanced AI technologies like GPT models to help address challenges in various areas of medicine. But issues around accuracy, bias and appropriate use would need to be addressed first.

GPT models have potential applications in medicine, such as generating medical text like letters, notes, and manuscripts. They can also respond to patient queries about health conditions by providing information from medical literature.

In radiology specifically, GPT models could help radiologists by identifying areas of scans that require attention and reducing the time spent reviewing normal exams. They could also generate more accessible reports for patients. Integrating multiple data types like images and text could speed up diagnoses.
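
As one illustrative example of the radiology use case, a general-purpose GPT model could be asked to rewrite a report in plain language for a patient. The sketch below uses the OpenAI Python client; the model name and prompt are assumptions rather than anything described in the book, and any output would still need review by a radiologist.

```python
"""Minimal sketch: rewriting a radiology report in patient-friendly language
(illustrative only; model name and prompt are assumptions). Requires an
OpenAI API key in the environment, and output must be clinician-reviewed."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = "CT chest: 4 mm nodule in the right upper lobe, stable since prior exam. No effusion."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Rewrite radiology reports in plain language for patients. Do not add findings."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```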

GPT models could give the general public accessible medical information for self-care management of chronic conditions between clinical visits. They could help patients integrate new research into managing diseases like diabetes. However, the models have limitations, such as potentially generating inaccurate answers.

Overall, GPT models show promise in supplementing healthcare providers by aiding decision making and communication. But they are not meant to replace providers and must be used responsibly given concerns around biases in the training data influencing outputs.

  • AI models like GPT have the potential to improve access to up-to-date, evidence-based medical information for patients. They can convey information in a conversational style without time constraints, using plain language without medical jargon.

  • This could help address gaps in patients’ medical knowledge and literacy that often interfere with following guideline-recommended care. It may help patients make more informed decisions.

  • GPT models integrated with large datasets have potential to provide objective information on treatment options and predicted outcomes to help guide preference-sensitive care decisions for conditions like knee osteoarthritis.

  • By empowering patients, this could improve experiences, outcomes and reduce costs by avoiding unnecessary interventions and complications.

  • AI also has significant potential to improve public health globally, especially in low-resource areas, by enhancing diagnosis, optimizing healthcare resources, and addressing social determinants of health.

  • Examples include using AI to train community health workers and provide reliable medical information to the general public via mobile.

  • Ensuring policymakers and community leaders guide AI development and application is critical for interventions to be effective and represent the public’s priorities and needs.

  • While still evolving, AI models can help fill gaps in access, diagnostics and resource allocation to potentially improve health standards and outcomes worldwide.

  • Overall, AI and GPT models show promise for augmenting clinical care, patient decision-making and empowerment, and population health if integrated thoughtfully with human guidance and oversight. More work is still needed but the opportunity is significant.

  • Effective communication between data scientists and partner organizations is crucial for the success of AI for Good projects. Data scientists need to set realistic expectations about AI capabilities, share technical limitations, and have detailed discussions to properly scope projects.

  • Data collection for AI4G projects is often done by partner organizations for specific applications rather than for machine learning, so datasets may not be optimal for model building. This requires adapting to existing data and considering issues like label subjectivity, data quality, and spatial/temporal splits.

  • Domain expertise from partners is important to incorporate into model development by selecting relevant data, features, modeling choices, and interpretation of results. Partners can also provide context on things like study designs.

  • AI4G models need to consider resource constraints as models may need to operate in remote areas with limited computing power. Being cost-effective is also important.

  • The goal of AI4G modeling is first achieving good performance for the application/domain, not pure novelty or state-of-the-art results like in some academic research. Developing interpretable models can also provide broader insights for partners.

  • It is important to educate partner organizations about the limitations and opportunities of AI to set realistic expectations.

  • Project scoping needs ongoing dialogue to develop practically useful solutions.

  • Datasets may not be immediately useful for models, so metadata, collection processes, and privacy issues must be understood.

  • Subjective labels need to be identified early to avoid inconsistencies.

  • Carefully split data (for example, spatially or temporally) for an unbiased evaluation of generalizability; a minimal sketch follows this section.

  • Incorporate partner domain expertise through techniques like feature selection.

  • Consider constraints like deployment when choosing modeling approaches.

  • Use domain-specific metrics for training and validation, and assess whether standard machine learning metrics are relevant to the application.

  • Keep humans in the loop, for example through active learning, to sustain partner engagement.

  • Long-term engineering is needed for model maintenance after deployment.

  • Partners should define mission impacts; work with them to quantify immediate and long-term project impact.

The key aspects are focusing on practical utility for partners through dialogue, understanding data limitations, incorporating domain expertise, considering real-world constraints, using appropriate evaluation, and ensuring humans remain engaged to maximize real-world impact.
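
On the data-splitting point, here is a minimal sketch of a spatially grouped split using scikit-learn, with a hypothetical `region` column as the grouping variable; a temporal split would hold out later time periods in the same way.

```python
"""Minimal sketch of a spatially grouped evaluation split (illustrative only).

Grouping by a hypothetical `region` column keeps all records from a region
entirely in train or entirely in test, giving a fairer estimate of how the
model generalizes to places it has never seen.
"""
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "feature_1": rng.normal(size=n),
    "feature_2": rng.normal(size=n),
    "region": rng.integers(0, 6, n),          # spatial grouping variable
    "label": rng.binomial(1, 0.5, n),
})

X, y, groups = df[["feature_1", "feature_2"]], df["label"], df["region"]
scores = []
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups):
    model = RandomForestClassifier(random_state=0).fit(X.iloc[train_idx], y.iloc[train_idx])
    scores.append(accuracy_score(y.iloc[test_idx], model.predict(X.iloc[test_idx])))
print(f"region-held-out accuracy: {np.mean(scores):.2f}")
```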

  • AI and satellite imagery provide powerful new tools to help monitor and manage planetary-scale risks like climate change, deforestation, and natural disasters. Daily high-resolution satellite imagery of the entire planet is now available from companies like Planet Labs.

  • When analyzed with AI, satellite data allows near real-time detection of changes anywhere on Earth, in a way that wasn’t possible before. AI acts like a “massive magnet” to find important changes in the huge volumes of satellite imagery.

  • In Brazil, AI tools using Planet satellite data helped the government detect over 100 illegal deforestation sites per year, collect billions in fines, and reduce deforestation rates by 66%.

  • After wildfires in Hawaii, AI analysis of before/after satellite imagery assessed building damage across the entire affected area within minutes, helping first responders target relief more quickly (a minimal sketch of this kind of flagging follows this list).

  • The combination of AI and satellites is creating new capabilities for global monitoring and rapid disaster response, with many opportunities to help address planetary challenges like climate change, biodiversity loss, and humanitarian crises.
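
A minimal sketch of before/after damage flagging might look like the following; the intact-score model is a placeholder and the threshold is arbitrary.

```python
"""Minimal sketch of before/after damage flagging (illustrative only).

Assumes per-building image patches cropped from pre- and post-event satellite
scenes and a model that scores how intact a structure looks in a patch; the
model here is a placeholder.
"""
import numpy as np

def intact_score(patch: np.ndarray) -> float:
    # Placeholder: a real model would be a CNN trained to score structural integrity.
    return float(patch.mean() / 255.0)

def flag_damaged(before: np.ndarray, after: np.ndarray, drop_threshold: float = 0.3) -> bool:
    """Flag a building as likely damaged if its intact score drops sharply after the event."""
    return (intact_score(before) - intact_score(after)) > drop_threshold

# Example over a handful of synthetic building patches (before, after).
buildings = [(np.full((32, 32), 200, np.uint8), np.full((32, 32), 60, np.uint8)),
             (np.full((32, 32), 190, np.uint8), np.full((32, 32), 185, np.uint8))]
damage_map = [flag_damaged(b, a) for b, a in buildings]
print(damage_map)  # e.g. [True, False] -> feeds a map that responders can act on quickly
```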

Various organizations are using artificial intelligence and data from sensors on the ground, in devices, aircraft, and satellites to monitor the state of the planet in real-time. Large streams of raw data are harmonized and analyzed using machine learning to extract spatial indicators related to water, carbon, biodiversity, agriculture, population, and infrastructure at a resolution of a few meters.

When combined, these AI analyses reveal how risks are distributed unevenly in different areas. They can show who and what may be exposed to hazards like floods, food insecurity, and diseases. Satellites then monitor at-risk areas to detect early signs of change and alert decision-makers when urgent action is needed.

This allows scarce resources, such as environmental enforcement officers or aid workers, to be rushed to the places where they can have the most impact. It invites a new relationship with Earth by making the invisible visible and the visible actionable: areas identified as high priority today may become tomorrow's responsibility. Overall, the aim is to capture real-time data on critical global issues using AI and to direct help where it is most effective.

Here is a summary of the provided information:

This section summarizes the contributors: people affiliated with various organizations working on projects that use AI for social good. It lists their names and affiliations and provides a brief overview of their roles and backgrounds.

The editors are Juan Lavista Ferres, Microsoft's chief data scientist and director of the AI for Good Lab, who leads projects applying AI to sustainability, humanitarian aid, accessibility, and health, and William B. Weeks, who leads Microsoft's philanthropic AI for Health efforts within the Lab, conducting research with non-profits and academics to improve global health through AI.

The authors are from Microsoft's AI for Good Lab and include Lucia Ronchi Darre, who has engineering and data science degrees and focuses on data, algorithms, and AI safety. Additional contributors are thanked for supporting the book's production, along with collaborators from the AI for Good Lab team and other Microsoft groups and partners.

In summary, it recognizes individuals from Microsoft and other organizations who are working on applications of AI to address social and humanitarian issues through the AI for Good Lab and collaborations with academic and non-profit partners.

The profiles describe various applied data scientists and researchers working at Microsoft’s AI for Good Lab. They obtained degrees from top universities in fields like computer science, data science, engineering, mathematics, and the social sciences.

At the AI for Good Lab, they work on applying AI and machine learning techniques to address social challenges in areas such as environmental sustainability, humanitarian efforts, healthcare, education and combating misinformation. Some specific projects mentioned include using computer vision for disaster response and infrastructure mapping, tracking misinformation online, and developing models to support biodiversity conservation.

The researchers collaborate with other teams at Microsoft and external partners. They employ diverse methodologies including predictive analytics, deep learning, natural language processing, geospatial analysis and more. The overall goal of their work is to harness data science and AI to create positive social impact and help solve important global issues.

Here is a summary of the collaborators:

  • Ann Aerts is Head of the Novartis Foundation, an organization committed to improving health outcomes in low-income populations through data, digital technology and AI. She has a medical degree and masters in public health and tropical medicine. She is passionate about using data and digital tools to enhance population health.

  • Monica Bond is a wildlife biologist and co-founder of the Wild Nature Institute. She has a PhD in ecology from the University of Zurich. Her research focuses on population ecology, habitat selection and social behavior in various wildlife species. She has published over 50 scientific papers and co-authored children’s books about African wildlife.

  • Jonathan Bricker is a professor of public health at Fred Hutchinson Cancer Research Center and University of Washington. He is a licensed clinical psychologist with a PhD from UW. His research group, HABIT, develops and tests behavioral interventions for conditions like tobacco cessation and weight loss using technology. He currently serves as senior editor of the journal Addiction.

  • Tonio Buonassisi obtained BS and PhD degrees in materials science from MIT. He is a professor of mechanical engineering at MIT where his lab incorporates renewable energy and sustainability into materials science research. He is passionate about using scientific innovations to address climate change.

Here are summaries of the key points about each individual:

  • Daniel Ho is a professor at Stanford focusing on regulation, evaluation and governance of AI. He has experience advising the White House and government on AI policy.

  • Joseph Kiesecker is a lead scientist at The Nature Conservancy and published author on balancing renewable energy development and conservation. He has expertise in energy siting and mitigation strategies.

  • Kevin Greene researches the role of information technologies in politics. He has a PhD from the University of Pittsburgh and experience as a postdoc at Dartmouth College.

  • Kim Goetz studies marine mammal ecology and conservation at NOAA, with a focus on cetaceans in Alaska. She has a PhD from University of California, Santa Cruz.

  • Elliot Fishman is a radiology professor at Johns Hopkins focusing on medical imaging and AI, particularly for early cancer detection. He has industry experience developing 3D imaging software.

  • Derek Lee is a quantitative wildlife biologist and CEO of the Wild Nature Institute. He obtained a PhD from Dartmouth College studying African wildlife populations.

  • Will Marshall is Chairman and CEO of Planet, a space technology company. He has a physics PhD from University of Oxford and was a NASA scientist and systems engineer.

  • Al-Rahim Habib is an ENT surgeon and PhD candidate exploring AI applications for ear diseases among Indigenous Australians. He has medical degrees from multiple universities.

  • James Campbell works for Catholic Relief Services on monitoring and evaluation projects, with a background in engineering, public health and biostatistics.

  • Erwin Knippenberg is an economist at the World Bank focused on food security, climate impacts, and using data/ML for policy. He has a PhD from Cornell University.

  • Will Pomerantz is recognized as a Young Global Leader by the World Economic Forum. He served as Co-Principal Investigator on the PhoneSat mission and was the technical lead on several space debris remediation projects.

  • Maria Ana Martinez-Castellanos is a professor of pediatric retinology and directs a private pediatric retina clinic treating surgical and medical retinal diseases in children. Her research focuses on retinopathy of prematurity.

  • Mir Matin is an experienced researcher in water resources, hydroclimatic disaster risk reduction, and climate services. He has worked for organizations like ICIMOD, IWMI, and others.

  • Christopher J.L. Murray is the director of the Institute for Health Metrics and Evaluation and chairs the Global Burden of Disease studies which annually assess comparative health loss worldwide.

  • Kris Sankaran applies machine learning to ecological and biological problems through his work with the University of Wisconsin and Wisconsin Institute for Discovery.

  • Margarita Santiago-Torres leads lifestyle intervention research at Fred Hutch Cancer Center focusing on underserved populations.

  • Michael Scholtens facilitates data-driven programs at The Carter Center on issues like digital threats to democracy.

  • Jacob Shapiro researches conflict, development and security through his work with the Empirical Studies of Conflict Project at Princeton University.

  • Anshu Sharma co-founded SEEDS and STS Global which do disaster management and humanitarian work globally. He advises various organizations on these issues.

  • The summary provides a brief overview of the backgrounds and work of these individuals based on the information provided.

  • Another contributor's current role involves spearheading GenAI products and enhancing chatbot capabilities using advanced ML, deep learning, and generative AI algorithms.

  • In her previous role at Microsoft, she helped grow the impactful AI for Good initiative, positioning Microsoft as a leader in the AI space.

  • Kevin Xu is a Senior Software Engineer at GitHub who focuses on projects related to building trust through transparency. He contributes skills in data analysis/visualization, full stack engineering, and legal research. He holds a Juris Doctor from Berkeley Law and a Bachelor of Science in Biology from UC San Diego. Previously, he served as a Clinical Supervising Attorney at Berkeley Law.

  • Andrew Zolli received degrees in Cognitive and Computer Science and foresight studies. He is Chief Impact Officer at Planet with over 20 years of experience at the intersection of technology, social, and ecological change. He previously led PopTech, a network exploring impacts of technology. He has advised many organizations on sustainability and social impact. He also serves on the Board of Directors of Human Rights Watch.

  • Deep learning methods were used for tasks like image classification, segmentation, object detection on a variety of datasets related to agriculture, development, environment, healthcare etc.

  • Findings were reported for studies on topics such as solar energy production, wildlife monitoring, natural disaster damage assessment, food security analysis, disease detection, broadband connectivity.

  • Lessons learned addressed improving data collection/labeling, incorporating domain expertise, validating models on new data, limitations of techniques.

  • Large language models were discussed for applications in healthcare, coding, and as language assistants. Their impact, limitations, and training process were summarized.

  • Humanitarian applications included building damage assessment after earthquakes, identifying dwelling types, measuring food insecurity, analyzing online misinformation, and performing natural language processing for aid groups.

  • Challenges included a lack of ground-truth data, subjective labeling, and the difficulty of maintaining models over time, underscoring the importance of interdisciplinary collaboration.

#book-summary