Geospatial intelligence to assess climate, environmental, and supply-chain risks
Challenge to overcome
Financial institutions increasingly rely on geospatial data to assess climate, environmental, and supply-chain risks at the asset level. However, most portfolios and loan books lack reliable, scalable, and up-to-date mapping between companies and the precise locations and characteristics of their physical assets.
Physical climate risks vary sharply by geography and asset features. Identifying assets, classifying their activities, locating them accurately, capturing physical attributes (e.g. footprint, height, value), and linking them to corporate and supply-chain structures remains operationally complex and costly at global scale.
As a result, financial institutions face blind spots in exposure analysis and a constrained ability to translate climate and sustainability risks into actionable portfolio insights.
AI application / use case
To calculate the physical risk to which a company may be exposed (for example, flood risk), it is necessary to determine the locations and values of its relevant physical assets (offices, factories, etc.). Key inputs into the value of a physical asset include, among others, its size and associated revenues.
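As a minimal illustration of how location, value, and hazard combine (a toy sketch, not the production risk model; the function name and parameters are assumptions), expected annual loss for a single asset can be framed as value times damage ratio times event probability:

```python
# Toy sketch (not the production model): expected annual flood loss for one
# asset, framed as asset value x damage ratio x annual event probability.

def expected_flood_loss(asset_value, damage_ratio, annual_probability):
    """Expected annual loss for a single asset under one flood scenario."""
    return asset_value * damage_ratio * annual_probability

# Hypothetical asset: a factory worth 10M that would suffer 30% damage
# in a 1-in-100-year flood.
loss = expected_flood_loss(10_000_000, 0.30, 0.01)
```

In practice such inputs are aggregated across many hazards and scenarios, which is why reliable asset locations and attributes matter so much upstream.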
MSCI integrates Artificial Intelligence at multiple steps of the data operations workflow to help streamline the complex processes required to gather, validate, and continuously monitor geospatial and supply-chain data.
The solution utilizes a spectrum of AI capabilities, ranging from simple prompting in AI-assisted systems to more advanced, autonomous implementations. These tools support the identification of asset locations, extraction of coordinates and attributes required for physical-risk modeling, and derivation of structured relationships between companies.
AI supports two core operational steps:
- Deriving structured attributes and relationships from large volumes of heterogeneous, unstructured sources, and
- Continuously monitoring resulting datasets to identify anomalies and potential data-quality issues.
1. Geospatial Processing & Environmental Monitoring
For geospatial processing, MSCI leverages AI-powered satellite image processing and proprietary data modeling to derive physical-asset attributes by processing geospatial datasets (satellite imagery, LiDAR, and surface models) at planetary scale.
Computer-vision image-segmentation techniques are used to detect building footprints on maps and convert them into coordinate vectors for model calculations.
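The mask-to-coordinates step can be sketched as follows (an illustrative simplification under assumed inputs: production pipelines polygonize segmentation output, whereas this only maps detected building pixels through an affine-style transform):

```python
# Hedged illustration: map a segmentation mask's "building" pixels into
# geographic coordinates with a simple row/column-to-lon/lat transform.

def pixel_to_lonlat(row, col, origin_lon, origin_lat, pixel_size):
    """Map a raster cell to the lon/lat of its top-left corner."""
    return (origin_lon + col * pixel_size, origin_lat - row * pixel_size)

def footprint_coordinates(mask, origin_lon, origin_lat, pixel_size):
    """Return lon/lat pairs for every pixel flagged as building (value 1)."""
    return [
        pixel_to_lonlat(r, c, origin_lon, origin_lat, pixel_size)
        for r, row in enumerate(mask)
        for c, value in enumerate(row)
        if value == 1
    ]

# Hypothetical 3x3 segmentation output with a 2x2 building footprint.
mask = [
    [0, 1, 1],
    [0, 1, 1],
    [0, 0, 0],
]
coords = footprint_coordinates(mask, origin_lon=13.40,
                               origin_lat=52.52, pixel_size=0.0001)
```

The resulting coordinate vectors are what downstream physical-risk calculations consume.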
LiDAR-based approaches calculate building height using digital surface and terrain models, enabling consistent global coverage.
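The core of the LiDAR approach is the difference between the digital surface model (tops of structures) and the digital terrain model (bare earth). A minimal sketch, with hypothetical elevation grids:

```python
# Minimal sketch of LiDAR-style height derivation: building height as the
# digital surface model (DSM) minus the digital terrain model (DTM),
# evaluated over the cells of the building footprint.

def building_height(dsm, dtm, footprint):
    """Maximum surface-minus-terrain difference across footprint cells."""
    return max(dsm[r][c] - dtm[r][c] for r, c in footprint)

dsm = [[35.0, 36.5], [34.8, 12.0]]    # elevations incl. structures (m)
dtm = [[10.0, 10.2], [10.1, 11.9]]    # bare-earth elevations (m)
footprint = [(0, 0), (0, 1), (1, 0)]  # cells covered by the building
height = building_height(dsm, dtm, footprint)
```

Because DSM and DTM products exist globally, this differencing yields consistent height estimates even where local building registries do not.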
Beyond asset mapping, geospatial AI is also applied to monitor land-cover and environmental change over time. Computer-vision models classify land-cover types (e.g. vegetation, water, built-up areas) and support the tracking of biodiversity-related indicators, wildfire impacts, and reforestation ("green cover") dynamics. These capabilities provide environmental context that can be linked to asset-level exposure and supply-chain risk.
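As a toy stand-in for land-cover classification (the production system uses computer-vision models; this uses the classic NDVI spectral index, with thresholds chosen purely for illustration):

```python
# Toy land-cover illustration: the Normalized Difference Vegetation Index
# (NDVI) separates vegetated from non-vegetated cover in a two-band pixel.
# Threshold and band values below are illustrative assumptions.

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red) for one pixel's reflectances."""
    return (nir - red) / (nir + red)

def classify_pixel(nir, red, threshold=0.3):
    """Crude vegetation / non-vegetation label from NDVI."""
    return "vegetation" if ndvi(nir, red) > threshold else "other"
```

Tracking such per-pixel labels over time is what enables the reforestation and wildfire-impact monitoring described above.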
Figure 1: AI-Powered Geospatial Data Processing for Physical Risk Modeling
2. Supply-Chain Relationship Extraction
AI techniques, including extensions of large-language-model approaches, are applied to extract conditional supply-chain relationships from unstructured company disclosures. These models identify both direct relationships and reverse (counterparty-disclosed) relationships, enabling a more complete representation of supply-chain dependencies.
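The direct/reverse distinction can be sketched with a naive pattern-based stand-in (the actual system uses large-language-model techniques on unstructured disclosures; company names and phrases here are invented):

```python
# Illustrative stand-in for LLM-based extraction: a naive pattern pull of
# supplier/customer statements from disclosure text, capturing both direct
# ("we supply X") and reverse ("sourced from Y") relationships.
import re

def extract_relationships(company, text):
    """Return (supplier, customer) pairs disclosed in `company`'s text."""
    pairs = []
    for customer in re.findall(r"supplies ([A-Z][A-Za-z]+)", text):
        pairs.append((company, customer))   # direct disclosure
    for supplier in re.findall(r"sourced from ([A-Z][A-Za-z]+)", text):
        pairs.append((supplier, company))   # reverse (counterparty) link
    return pairs

rels = extract_relationships(
    "AcmeChips",
    "AcmeChips supplies Voltcar. Key wafers are sourced from WaferCo.",
)
```

Reverse links matter because a company's suppliers are often disclosed only by the counterparty, not by the company itself.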
By combining supply-chain relationship data with geospatial asset information, MSCI supports analysis of how disruptions, environmental change, or physical risks may propagate through corporate networks.
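Propagation through a corporate network can be sketched as a graph traversal (an assumed structure for illustration, not MSCI's implementation; the example network is hypothetical):

```python
# Sketch: propagate a site-level disruption downstream through a
# supplier -> customer graph via breadth-first search.
from collections import deque

def affected_companies(supply_graph, disrupted):
    """All companies reachable from the disrupted supplier."""
    seen, queue = {disrupted}, deque([disrupted])
    while queue:
        company = queue.popleft()
        for customer in supply_graph.get(company, []):
            if customer not in seen:
                seen.add(customer)
                queue.append(customer)
    return seen

# Hypothetical network: each key is a supplier, values are its customers.
graph = {"WaferCo": ["AcmeChips"], "AcmeChips": ["Voltcar", "Gridtron"]}
impact = affected_companies(graph, "WaferCo")
```

Combining this with geolocated assets is what lets, say, a flood at one supplier site be traced to exposure in a portfolio company two tiers downstream.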
Figure 2: AI-powered geospatial data processing for supply chain risk
3. Data Quality, Monitoring & Validation
AI-powered anomaly detection continuously monitors datasets to safeguard integrity and quality. Monitoring identifies irregularities at the individual data-point level, within peer groups, and across related data fields. Entity-resolution and duplicate-detection capabilities improve over time through analyst feedback.
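Peer-group screening can be sketched with a robust outlier rule (a simplified, assumed example; the modified z-score based on median and MAD is used here because it tolerates small groups, and the height values are invented):

```python
# Hedged sketch of peer-group anomaly screening: flag values whose
# modified z-score (median / median-absolute-deviation based) is extreme.
from statistics import median

def peer_group_outliers(values, cut=3.5):
    """Indices of values whose modified z-score exceeds `cut`."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > cut]

# Hypothetical building heights (m) within one peer group; 300 m is suspect.
heights = [12.0, 15.0, 11.5, 14.0, 13.0, 300.0]
flags = peer_group_outliers(heights)
```

Flagged points like these are the cases routed onward for validation rather than silently entering the dataset.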
Key Beneficiaries
☒ Relationship Managers
☒ Portfolio Managers
☒ Research teams, macroeconomists
☒ Control functions
☒ Support functions (HR, CFO, …)
☒ Other: Stewardship / Engagement Teams, Underwriters
Benefits of the AI Use Case for the Financial Services Sector
Integrating AI into geospatial and supply-chain data management helps enable scalable processing of large-scale datasets that would otherwise be difficult to manage manually. Automating the extraction of addresses, coordinates, asset attributes, and company relationships helps support physical-risk modeling through enhanced location data coverage, while also improving visibility into supply-chain dependencies.
This supports more accurate assessment of climate-related physical risks, environmental impacts, and downstream supply-chain effects, which may contribute to more-informed investment and risk-management decisions. Continuous anomaly detection further reduces operational risk and supports confidence in downstream models and client deliverables.
Supporting Technology
The initiative leverages a combination of AI capabilities to address different levels of complexity in data collection and validation:
- AI-Assisted Systems: simple prompting techniques to support human analysts
- AI Agents: autonomous systems for extraction, reasoning, and validation
- Computer Vision & Remote Sensing: image segmentation for building-footprint detection, satellite-imagery processing, LiDAR-based height derivation, and scalable geospatial computation
- Anomaly Detection & NLP Quality Controls: detection of outliers, peer-group deviations, transversal inconsistencies, duplicate entities, and content-quality issues in text
- Non-AI Technologies: Python-based programs and web automations to increase determinism and manage disclosure variability
Agentic AI Orchestration
Data-operations processes employ a multi-layered AI architecture designed to balance scale, cost, and accuracy. Autonomous Extractor Agents handle large-scale data collection from relevant sources, while Validator Agents apply more advanced reasoning to review and confirm extracted outputs. For complex websites (e.g. Java-based or interactive maps), specialized computer-use agents replicate human navigation to access otherwise inaccessible information.
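The extractor/validator split can be sketched as follows (an assumed shape for illustration only; the function names, record format, and validation rule are invented, and real agents are LLM-driven rather than rule-based):

```python
# Illustrative shape of the multi-layer pattern: a cheap extractor proposes
# candidate records at scale, a stricter validator confirms each one before
# it enters the dataset.

def extractor_agent(raw_pages):
    """Cheap first pass: pull candidate (name, lat, lon) records."""
    return [page["asset"] for page in raw_pages if "asset" in page]

def validator_agent(record):
    """Stricter second pass: keep only records with plausible coordinates."""
    name, lat, lon = record
    return bool(name) and -90 <= lat <= 90 and -180 <= lon <= 180

pages = [
    {"asset": ("Plant A", 48.1, 11.6)},
    {"asset": ("Plant B", 123.0, 11.6)},   # implausible latitude: rejected
    {"html": "<p>no asset here</p>"},      # nothing extractable
]
validated = [r for r in extractor_agent(pages) if validator_agent(r)]
```

Splitting cheap extraction from more expensive validation is what lets the architecture balance scale and cost against accuracy.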
Human-in-the-Loop Validation
While AI agents perform the majority of extraction and monitoring tasks, reliability is supported through a hybrid workflow. Quality-assurance processes flag ambiguous or complex cases for review by geospatial data experts, who resolve edge cases and provide an additional layer of validation. This human-in-the-loop approach is designed to enhance the quality and trustworthiness of the resulting datasets.
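The routing of cases between automation and experts can be sketched as a confidence-threshold split (an assumption for illustration: that each AI output carries a confidence score; names and threshold are invented):

```python
# Sketch of the human-in-the-loop split: high-confidence records pass
# automatically, the rest are queued for review by geospatial data experts.

def route(records, threshold=0.9):
    """Partition (record, confidence) pairs into auto vs. review queues."""
    auto, review = [], []
    for record, confidence in records:
        (auto if confidence >= threshold else review).append(record)
    return auto, review

auto, review = route([("site-1", 0.97), ("site-2", 0.62), ("site-3", 0.91)])
```

Tuning the threshold trades expert workload against the risk of low-quality records entering the dataset.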