AI Policy

Responsible AI Objectives

Our organization is committed to developing and deploying AI and ML technologies responsibly, ensuring they serve humanity's best interests while upholding ethical principles. Our core objectives are:

  • Fairness and Non-discrimination: To develop AI systems that treat all individuals equitably, avoid perpetuating or amplifying harmful biases, and ensure fair outcomes across diverse populations.
  • Transparency and Explainability: To design AI models that are understandable, allowing for insights into their decision-making processes, and to provide clear communication about their capabilities, limitations, and intended uses.
  • Safety and Robustness: To build AI systems that are reliable, secure, and resilient to malicious attacks, and to proactively identify and mitigate potential risks and unintended consequences.
  • Privacy and Data Governance: To handle personal and sensitive data with the utmost care, adhering to robust privacy protection principles and implementing strong data governance frameworks.
  • Accountability and Human Oversight: To establish clear lines of responsibility for AI system development and deployment, and to ensure that human oversight and intervention mechanisms are in place where appropriate.
  • Beneficial Impact: To leverage AI for positive societal impact, contributing to sustainable development, improving quality of life, and addressing global challenges.

Policies and Processes

To achieve these objectives, we have implemented a comprehensive set of policies and processes that guide our AI/ML development lifecycle:

  • Team Reviews: All new AI projects are discussed internally at the team level to confirm alignment with our responsible AI principles and to plan mitigation strategies for identified risks. Customers are often involved in these conversations for transparency and shared alignment.
  • Bias Detection and Mitigation Frameworks: We integrate automated and manual tools for detecting and reporting bias in datasets and model outputs. Bias mitigation techniques, including data augmentation, re-weighting, and algorithmic debiasing methods, are discussed and deployed as needed; a minimal re-weighting sketch follows this list.
  • Transparency and Documentation Standards: All AI projects are required to maintain detailed documentation outlining data sources, model architectures, training methodologies, performance metrics, and known limitations. For user-facing AI systems, we provide clear explanations of how the system works and how users can interact with it.
  • Data Privacy Impact Assessments (DPIAs): Before collecting or processing any new data for AI development, we work with customers to identify and mitigate privacy risks. In line with each customer's requirements and applicable standards, we adhere to core principles such as data minimization, purpose limitation, and secure data storage.
  • Transparency in Map-Based Predictions: For geospatial AI models that generate predictions or classifications displayed on maps (e.g., land use classification, flood risk assessment), we strive for high levels of transparency. This includes providing clear legends, confidence scores for predictions, and accessible information about the underlying data sources and model methodologies.
  • Ethical Use of Satellite Imagery and Remote Sensing: We adhere to strict ethical guidelines regarding the use of satellite imagery and other remote sensing data. We ensure that our applications do not infringe on individual privacy or national security, and we actively avoid uses that could contribute to surveillance or discrimination. Our internal review process for Geospatial AI projects includes a specific focus on potential dual-use concerns.
  • Community Engagement for Local Impact: For geospatial AI projects that may have significant local impact, we engage with affected communities to understand their needs and concerns, ensuring our solutions are developed with their input and benefit them directly. This includes early consultation and feedback loops in the project lifecycle.
  • Human-in-the-Loop Design: We prioritize human-in-the-loop design patterns, allowing for human review and correction of AI-generated outputs so that human judgment remains central to sensitive processes; a minimal confidence-routing sketch also follows this list.
  • Training and Awareness Programs: All employees involved in AI development, deployment, or management receive mandatory training on responsible AI principles, ethical considerations, and relevant policies. Regular workshops and seminars are conducted to foster a culture of responsible innovation.
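
To illustrate the re-weighting technique mentioned under Bias Detection and Mitigation Frameworks, here is a minimal sketch of inverse-frequency sample weighting, which balances each group's contribution during training. The function name and group labels are illustrative placeholders, not our production tooling or schema.

```python
# Minimal sketch of one re-weighting approach: assign inverse-frequency
# sample weights so each group contributes equally during training.
from collections import Counter

def group_balance_weights(groups):
    """Return a per-sample weight that up-weights under-represented groups."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count[g]); weights average to 1.0 over the dataset
    return [n / (k * counts[g]) for g in groups]

groups = ["a", "a", "a", "b"]
print(group_balance_weights(groups))  # ~[0.67, 0.67, 0.67, 2.0]
```

These weights can be passed to most training APIs that accept per-sample weights; in practice the choice of grouping attribute and fairness criterion is made during the team review described above.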
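Similarly, the human-in-the-loop pattern above often takes the form of confidence-based routing, where low-confidence outputs are queued for human review rather than applied automatically. This sketch assumes an illustrative Prediction record and a 0.85 threshold; both are placeholders tuned per application.

```python
# Minimal sketch of human-in-the-loop routing: outputs below a confidence
# threshold go to a human review queue instead of being auto-applied.
from dataclasses import dataclass

@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float

def route(predictions, threshold=0.85):
    auto_accepted, needs_review = [], []
    for p in predictions:
        (auto_accepted if p.confidence >= threshold else needs_review).append(p)
    return auto_accepted, needs_review

preds = [Prediction("parcel-1", "flood_risk_high", 0.97),
         Prediction("parcel-2", "flood_risk_low", 0.62)]
accepted, review_queue = route(preds)
print(len(accepted), len(review_queue))  # 1 1
```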

Internal Review Process

Our internal review process is iterative and continuous, embedding responsible AI considerations throughout the entire project lifecycle:

  1. Project Initiation: Every new AI project begins with a preliminary ethical assessment, outlining potential risks and proposed mitigation strategies.
  2. Design and Development Phase: During this phase, development teams present progress and address concerns with leadership, other team members, and customer stakeholders. Regular checkpoints ensure integration of responsible AI principles.
  3. Testing and Validation: Comprehensive testing includes technical performance, fairness, and robustness evaluations. Results are documented and reviewed by independent auditors.
  4. Deployment and Monitoring: Post-deployment, continuous monitoring tracks AI system performance, identifies drift, detects newly emerging biases, and flags unforeseen issues (a minimal drift-check sketch follows this list). User feedback mechanisms are often integrated.
  5. Post-Mortem and Iteration: After a project's lifecycle or significant updates, a post-mortem review is conducted to learn from successes and failures, feeding insights back into our policies and processes for continuous improvement.
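
As one example of the drift monitoring described in step 4, the Population Stability Index (PSI) is a common way to compare the serving-time distribution of a feature or model score against its training baseline. The bin count and the conventional 0.2 alert threshold below are illustrative assumptions, not fixed policy values.

```python
# Minimal sketch of a PSI drift check: compare a current distribution against
# a training baseline over shared quantile bins; larger values = more drift.
import numpy as np

def psi(baseline, current, bins=10):
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                  # catch out-of-range values
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 5000)
drifted = rng.normal(0.5, 1, 5000)                          # mean-shifted distribution
print(psi(baseline, baseline), psi(baseline, drifted))      # ~0.0 vs. roughly 0.25
```

A PSI above roughly 0.2 is conventionally treated as significant drift and would trigger the review and retraining paths described above.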