Ethical AI Policy
Last Updated: 16 August 2025
Responsible Development Framework
This document outlines our approach to building responsible AI for hiring. As we develop our conversation-based interview platform, we're committed to transparency, fairness, and compliance while maintaining the agility needed for rapid innovation.
1. What We're Building
Core Purpose: AI-assisted interview system that helps evaluate candidates through structured conversations rather than algorithmic CV screening.
Current Focus:
Real-time interview assistance for hiring managers
Competency-based assessment tied to job requirements
Transparent scoring with clear rationale
Human-in-the-loop decision making
What We're NOT Doing:
Automated candidate rejection without human review
Black-box algorithms that can't explain decisions
Analysis of protected characteristics or demographic data
Replacing human judgment in hiring decisions
2. Technical Transparency
What We Can Explain:
Our interview questions are standardized and job-relevant
Scoring is based on demonstrated competencies during conversation
Every assessment includes specific examples from the interview
Hiring managers can see exactly why we flagged strengths or concerns
Current Documentation:
Interview protocols for each role type
Scoring methodology based on response quality
Clear audit trail from candidate answers to recommendations (see the illustrative sketch after this list)
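To illustrate what such an audit trail can look like in practice, the sketch below models a single assessment record that ties a candidate's answer to the competency scores and rationale a hiring manager sees. This is a minimal sketch: the class names, fields, and the 1-5 scale are hypothetical assumptions for this example and do not describe our production system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of one audit-trail entry: every competency score is
# tied back to the question asked and the candidate's own words.
@dataclass
class CompetencyAssessment:
    competency: str          # e.g. "stakeholder communication"
    score: int               # illustrative 1-5 scale
    evidence_quote: str      # excerpt from the candidate's own answer
    rationale: str           # plain-language reason for the score

@dataclass
class AuditRecord:
    interview_id: str
    question_id: str
    question_text: str
    candidate_answer: str
    assessments: list[CompetencyAssessment] = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def explain(self) -> str:
        """Render a human-readable trail from answer to recommendation."""
        lines = [f"Question: {self.question_text}"]
        for a in self.assessments:
            lines.append(
                f'- {a.competency}: {a.score}/5 because "{a.evidence_quote}" ({a.rationale})'
            )
        return "\n".join(lines)
```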
Ongoing Development:
We're continuously improving our ability to explain AI recommendations
Building more detailed technical documentation as we scale
Working toward full transparency in our decision-making process
3. Bias Prevention Approach
Design Philosophy: Conversation-based evaluation is designed to reduce bias relative to CV screening: candidates demonstrate their qualifications directly in structured conversations rather than being filtered by algorithms analyzing resumes.
Current Practices:
Standardized questions reduce subjective interviewer bias
Focus exclusively on job-relevant competencies
Every candidate gets the same opportunity to demonstrate skills
Human oversight required for all hiring recommendations
Monitoring & Improvement:
We track hiring outcomes across different candidate populations (see the illustrative check after this list)
Regular review of our scoring methodology for potential bias
Continuous refinement based on feedback and results
Commitment to third-party bias testing as we mature
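As one illustration of the kind of outcome monitoring described above, the sketch below computes recommendation rates per candidate group and flags any group whose rate falls below the widely cited four-fifths (80%) rule of thumb for adverse impact. The group labels, function names, and threshold here are assumptions for this example, not a description of our production monitoring pipeline.

```python
from collections import defaultdict

# Hypothetical outcome record: (candidate_group, received_positive_recommendation)
Outcome = tuple[str, bool]

def selection_rates(outcomes: list[Outcome]) -> dict[str, float]:
    """Share of candidates in each group who received a positive recommendation."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, recommended in outcomes:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(outcomes: list[Outcome], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if best and r / best < threshold}

# Example usage with made-up data:
sample = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
print(selection_rates(sample))       # {'group_a': 0.5, 'group_b': 1.0}
print(adverse_impact_flags(sample))  # {'group_a': 0.5}
```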
4. Data Handling
Privacy Principles:
We only collect data necessary for job evaluation
Interview recordings are encrypted and securely stored
Client data is never used to train our general AI models
We comply with applicable data protection regulations
Data Rights:
Candidates can access their interview recordings and assessments
We provide clear explanations of how we evaluated their responses
Data deletion available upon request
Transparent communication about our data practices
Current Limitations: As an early-stage company, we are still maturing our data governance. We're building robust systems while maintaining operational flexibility.
5. AI Decision Process
How It Works:
AI analyzes candidate responses during structured interviews
System generates competency scores with specific evidence
Human interviewer reviews AI recommendations
Final hiring decisions always require human approval (see the illustrative sketch after this list)
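The sketch below is a minimal illustration of this human-in-the-loop pattern, under assumed names and a simplified data shape: the AI produces a recommendation with scores, evidence, and a confidence level, and a decision record cannot be created without a named human reviewer. It is not our production code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """AI output for one candidate: scores and evidence, never a final decision."""
    candidate_id: str
    competency_scores: dict[str, int]    # e.g. {"problem solving": 4}
    evidence: dict[str, str]             # interview excerpts supporting each score
    confidence: float                    # 0.0-1.0 certainty of the analysis

@dataclass
class HiringDecision:
    recommendation: Recommendation
    approved_by: str                     # named human reviewer, always required
    outcome: str                         # e.g. "advance", "reject", "escalate"
    override_note: Optional[str] = None  # filled in when the reviewer disagrees with the AI

def finalize(rec: Recommendation, reviewer: str, outcome: str,
             override_note: Optional[str] = None) -> HiringDecision:
    """A decision object can only be created with a human reviewer attached."""
    if not reviewer:
        raise ValueError("A human reviewer is required for every hiring decision.")
    return HiringDecision(rec, approved_by=reviewer, outcome=outcome,
                          override_note=override_note)

# Example usage with made-up values:
rec = Recommendation("cand-123", {"communication": 4},
                     {"communication": "described aligning two teams on project scope"},
                     confidence=0.82)
decision = finalize(rec, reviewer="hiring.manager@example.com", outcome="advance")
```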
What We Provide:
Clear scoring breakdown for each competency area
Specific examples from the interview supporting our assessment
Recommendations (not decisions) for hiring managers
Confidence levels indicating the certainty of our analysis
Human Oversight:
Hiring managers can override any AI recommendation
All decisions require human review and approval
Appeals process for candidates who want to challenge assessments
Escalation procedures for edge cases or concerns
6. Third-Party Technologies
Current Stack: We use industry-standard technologies for:
Speech recognition and transcription
Natural language processing
Data storage and security
System hosting and compliance
Vendor Management:
We evaluate providers for security and compliance capabilities
Contractual requirements for data protection and privacy
Regular assessment of third-party performance
Migration plans if vendor relationships change
7. Compliance Commitment
Regulatory Alignment:
We monitor emerging regulations around AI in hiring
Design decisions prioritize compliance with employment law
Proactive approach to meeting transparency requirements
Regular consultation with legal experts as we grow
Industry Standards:
Following best practices for AI ethics and fairness
Participating in industry discussions around responsible AI
Learning from other companies' compliance approaches
Building relationships with compliance and legal experts
8. Accountability & Improvement
What We Track:
System performance and accuracy metrics
Candidate feedback and satisfaction scores
Hiring outcomes and quality measures
Potential bias indicators across different populations
How We Improve:
Regular review of our processes and outcomes
Incorporation of feedback from candidates and clients
Continuous learning about AI ethics and compliance
Iterative improvement of our technology and processes
Escalation Process:
Clear procedures for reporting concerns about our system
Investigation and response protocols for potential issues
Communication plans for stakeholders affected by problems
Commitment to transparency about limitations and failures
Current Status & Roadmap
Where We Are Now:
Early-stage system with basic transparency and oversight
Focused on core functionality and user experience
Building compliance capabilities as we scale
Learning from each implementation and customer interaction
Near-Term Priorities:
Enhanced documentation and explainability features
Improved bias detection and monitoring capabilities
Stronger data governance and privacy protections
Expanded compliance frameworks as we grow
Long-Term Vision:
Industry-leading transparency and explainability
Comprehensive bias testing and mitigation
Full regulatory compliance across all jurisdictions
Gold standard for responsible AI in hiring
Contact & Questions
For questions about our AI practices, compliance approach, or this framework: info@shortlistd.io
Disclaimer
This framework represents our current approach and intentions regarding responsible AI development. As an early-stage company, our practices are evolving rapidly. We commit to transparency about our capabilities and limitations while continuously improving our systems and processes.
This document will be updated regularly to reflect our progress and changing circumstances. We welcome feedback and questions about our approach to building responsible AI for hiring.