shortlistd.io AI Policy | Responsible AI in Recruiting

Last Updated: December 2, 2025

Purpose and Commitment

Shortlistd is committed to building and deploying artificial intelligence responsibly, ethically, and in compliance with emerging AI regulations worldwide. This policy establishes our framework for developing AI-powered recruitment tools that are fair, transparent, and accountable, and that respect human dignity and rights.

This policy applies to all AI systems, models, and algorithms used in our Services, including:

  • Candidate discovery and sourcing

  • Contact information enrichment

  • Resume analysis and skill extraction

  • Interview assistance and evaluation

  • Candidate-job matching and recommendations

  • Automated communication and outreach

Our Core Principles:

  1. Human Dignity: AI serves humans; humans always make final decisions

  2. Fairness: Actively prevent bias and discrimination

  3. Transparency: Clear explanations of how AI works and why it makes recommendations

  4. Accountability: Clear responsibility for AI outcomes

  5. Privacy: Minimize data collection and respect individual rights

  6. Safety: Prevent harm and misuse

1. Human-in-the-Loop: AI as Assistant, Not Decision-Maker

1.1 Mandatory Human Oversight

Absolute Rule: Our AI systems NEVER make final hiring, rejection, or employment decisions without human review and approval.

What This Means in Practice:

  • AI generates recommendations, suggestions, and insights

  • Humans (recruiters, hiring managers) review AI output

  • Humans make all final decisions about candidates

  • Humans can override, ignore, or modify any AI recommendation

  • Humans remain accountable for all hiring outcomes

Prohibited: Automated rejection, automated ranking that excludes candidates from consideration, or any system where AI makes binding decisions without human intervention.

1.2 Levels of AI Involvement

Level 1 - AI Assistance Only (Our Current Approach):

  • AI discovers and surfaces relevant candidates

  • AI provides information summaries and skill extraction

  • AI suggests questions or evaluation criteria

  • Human makes all selection and contact decisions

  • This is our standard operating model

Level 2 - AI Scoring with Human Review:

  • AI generates competency scores or match percentages

  • Scores are advisory and accompanied by explanations

  • Humans review scores and underlying reasoning

  • Humans make final decisions

  • Candidates can request manual re-evaluation

Prohibited - Level 3 - Fully Automated Decisions (an illustrative enforcement sketch follows this list):

  • AI automatically rejects candidates → NOT ALLOWED

  • AI automatically advances candidates without review → NOT ALLOWED

  • AI makes binding decisions → NOT ALLOWED
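
As an illustration of how this rule can be enforced in software, the sketch below shows a hypothetical guard that refuses to finalize any candidate decision without an identified human reviewer and explicit approval. The class and function names are illustrative assumptions, not our actual implementation.

  from dataclasses import dataclass
  from enum import Enum

  class InvolvementLevel(Enum):
      """Permitted levels of AI involvement; a fully automated level is deliberately absent."""
      ASSIST_ONLY = 1          # Level 1: AI surfaces candidates and information
      SCORE_WITH_REVIEW = 2    # Level 2: advisory scores reviewed by a human

  @dataclass
  class CandidateDecision:
      candidate_id: str
      ai_recommendation: str        # e.g. "suggest advancing"; advisory only
      human_reviewer: str | None    # recruiter or hiring manager
      human_approved: bool = False

  def finalize_decision(decision: CandidateDecision) -> CandidateDecision:
      """Refuse to finalize any decision that lacks explicit human approval."""
      if decision.human_reviewer is None or not decision.human_approved:
          raise PermissionError("AI output is advisory; a human must approve the decision.")
      return decision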

1.3 Candidate Rights Regarding AI

Right to Human Review:

  • Any candidate can request that a human review AI-generated assessments

  • No additional cost or penalty for requesting human review

  • Responses provided within reasonable timeframes

Right to Explanation:

  • Candidates can ask how AI evaluated their profile

  • We provide clear, understandable explanations

  • We identify which factors influenced AI recommendations

Right to Challenge:

  • Candidates can contest AI-generated information

  • We provide mechanisms to correct errors

  • We escalate disputes to human reviewers

2. Bias Prevention and Fairness

2.1 Prohibited: Sensitive Attribute Inference and Use

Absolute Prohibition: Our AI systems are designed to NOT infer, generate, process, or use the following sensitive characteristics in any way:

Protected Categories We Do NOT Process:

  • Race or ethnicity

  • National origin or citizenship status (except where legally required for work authorization)

  • Gender identity or sex

  • Sexual orientation

  • Religious beliefs or philosophical views

  • Political opinions or affiliations

  • Trade union membership

  • Health information, medical conditions, or disability status (except accommodation needs a candidate voluntarily discloses)

  • Genetic data or biometric identifiers

  • Pregnancy or family planning

  • Age (except minimum legal working age verification)

  • Marital or family status

Technical Implementation (illustrative sketch below):

  • Input filters to remove sensitive data from processing

  • Output sanitization to prevent generation of sensitive inferences

  • Regular auditing of AI outputs for accidental bias indicators

  • Training data curation to exclude discriminatory patterns
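
As a minimal sketch of the input filtering described above (the field names and allow-list are illustrative assumptions, not our production pipeline):

  SENSITIVE_FIELDS = {
      "race", "ethnicity", "gender", "sex", "sexual_orientation", "religion",
      "political_affiliation", "union_membership", "health", "disability",
      "genetic_data", "biometrics", "pregnancy", "age", "marital_status",
  }

  def filter_profile(profile: dict) -> dict:
      """Drop fields associated with protected categories before any AI processing."""
      return {k: v for k, v in profile.items() if k.lower() not in SENSITIVE_FIELDS}

  # Example: job-relevant fields are kept, sensitive ones are removed.
  raw = {"name": "A. Candidate", "skills": ["Python", "SQL"], "age": 42}
  assert "age" not in filter_profile(raw)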

If Sensitive Data Appears:

  • Clients must not use such information in hiring decisions

  • Clients must report it immediately to info@shortlistd.io

  • We will investigate and improve filtering

  • Affected candidates will be notified if their data was inappropriately processed

2.2 Fairness-by-Design Approach

Principle: Our AI systems are designed from the ground up to minimize bias and promote fair evaluation.

Design Choices:

  1. Competency-Based Evaluation:

    • Focus on demonstrated skills and relevant experience

    • Avoid proxies that correlate with protected characteristics

    • Use standardized, job-relevant criteria

  2. Structured Assessment:

    • Standardized questions and evaluation frameworks

    • Consistent criteria applied to all candidates

    • Reduces subjective human bias

  3. Diverse Training Data:

    • Training data represents diverse candidate populations

    • Actively balance datasets to prevent demographic skews

    • Regular data quality audits

  4. Fairness Testing (see the sketch after this list):

    • Regular testing for disparate impact across demographic groups

    • Statistical analysis of hiring outcomes

    • Third-party bias audits (planned as we scale)
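
One widely used statistical screen for disparate impact is the "four-fifths rule": each group's selection rate should be at least 80% of the highest group's rate. The sketch below illustrates that calculation; it is a simplified example, not our production audit tooling.

  def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
      """outcomes maps group -> (selected, total applicants)."""
      return {group: selected / total for group, (selected, total) in outcomes.items()}

  def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
      """Each group's selection rate relative to the highest group's rate."""
      rates = selection_rates(outcomes)
      highest = max(rates.values())
      return {group: rate / highest for group, rate in rates.items()}

  example = {"group_a": (40, 100), "group_b": (28, 100)}
  flagged = [g for g, r in impact_ratios(example).items() if r < 0.8]   # ["group_b"]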

2.3 Bias Monitoring and Mitigation

Ongoing Monitoring:

  • Track hiring outcomes by client and role type

  • Analyze patterns that might indicate bias

  • Collect and review candidate feedback

  • Monitor AI recommendations for potential discriminatory patterns

Corrective Actions:

  • Immediate investigation of bias indicators

  • Model retraining or adjustment when bias detected

  • Client notification and guidance on fair practices

  • Continuous improvement of fairness safeguards

Transparency in Limitations:

  • We acknowledge that no AI system is perfect

  • We openly communicate known limitations

  • We commit to continuous improvement

  • We invite feedback and external scrutiny

3. Transparency and Explainability

3.1 How Our AI Works

High-Level Process (a simplified matching sketch follows Step 5):

Step 1: Candidate Discovery

  • AI searches public sources for candidates matching job requirements

  • Uses semantic search and natural language understanding

  • Retrieves publicly available professional information

Step 2: Data Enrichment

  • AI extracts and structures information from profiles

  • Identifies skills, experience levels, and qualifications

  • Finds contact information from legitimate public sources

Step 3: Matching and Ranking

  • AI compares candidate profiles to job requirements

  • Generates match scores based on relevant criteria

  • Provides explanations for match scores

Step 4: Assessment Assistance

  • AI helps structure interviews and evaluations

  • Suggests competency-based questions

  • Analyzes interview responses for key skills and qualifications

  • Generates summaries and evaluation support

Step 5: Human Decision

  • Human reviewers examine AI recommendations

  • Humans consider AI inputs alongside their own judgment

  • Humans make all final hiring decisions
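
To make Step 3 concrete, matching can be pictured as comparing a job description and a candidate profile in a shared embedding space and reporting the similarity as an advisory percentage. The sketch below uses plain cosine similarity over hypothetical embedding vectors; it is a simplified illustration, not our production ranking model.

  import math

  def cosine_similarity(a: list[float], b: list[float]) -> float:
      """Similarity between two embedding vectors, in the range [-1, 1]."""
      dot = sum(x * y for x, y in zip(a, b))
      norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
      return dot / norm if norm else 0.0

  def match_score(job_embedding: list[float], candidate_embedding: list[float]) -> int:
      """Advisory match percentage; a human reviews it before any decision (Step 5)."""
      return round(max(cosine_similarity(job_embedding, candidate_embedding), 0.0) * 100)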

3.2 Explainability Standards

For Every AI Recommendation, We Provide:

  • What: Clear statement of the recommendation or score

  • Why: Specific factors that influenced the recommendation

  • Evidence: Concrete examples from candidate data supporting the recommendation

  • Confidence: Indication of the AI's certainty level

  • Limitations: What the AI did not consider or could not evaluate

Example Explanation:

"Match Score: 85% Reasoning: Candidate has 5+ years of experience with Python and demonstrated expertise in machine learning projects (see GitHub portfolio). Relevant skills include TensorFlow and PyTorch, which match job requirements. Strong communication skills evidenced by technical blog posts. Limitations: AI cannot evaluate culture fit or soft skills not demonstrated in written content."

3.3 Transparency to Candidates

What Candidates Can Learn:

  • That AI was used in their evaluation

  • What information the AI analyzed

  • What factors influenced their match score or recommendation

  • How to request human review or contest AI outputs

How to Request Information:

  • Email info@shortlistd.io with "AI Processing Inquiry"

  • Include your name and relevant job application details

  • We respond within 30 days with clear, non-technical explanations

4. Data Minimization and Purpose Limitation

4.1 Collect Only What's Necessary

Principle: We collect and process only the personal data necessary for legitimate recruitment purposes.

What We Collect:

  • Professional work history and experience

  • Relevant skills and qualifications

  • Education and certifications

  • Public contact information (business email/phone)

  • Public portfolios and professional content

What We Do NOT Collect (Unless Voluntarily Provided):

  • Personal (non-business) contact information

  • Social security numbers or government IDs

  • Financial information (we do not handle payroll, so none is collected)

  • Personal social media content unrelated to professional qualifications

  • Private communications or documents

  • Sensitive personal information (see Section 2.1)

4.2 Purpose Limitation

Permitted Uses of Candidate Data:

  • Matching candidates with job opportunities

  • Facilitating communication between candidates and employers

  • Evaluating candidate qualifications for specific roles

  • Providing recruitment insights to clients

  • Improving our Services (using aggregated, anonymized data)

Prohibited Uses:

  • Marketing non-recruitment services to candidates without consent

  • Building or selling candidate databases

  • Training AI models on personal data without consent

  • Any use unrelated to recruitment and hiring

4.3 Storage and Retention Minimization

Data Lifecycle:

  1. Collection: Gather only necessary data

  2. Active Use: Process for specific recruitment purpose

  3. Retention: Keep only as long as needed

  4. Deletion: Remove when purpose is fulfilled or upon request

Retention Periods:

  • Active recruitment data: Duration of job search or client engagement

  • Cached search results: Maximum 90 days

  • After deletion request: Immediate removal + suppression list entry

  • See Privacy Policy Section 4 for detailed schedules

Suppression Lists (illustrative sketch below):

  • Candidates who request deletion are permanently suppressed

  • Prevents re-discovery and re-processing

  • Honors "do not process" requests indefinitely

5. Consent and Control

5.1 Candidate Consent and Notification

For Publicly Available Data:

  • We rely on clients' legitimate interest in recruitment (GDPR lawful basis)

  • Candidates are notified when contacted for opportunities

  • Candidates can opt out at any time

Article 14 Notices (GDPR): When we collect candidate data indirectly (from public sources), we ensure:

  • Candidates are informed about our processing

  • Information about data sources, purposes, and retention

  • Clear instructions for exercising privacy rights

  • Notification within 30 days of collection or upon first contact

Explicit Consent Required For:

  • Processing beyond recruitment purposes

  • Training AI models on individual candidate data

  • Sharing data with third parties beyond recruitment clients

  • Any use of sensitive personal information

5.2 Client Consent and Requirements

Clients Using Our Services Must:

  • Establish lawful basis for processing candidate data

  • Provide required notices to candidates (or authorize us to do so)

  • Use candidate data only for recruitment purposes

  • Honor candidate privacy rights and deletion requests

  • Document consent where required

  • Sign our Data Processing Agreement

Client Responsibilities:

  • Ensure recruitment practices comply with employment laws

  • Avoid discrimination and bias in hiring decisions

  • Conduct due diligence beyond AI recommendations

  • Maintain records of hiring decisions and rationale

5.3 Individual Control and Rights

Candidates Can:

  • Request access to their data

  • Correct inaccurate information

  • Request deletion (with suppression to prevent re-discovery)

  • Object to processing

  • Restrict how their data is used

  • Request human review of AI assessments

  • Opt out of AI-assisted processing where legally required

How to Exercise Rights:

  • Email info@shortlistd.io with "Candidate Privacy Request"

  • Include your name and any known contact information

  • We respond within 30 days (or as required by applicable law)

6. AI Training Data and Model Governance

6.1 Training Data Standards

Data Sources for AI Training:

  • Publicly available, anonymized datasets

  • Synthetic and simulated data

  • Properly licensed commercial datasets

  • Aggregated, de-identified usage data (with consent)

Prohibited Training Data:

  • Personal data without explicit consent

  • Datasets known to contain bias or discriminatory patterns

  • Illegally obtained or scraped data

  • Proprietary data from competitors

  • Sensitive personal information

Data Quality and Curation:

  • Regular audits of training data for quality and fairness

  • Removal of biased or problematic data

  • Balanced representation across demographics

  • Documentation of data sources and lineage

6.2 Model Development and Testing

Development Standards:

  • Fairness testing throughout development lifecycle

  • Validation on diverse test datasets

  • Performance metrics that include fairness indicators

  • Documentation of model architecture and decision logic

Pre-Deployment Testing (illustrative gate sketch below):

  • Bias testing across protected characteristics

  • Accuracy and performance benchmarks

  • Edge case and adversarial testing

  • User acceptance testing with real recruiters
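
As an illustration of what a pre-deployment bias test can look like (the threshold, slice names, and scores below are assumptions for the example, not published acceptance criteria): advisory scores on a balanced validation set are compared across evaluation slices, and release is blocked if the gap is too large.

  MAX_SCORE_GAP = 5.0   # illustrative release threshold, in score points

  def mean(values: list[float]) -> float:
      return sum(values) / len(values)

  def fairness_gate(scores_by_slice: dict[str, list[float]]) -> None:
      """Block release if mean advisory scores diverge too much across slices."""
      means = {s: mean(v) for s, v in scores_by_slice.items()}
      gap = max(means.values()) - min(means.values())
      if gap > MAX_SCORE_GAP:
          raise RuntimeError(f"Release blocked: score gap {gap:.1f} exceeds {MAX_SCORE_GAP}")

  fairness_gate({"slice_a": [78, 82, 80], "slice_b": [79, 81, 83]})   # passes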

Post-Deployment Monitoring:

  • Ongoing performance monitoring

  • Real-world bias detection

  • Feedback collection from users and candidates

  • Regular model audits and updates

6.3 AI Model Transparency

Documentation We Maintain:

  • Model architecture and design decisions

  • Training data sources and characteristics

  • Known limitations and failure modes

  • Performance metrics and fairness evaluations

  • Version history and change logs

Available Upon Request:

  • General information about how our AI works

  • Explanation of specific recommendations

  • Information about data sources and processing

  • Details on fairness testing and bias mitigation

Proprietary Protection:

  • We balance transparency with protection of trade secrets

  • We provide meaningful explanations without exposing proprietary algorithms

  • We prioritize user rights over business secrecy

7. Security and AI Safety

7.1 AI System Security

Protecting AI Infrastructure:

  • Secure model storage and access controls

  • Encrypted data pipelines

  • Authentication and authorization for AI system access

  • Monitoring for unauthorized use or tampering

Adversarial Robustness (illustrative sketch below):

  • Testing against adversarial attacks

  • Input validation and sanitization

  • Output verification and sanity checks

  • Anomaly detection in AI behavior
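
A minimal sketch of the input-validation and output-check ideas above, with illustrative limits and patterns: inputs are length-limited and stripped of control characters before processing, and model output that references sensitive attributes is routed to human review.

  import re

  MAX_INPUT_CHARS = 20_000      # illustrative limit
  CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")
  SENSITIVE_TERMS = re.compile(r"\b(race|religion|pregnan\w*|disab\w*)\b", re.IGNORECASE)

  def validate_input(text: str) -> str:
      """Length-limit and sanitize text before it reaches any model."""
      if len(text) > MAX_INPUT_CHARS:
          raise ValueError("Input exceeds maximum allowed length")
      return CONTROL_CHARS.sub("", text)

  def check_output(text: str) -> str:
      """Flag output that mentions sensitive attributes so a human reviews it."""
      if SENSITIVE_TERMS.search(text):
          raise ValueError("Output references sensitive attributes; route to human review")
      return text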

7.2 Preventing Misuse

Prohibited Uses of Our AI:

  • Discrimination or bias in hiring

  • Privacy violations or unauthorized surveillance

  • Manipulation or deception of candidates

  • Circumventing employment laws or regulations

  • Harassment or abuse of candidates

Enforcement:

  • Monitoring for misuse patterns

  • Client education on proper use

  • Suspension or termination for violations

  • Cooperation with authorities on illegal activities

7.3 Safety and Harm Prevention

Risk Assessment:

  • Regular evaluation of potential harms from AI use

  • Impact assessments for new features or models

  • Consideration of vulnerable populations

  • Mitigation strategies for identified risks

Incident Response:

  • Clear procedures for AI system failures or errors

  • Rapid response to bias or discrimination reports

  • Transparent communication about incidents

  • Corrective actions and lessons learned

8. Third-Party AI and Data Providers

8.1 Vendor Selection and Management

Due Diligence for Third-Party AI:

  • Evaluation of vendor AI ethics and fairness practices

  • Assessment of data sources and training methodologies

  • Review of privacy and security practices

  • Contractual requirements for compliance

Current Third-Party Providers:

  • Professional data enrichment

  • Public data search and discovery

  • Language models for analysis and summarization

  • Cloud infrastructure and hosting

Vendor Requirements:

  • Compliance with applicable data protection laws

  • Transparent data sourcing practices

  • Fairness and bias mitigation

  • Data Processing Agreements (DPAs) in place

  • Regular security and compliance audits

8.2 Control Over Third-Party AI

Our Responsibilities:

  • Select vendors with strong ethical practices

  • Ensure DPAs and security agreements are in place

  • Monitor vendor performance and compliance

  • Maintain ability to switch vendors if needed

  • Propagate data deletion requests to vendors

Limitations:

  • We rely on vendors' representations about their practices

  • We conduct reasonable due diligence but cannot audit vendors' internal systems

  • Third-party AI may have its own limitations and biases

  • We take responsibility for vendor selection and management

9. Compliance with AI Regulations

9.1 Current Regulatory Landscape

We actively monitor and comply with emerging AI regulations, including:

European Union AI Act:

  • Classification of our AI as "high-risk" for employment use

  • Enhanced transparency and documentation requirements

  • Bias testing and fairness validation

  • Human oversight mandates

U.S. State and Local AI Laws:

  • New York City Local Law 144 (automated employment decision tools)

  • Illinois Artificial Intelligence Video Interview Act

  • California AB 2029 (automated decision systems)

  • Other emerging state regulations

Sector-Specific Regulations:

  • EEOC guidance on AI in hiring

  • OFCCP compliance for federal contractors

  • Industry-specific employment laws

International Regulations:

  • UK AI regulation (principles-based, regulator-led approach)

  • Canada's proposed Artificial Intelligence and Data Act (AIDA)

  • Other jurisdictions as applicable

9.2 Proactive Compliance Measures

What We Do Now:

  • Design AI systems with regulatory requirements in mind

  • Maintain documentation required by regulations

  • Conduct impact assessments for high-risk AI

  • Provide transparency and explainability

  • Implement human oversight and control

Future-Proofing:

  • Monitor regulatory developments worldwide

  • Adapt practices to meet new requirements

  • Engage with regulators and industry groups

  • Participate in AI ethics discussions

  • Regular compliance audits

9.3 Client Compliance Support

We Help Clients Comply With:

  • Employment laws and anti-discrimination regulations

  • Data protection laws (GDPR, CCPA, etc.)

  • AI-specific regulations in their jurisdictions

  • Industry-specific requirements

Resources We Provide:

  • Transparency documentation for AI systems

  • Data Processing Agreements (DPAs)

  • Privacy notices and candidate communication templates

  • Guidance on lawful basis and legitimate interest

  • Audit trails and record-keeping support

10. Accountability and Continuous Improvement

10.1 Governance Structure

Responsible Parties:

  • CEO: Ultimate accountability for ethical AI practices

  • CTO: Technical implementation of AI ethics principles

  • Data Protection Officer (DPO): Privacy and data protection oversight

  • Product Team: Embedding ethics in feature development

  • Compliance Team: Regulatory monitoring and adherence

Decision-Making:

  • Ethics considerations in all AI development decisions

  • Cross-functional review of new AI features

  • Regular executive review of AI practices and impacts

  • Board oversight of AI strategy and risk

10.2 Feedback and Reporting

Internal Feedback:

  • Team members encouraged to raise ethical concerns

  • Regular ethics discussions in development process

  • No retaliation for good-faith ethics concerns

External Feedback:

  • Candidate and client feedback channels

  • Third-party audits and assessments (as we scale)

  • Engagement with AI ethics researchers

  • Participation in industry working groups

Reporting Concerns:

  • Email info@shortlistd.io with "AI Ethics Concern"

  • Confidential reporting for sensitive issues

  • Investigation and response procedures

  • Transparent communication about resolutions (where appropriate)

10.3 Continuous Improvement

What We Track:

  • AI system performance and accuracy

  • Fairness metrics and bias indicators

  • Candidate satisfaction and feedback

  • Client satisfaction and hiring outcomes

  • Regulatory compliance status

  • Incident reports and resolutions

How We Improve:

  • Regular review of metrics and feedback

  • Iterative model improvements and updates

  • Updated policies based on lessons learned

  • Adoption of new best practices and technologies

  • Response to regulatory changes

Transparency About Progress:

  • Annual AI ethics report (planned)

  • Public disclosure of major changes to AI practices

  • Honest communication about limitations and failures

  • Ongoing dialogue with stakeholders

11. Contact and Questions

11.1 For Candidates

If you have questions about how AI was used in your recruitment process:

  • Email: info@shortlistd.io

  • Subject: "AI Processing Inquiry" or "Candidate Question"

  • Include: Your name and relevant job application details

Response Time: Within 30 days

11.2 For Clients

For guidance on ethical use of our AI tools:

  • Email: info@shortlistd.io

11.3 For Researchers and Advocates

We welcome engagement with AI ethics researchers, advocates, and organizations:

  • Email: info@shortlistd.io

11.4 For Reporting Concerns

To report ethical concerns, bias, or potential violations:

  • Email: info@shortlistd.io

  • Subject: "AI Ethics Concern" or "Bias Report"

  • Confidential handling available upon request

12. Commitment and Disclaimer

12.1 Our Commitment

We are committed to:

  • Building AI that respects human dignity and rights

  • Preventing bias and discrimination in hiring

  • Transparent and explainable AI systems

  • Compliance with all applicable laws and regulations

  • Continuous improvement of our ethical practices

  • Accountability for AI outcomes and impacts

12.2 Honest Acknowledgment of Limitations

We Acknowledge That:

  • No AI system is perfect or completely free of bias

  • AI technology and best practices are rapidly evolving

  • We are learning and improving as we grow

  • Unexpected issues may arise despite our best efforts

  • We cannot guarantee perfect fairness or accuracy

Our Promise:

  • We will be transparent about limitations and failures

  • We will respond quickly to issues when they arise

  • We will continuously work to improve

  • We will prioritize ethics and fairness over commercial convenience

13. Policy Updates and Versioning

Review Schedule: This policy is reviewed and updated at least annually, or more frequently as:

  • AI regulations evolve

  • We deploy new AI capabilities

  • We learn from experience and feedback

  • Industry best practices change

Notification of Changes:

  • Material changes will be communicated via email and website notice

  • "Last Updated" date will be updated

  • Previous versions available upon request

Feedback Welcome: We actively seek input on this policy. Email info@shortlistd.io with "AI Policy Feedback" to share your thoughts.

This Ethical AI Policy is integral to our Terms of Service and Privacy Policy and should be read in conjunction with those documents.