Principal, AI Governance and Tooling
Global Payments
Every day, Global Payments makes it possible for millions of people to move money between buyers and sellers using our payments solutions for credit, debit, prepaid and merchant services. Our worldwide team helps over 3 million companies, more than 1,300 financial institutions and over 600 million cardholders grow with confidence and achieve amazing results. We are driven by our passion for success and we are proud to deliver best-in-class payment technology and software solutions. Join our dynamic team and make your mark on the payments technology landscape of tomorrow.
Summary of This Role
The Principal, AI Governance and Tooling is a senior technical and governance role responsible for independently validating and continuously monitoring AI/ML models to ensure they meet enterprise standards for robustness, fairness, transparency, and compliance.
As part of the Enterprise Data & AI Governance Program, this role supports the Responsible Use of AI (RUAI) framework by designing and executing rigorous validation protocols, deploying advanced monitoring capabilities, and leveraging AI governance tools and platforms to track, report, and remediate risks throughout the AI lifecycle.
The Principal will work closely with Data Science, Engineering, Legal, Compliance, Risk, and IT teams to assess AI model performance, conduct bias and fairness audits, implement explainability techniques, and ensure adherence to ethical AI principles and applicable regulatory requirements. This role also serves as a technical authority on model reliability, security, and operational risk, providing actionable recommendations to leadership for continuous improvement.
What Part Will You Play?
- Lead the design, development, and execution of independent AI/ML model validation frameworks across various use cases.
- Conduct bias audits, adversarial testing, and stress testing to evaluate model robustness, fairness, and resilience against vulnerabilities.
- Apply statistical testing, benchmarking methodologies, and explainability (XAI) techniques to ensure models are transparent and interpretable.
- Utilize synthetic data generation and automated testing frameworks to simulate edge cases and rare scenarios for risk assessment.
- Document validation methodologies, findings, and risk-based recommendations for stakeholders, ensuring traceability and audit-readiness.
- Develop and implement enterprise AI monitoring frameworks for deployed models, focusing on real-time performance tracking, bias detection, and compliance verification.
- Apply anomaly detection and AI observability solutions to identify and remediate performance degradation, drift, or ethical risks.
- Oversee incident response for AI failures, coordinating with risk, compliance, and engineering teams to ensure timely mitigation.
- Integrate monitoring insights into governance dashboards and reporting platforms to inform executives and regulatory stakeholders.
- Ensure all testing and monitoring activities align with RUAI principles, industry best practices, and applicable regulations (e.g., EU AI Act, GDPR, CCPA, Colorado AI Act, NIST AI RMF).
- Leverage AI governance platforms and risk assessment tools to centralize validation evidence, compliance records, and ongoing monitoring metrics.
- Partner with Legal, Compliance, and Risk to interpret regulatory requirements and translate them into actionable technical and operational controls.
- Provide expert guidance to data scientists and engineers on bias mitigation, fairness optimization, and explainability best practices.
- Stay informed on emerging trends in AI risk assessment, validation methodologies, monitoring tools, and regulatory developments.
- Lead workshops, training sessions, and cross-functional knowledge sharing to advance organizational maturity in AI testing and monitoring.
- Contribute to enterprise AI governance strategy by identifying technology investments, process enhancements, and automation opportunities.
- Provide guidance and mentoring to analysts as needed.
- This list is not exhaustive; other duties may be assigned.
What Are We Looking For in This Role?
Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Information Systems, or a related field.
- 5+ years in independent AI/ML model testing and validation, including robustness, fairness, and compliance verification.
- 3–5+ years in AI monitoring and risk management, including real-time model performance tracking, anomaly detection, and compliance monitoring.
- Proven experience developing and executing rigorous validation frameworks and performing bias, adversarial, and stress testing.
- Strong knowledge of AI governance principles, ethical AI frameworks, and relevant regulations (EU AI Act, GDPR, CCPA, Colorado AI Act, NIST AI RMF).
- Hands-on experience with validation tools, statistical testing frameworks, synthetic data generation, automated testing platforms, and AI observability tools.
- Deep expertise in model validation, fairness audits, and explainability techniques.
- Proficiency in monitoring and logging frameworks for AI/ML systems.
- Strong analytical and problem-solving skills with the ability to identify risks and propose actionable mitigations.
- Excellent written and verbal communication skills to document findings, influence stakeholders, and present to executive leadership.
- Ability to work across diverse teams and translate complex technical concepts into clear operational and compliance guidance.
Preferred Qualifications
- Master’s degree, typically in Business Administration, Computer Science, Information Management, Quantitative Analytics, Data Science, or a similar discipline.
- Experience establishing AI Governance programs, specifically processes, procedures, and frameworks for AI model testing and validation, AI solution monitoring, and AI risk management.
What Are Our Desired Skills and Capabilities?
- Skills / Knowledge - Has broad expertise or unique knowledge and uses these skills to contribute to the development of company objectives and principles and to achieve goals in creative and effective ways. Barriers to entry such as technical committee review may exist at this level.
- Job Complexity - Works on significant and unique issues where analysis of situations or data requires an evaluation of intangibles. Exercises independent judgment in methods, techniques and evaluation criteria for obtaining results. Creates formal networks involving coordination among groups.
- Supervision - Acts independently to determine methods and procedures on new or special assignments. May supervise the activities of others.
Global Payments Inc. is an equal opportunity employer. Global Payments provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex (including pregnancy), national origin, ancestry, age, marital status, sexual orientation, gender identity or expression, disability, veteran status, genetic information or any other basis protected by law. If you wish to request reasonable accommodations related to applying for employment or provide feedback about the accessibility of this website, please contact jobs@globalpay.com.