Faculty AI • London - Hybrid

Lead Research Scientist

Employment type:  Full time

Job Description

About Faculty


At Faculty, we transform organisational performance through safe, impactful and human-centric AI.

With a decade of experience, we provide over 300 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme.

Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI.

Should you join us, you’ll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.


About the role

As a Lead Research Scientist at Faculty, you will lead scientific research, and other researchers, in AI safety, advancing scientific understanding of the field. You will contribute both to external publications and to Faculty’s commercial ambition to build safe AI systems.

This is a great opportunity to join a small, high-agency team of machine learning researchers and practitioners applying data science and machine learning to real-world business problems.

What you’ll be doing

Your role will evolve alongside business needs, but you can expect your key responsibilities to include:

Research Leadership:

  • Lead the AI safety team’s research agenda, setting priorities and ensuring alignment with Faculty’s long-term goals.

  • Conduct and oversee the development of cutting-edge AI safety research, with a focus on large language models and other safety-critical AI systems.

  • Publish high-impact research in leading conferences and journals (e.g., NeurIPS, ACL, ICML, ICLR, AAAI).

  • Support Faculty’s positioning as a leader in AI safety through thought leadership and stakeholder engagement.

Research Agenda Development:

  • Shape our research agenda by identifying impactful research opportunities and balancing scientific and practical priorities.

  • Interface with the wider business to ensure alignment between the R&D team’s research efforts and the company’s long-term goals, with a specific focus on AI safety and related commercial projects.

Team Management and Mentorship:

  • Build and lead a growing team of researchers, fostering a collaborative and innovative culture across a wide range of AI safety-relevant research topics.

  • Provide mentorship and technical guidance to researchers across diverse AI safety topics.

Technical Contributions:

  • Lead hands-on contributions to technical research.

  • Collaborate on delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with a focus on government and commercial partners.




Who we are looking for

  • A proven track record of high-impact AI research, evidenced by top-tier academic publications (ideally in top machine learning or NLP conferences such as ACL, NeurIPS, ICML, ICLR or AAAI) or equivalent experience (e.g. within model provider labs).

  • Deep domain knowledge in language models and AI safety, with the ability to contribute well-informed views on the differential value and tractability of different parts of the traditional AI safety research agenda, or of other areas of machine learning (e.g. explainability).

  • Practical experience of machine learning, with a focus on areas such as robustness, explainability, or uncertainty estimation.

  • Advanced programming and mathematical skills in Python, and experience with the standard Python data science stack (NumPy, pandas, scikit-learn, etc.).

  • The ability to conduct and oversee complex technical research projects.

  • A passion for leading and developing technical teams; adopting a caring attitude towards the personal and professional development of others.

  • Excellent verbal and written communication skills.


The following would be a bonus, but none are required:

  • Commercial experience applying AI safety principles in practical or high-stakes contexts.

  • Background in red-teaming, evaluations, or safety testing for government or industry applications.

  • We welcome applicants with academic research experience in a STEM or related subject. A PhD is great, but certainly not necessary.

What we can offer you:

The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day.

Faculty is the professional challenge of a lifetime. You’ll be surrounded by an impressive group of brilliant minds working to achieve our collective goals.

Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you’ll learn something new from everyone you meet.

Company benefits

  • Unlimited annual leave
  • Health insurance – full coverage through Axa
  • Dental coverage
  • Enhanced maternity leave
  • Enhanced paternity leave
  • Work from home budget – full office set-up including desk, chair, etc.
  • Equity packages
  • Open to compressed hours
  • Open to part-time work for some roles
  • Work from anywhere scheme – 20 days per year
  • Free meals – free lunches on set days
  • Enhanced WFH tools
  • Eye care support
  • Enhanced sick days
  • Mental health platform access
  • Family health insurance – add your family to our Axa plan
  • Compassionate leave
  • Health assessment
  • Travel loan
  • Optional unpaid leave
  • Shared parental leave
  • Adoption leave
  • Cycle to work scheme
  • Bike parking
  • Fully stocked snack cupboard
  • Sabbaticals
  • Employee assistance programme
  • Lunch and learns


Working at Faculty AI

Company employees: 390

Gender diversity (male:female): 65:35

Funding levels: circa £40m

Currently Hiring Countries

United Kingdom