Head of Testing - AI Safety Institute
Department for Science, Innovation & Technology
Apply before 11:55 pm on Monday 18th November 2024
About the job
Job summary
AI is bringing about huge changes to society, and it is our job as a team to work out how Government should respond. It is a once-in-a-generation moment, and an incredibly fast-paced and exciting environment.
AI Safety Institute
Advances in artificial intelligence (AI) over the last decade have been impactful, rapid, and unpredictable. Advanced AI systems have the potential to drive economic growth and productivity, boost health and wellbeing, improve public services, and increase security.
But advanced AI systems also pose significant risks, as detailed in the government’s paper on Capabilities and Risks from Frontier AI published in October. AI can be misused – this could include using AI to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons. AI can cause societal harms – there have been examples of AI chatbots encouraging harmful actions, promoting skewed or radical views, and providing biased advice. AI-generated content that is highly realistic but false could reduce public trust in information. Some experts are concerned that humanity could lose control of advanced systems, with potentially catastrophic and permanent consequences.
We will only unlock the benefits of AI if we can manage these risks. At present, our ability to develop powerful systems outpaces our ability to make them safe. The first step is to better understand the capabilities and risks of these advanced AI systems. This will then inform our regulatory framework for AI, so we ensure AI is developed and deployed safely and responsibly.
The UK is taking a leading role in driving this conversation forward internationally. We hosted the world’s first major AI Safety Summit and have launched the AI Safety Institute. Responsible government action in an area as new and fast-paced as advanced AI requires governments to develop their own sophisticated technical and sociotechnical expertise. The AI Safety Institute is advancing the world’s knowledge of AI safety by carefully examining, evaluating, and testing new types of AI, so that we understand what each new model is capable of. The Institute is conducting fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI. The Institute will make its work available to the world, enabling an effective global response to the opportunities and risks of advanced AI.
Job description
The Testing Team is a high-profile and high-performing team in AISI, responsible for delivering against AISI’s core mission: developing and conducting evaluations on frontier AI systems, before and after they are deployed. AISI is the world’s first state-backed organisation doing this work. We work closely with frontier labs, and our work on testing delivers impact to both companies and governments around the world by providing high-quality empirical evidence of frontier AI capabilities and risks.
The objectives of the team are:
- Ensuring the UK government, international partners and the public have high quality, accurate information about frontier AI system capabilities and how these are developing over time, in order to make informed judgements about how AI systems might impact people and society.
- Providing an independent source of information to frontier AI developers about system capabilities and safety, enabling iterative improvements to the safety of their overall systems.
- Running smooth and efficient testing processes for the organisation that maximise the impact of government’s finite technical resources on improving AI safety.
The Testing Team sits within the Research Unit, which is the centre of AISI’s technical work – it is where our technical teams are housed and is responsible for advancing our work on frontier AI evaluations and impact assessments, safeguards and interventions, risk modelling, and foundational AI safety research.
Key Responsibilities
- Developing a clear vision and strategy for the testing team and translating this into clear deliverables. This includes developing and refining a theory of impact for this work, prioritising testing efforts to maximise impact and identifying new routes to impact.
- Oversight of testing delivery – leading a cross-functional team of technical and policy leads to ensure that testing projects run efficiently and effectively, often to challenging timelines. Refining these processes over time to ensure we are learning lessons and making efficiency improvements.
- Senior stakeholder management – building and leveraging relationships with:
- UK government – engaging national security colleagues, ministers and senior officials in DSIT on testing progress and outputs.
- Frontier AI labs – working through the approach to evaluations and ensuring feedback is impactful and drives improvements to model safety. You will work closely with the company engagement team.
- US and other AISIs – operationalising joint testing between the UK and the US, and broader engagement with other AISIs on testing and evaluation.
- Building and developing a high-performing, effective and happy multi-disciplinary team. We’ve recently decided to grow the team to build a “science of evaluations” function – both to provide internal scrutiny and quality assurance on our technical results from testing, and to ensure that we are tracking the latest research and incorporating it into our approach. You will work with a technical co-lead to build this team.
Person specification
Essential skills and experience
- Setting strategy and directing work towards impact in a rapidly changing landscape.
- Experience of leading high performing teams in a fast-paced environment – establishing clear processes and setting direction.
- Building trusted relationships with senior stakeholders across industry and government.
- Expertise in frontier AI safety and/or demonstrable experience of upskilling quickly in a complex new policy area.
Desirable skills and experience
- Experience in frontier AI policy.
- Experience of working with the national security community.
Candidates will be required to hold, or be willing to obtain, SC clearance.
Behaviours
We'll assess you against these behaviours during the selection process:
- Seeing the Big Picture
- Communicating and Influencing
- Delivering at Pace
- Leadership
Benefits
The Department for Science, Innovation and Technology offers a competitive mix of benefits including:
- A culture of flexible working, such as job sharing, homeworking and compressed hours.
- Automatic enrolment into the Civil Service Pension Scheme, with an employer contribution of 28.97%.
- A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
- An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
- Access to a range of retail, travel and lifestyle employee discounts.
Office attendance
The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home-based location for 40-60% of the time over the accounting period.
Things you need to know
Selection process details
As part of the application process, you will be asked to complete a CV and personal statement.
Further details around what this will entail are listed on the application form.
The CV should set out the candidate’s career history, with key responsibilities and achievements. Candidates should ensure they provide employment history that relates to the essential and desirable criteria.
Please note your Personal Statement (max 750 words) should explain how your personal skills, qualities and experience provide evidence of your suitability for the role, with reference to the person specification. Candidates may also choose to reference the desirable skills listed.
The sift will be conducted on candidates’ personal statements and CVs.
In the event of a large number of applicants, applications will be sifted on candidates’ personal statements.
Candidates who pass the initial sift may be progressed to a full sift, or progressed straight to assessment/interview.
Interview 1
- Interview 1 will be a behaviour and strengths based interview.
- Candidates will be asked to prepare a presentation and will be assessed on the behaviour 'Seeing the Big Picture'.
- Candidates will also be expected to complete an exercise and will be assessed on the behaviour 'Delivering at Pace'.
Interview 2
- SCS interview
Sift and interview dates
Expected Timeline subject to change
Sift dates: w/c 18th November 2024
Interview dates: w/c 25th November 2024
Interview Location: MS Teams.
Candidates are asked to note the above timetable and to exercise flexibility throughout the recruitment and selection process.
Further Information
Reasonable Adjustment
We are proud to be a Disability Confident Leader and we welcome applications from disabled candidates and candidates with long-term conditions.
Information about the Disability Confident Scheme (DCS) and some examples of adjustments that we offer to disabled candidates and candidates with long-term health conditions during our recruitment process can be found in our DSIT Candidate Guidance. A DSIT Plain Text Version of the guidance is also available.
We encourage candidates to discuss their adjustment needs by emailing the job contact, which can be found under the 'Contact point for applicants' section.
If you are experiencing accessibility problems with any attachments on this advert, please contact the email address in the 'Contact point for applicants' section.
If successful and transferring from another Government Department a criminal record check may be carried out.
New entrants are expected to join on the minimum of the pay band.
A location-based reserve list of successful candidates will be kept for 12 months. Should another role become available within that period you may be offered this position.
Please note terms and conditions are attached. Please take time to read the document to determine how these may affect you.
Any move to the Department for Science, Innovation and Technology from another employer will mean you can no longer access childcare vouchers. This includes moves between government departments. You may, however, be eligible for other government schemes, including Tax-Free Childcare. You can determine your eligibility at https://www.childcarechoices.gov.uk
DSIT does not normally offer full home working (i.e., working at home); but we do offer a variety of flexible working options (including occasionally working from home).
DSIT cannot offer Visa sponsorship to candidates through this campaign.
DSIT holds a Visa sponsorship licence, but this can only be used for certain roles, and this campaign does not qualify.
In order to process applications without delay, we will be sending a Criminal Record Check to the Disclosure and Barring Service on your behalf.
However, we recognise in exceptional circumstances some candidates will want to send their completed forms direct. If you will be doing this, please advise Government Recruitment Service of your intention by emailing Pre-EmploymentChecks.grs@cabinetoffice.gov.uk stating the job reference number in the subject heading.
Applicants who are successful at interview will be, as part of pre-employment screening, subject to a check on the Internal Fraud Database (IFD). This check will provide information about employees who have been dismissed for fraud or dishonesty offences. It also applies to employees who resign or otherwise leave before dismissal, where they would have been dismissed for fraud or dishonesty had their employment continued. Any applicant whose details are held on the IFD will be refused employment.
A candidate is not eligible to apply for a role within the Civil Service if the application is made within a 5-year period following a dismissal for carrying out internal fraud against government.
Vetting
For further information on National Security Vetting please visit the following page https://www.gov.uk/government/publications/demystifying-vetting
Feedback
Feedback will only be provided if you attend an interview or assessment.
Working for the Civil Service
We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles.
Apply and further information
Contact point for applicants
Job contact :
- Name : Karina Kumar
- Email : karina.kumar@dsit.gov.uk
Recruitment team
- Email : active.campaigns@dsit.gov.uk