Responsible Tech Careers #16: Building a Career in Trust & Safety
Plus 30+ new roles from the Responsible Tech Job Board, and additional resources!
Hope everyone is having a wonderful summer! All Tech Is Human is getting ready to head to London for a couple of gatherings later this month — please join us. This Responsible Tech Careers newsletter features a deep dive into curated resources on how to build a career around Trust & Safety, with a panel discussion on the Future of Trust & Safety, plus our founder David Ryan Polgar shares the Story of All Tech Is Human.
If you’d like to plug into the larger community at All Tech Is Human, be sure to join our Slack and tune into our regular livestream series tackling thorny topics in Responsible Tech.
Senior-level roles toward the bottom of the newsletter tend to get cut off in the email format, so don’t forget to click “View in Browser” to read about all of our resources and opportunities. Thanks!
🧠 In this newsletter, you’ll find:
Featured Read: The Story of All Tech Is Human
Featured Resource: Building a Career in Trust & Safety
Featured Panel: The Future of Trust & Safety
Featured Livestream: Vote With Your Phone: A Discussion with Bradley Tusk
30+ new Responsible Tech roles!
🤝 All of these roles and hundreds more can be found on All Tech Is Human’s Responsible Tech Job Board. In addition to our job board, you will find that numerous Responsible Tech jobs are shared and discussed every day through our large Slack community, which includes 12k people across 110 countries (Sign In | Apply).
👑 Also subscribe to All Tech Is Human’s flagship newsletter, focusing on issues and opportunities in the Responsible Tech ecosystem at-large and arriving in your inbox every other week (opposite weeks from this Careers newsletter).
Now, onto the newsletter! 👇
Featured Read: The Story of All Tech Is Human
The All Tech Is Human community is made up of tens of thousands of people from different backgrounds and perspectives: a beautiful mix of technologists, ethicists, academics, artists, designers, students, attorneys, and everyone in between, meshed together to learn, cross-pollinate, and collaborate.
That is entirely by design.
And that is one of the main reasons why All Tech Is Human exists and needs to exist: our typical approach to tackling thorny tech & society issues doesn’t work anymore.
We need an interdisciplinary approach that understands the broader public, rapidly distributes knowledge to key stakeholders and levers of change, and can move at the speed of tech.
We need to be more participatory, less top-down, more agile, and capable of distributing collective intelligence and insight across a multistakeholder network while moving quickly.
Featured Resource: Trust & Safety Careers
Build your career in Trust & Safety! Check out our new resource that includes:
An overview of the field
Roles in Trust & Safety
Key organizations to connect with
40 leaders in Trust & Safety to follow
Books and related resources to read
Trust & Safety refers to the strategies, policies, and practices organizations implement to protect users, employees, and stakeholders from harm while fostering a secure and trustworthy environment. This concept typically applies to online platforms, social media, e-commerce, and technology companies, focusing on minimizing risks such as fraud, abuse, harassment, and harmful content.
In a broader context, ‘Trust’ refers to the confidence that users or stakeholders have in a platform or organization to act in their best interests, uphold privacy, and maintain security. ‘Safety’ refers to the measures put in place to protect individuals from threats, both physical and digital, ensuring a safe user experience. Together, Trust & Safety efforts aim to create environments where users feel protected, respected, and confident in their interactions.
See our mini-report here:
Featured Panel: Exploring the Future of Trust & Safety
On Wednesday, June 4, 2025, All Tech Is Human hosted a gathering of Trust and Safety leaders in Washington, D.C. at Union Stage. Leaders across the tech industry, civil society, academia, and government convened for two panel discussions, one on the trust and safety policy landscape and another on the future of Trust and Safety, and a networking reception. This gathering, in collaboration with Resolver Trust and Safety, featured a panel moderated by Resolver’s Henry Adams, Director of Trust and Safety Intelligence; he was joined by Colleen Mearn (Anthropic), Scott Vlachos (CENSA), and Juliet Shen (ROOST).
Colleen opened the discussion by addressing one of the most persistent challenges in Trust and Safety work: how to meaningfully measure success in an environment where traditional metrics often fail to capture the full picture. The field has long relied on quantitative measures, but these metrics increasingly feel inadequate for assessing the true impact of Trust and Safety efforts.
The challenge lies in the fact that effective prevention never shows up in traditional metrics: harm that is successfully prevented never occurs, so there is nothing to count. These preventive successes are difficult to quantify but may represent the most meaningful outcomes of trust and safety work. Colleen suggested that organizations are beginning to explore more sophisticated success metrics that focus on ecosystem health rather than just enforcement actions. However, developing these metrics requires significant investment in research and data infrastructure that many organizations struggle to prioritize.
Scott provided insights into the evolving world of adversarial networks and how malicious actors are leveraging AI for harmful purposes. The sophistication of these networks has increased dramatically, with bad actors adopting many of the same AI tools that platforms use for detection and prevention.
Juliet explored the emerging landscape of open-source tooling in trust and safety, highlighting both the potential and significant challenges of this approach. The open-source movement in T&S represents a fundamental shift from proprietary, platform-specific solutions toward collaborative, transparent tool development. She noted that transparency in tool development allows for broader community review and improvement, potentially leading to more robust and effective solutions. Open-source approaches can also democratize access to sophisticated T&S capabilities, allowing smaller platforms and organizations to benefit from tools that would otherwise be prohibitively expensive to develop independently.
Featured Livestream: Vote With Your Phone
Join All Tech Is Human's executive director, Rebekah Tweed, in conversation with venture capitalist, political strategist, philanthropist and writer, Bradley Tusk, on his national campaign to bring mobile voting to all U.S. elections. The discussion will build on Bradley's July 15th TED talk as well as his 2024 book, Vote With Your Phone: Why Mobile Voting Is Our Final Shot at Saving Democracy.
Bradley Tusk is a venture capitalist, political strategist, philanthropist and writer. He is the co-founder and Managing Partner of Tusk Venture Partners, the world’s first venture capital fund that invests solely in early-stage startups in highly regulated industries, and the founder of political consulting firm Tusk Strategies. He also recently launched Tusk Ventures, an equity-for-service firm offering world-class regulatory, communications, and strategic expertise to help startups and growing businesses navigate regulation, expand into new markets, and reduce regulatory and political risk. Bradley’s family foundation is funding and leading the national campaign to bring mobile voting to all U.S. elections. Tusk Philanthropies also runs and funds anti-hunger campaigns that have led to the creation of anti-hunger policies and programs, including universal school breakfast programs, in 21 different states, helping to feed nearly 13 million people.
Responsible Tech Roles (listed from entry to senior roles)
🎉 FEATURED OPPORTUNITIES 🎉
Center for Democracy & Technology - AI Governance Fellow, AI Governance Lab
The Center for Democracy & Technology (CDT) is seeking a Fellow with research and/or applied technical expertise on issues relating to the governance of artificial intelligence. The Fellow will contribute to the work of CDT’s AI Governance Lab, focused on the responsible design, testing, monitoring and regulation of AI systems. The Fellow will contribute to original research, writing, and recommendations on questions that are core to current AI governance efforts in the public and private sectors. Example topics include: developing effective AI auditing ecosystems, analyzing the implications of increasingly personalized and agentic AI systems, evaluating and advancing transparency practices such as system cards and model disclosures, and evaluating emerging risks from foundation models and their downstream applications. We welcome applicants interested in both technical and sociotechnical approaches to AI risk assessment and mitigation. As part of CDT’s AI Governance Lab team, the Fellow will have the opportunity to shape cutting-edge efforts to establish norms and requirements for AI governance at a critical moment for the field. Recent developments—including rapid advances in foundation models and frontier AI systems, increasing AI deployment across sectors, and evolving regulatory frameworks—make this work particularly urgent.
Centre for the Governance of AI - Research Scholar
The Research Scholar role is a one-year visiting position. It is designed to support the career development of AI governance researchers and practitioners, while offering an opportunity to do impactful work. As a Research Scholar, you will have freedom to pursue many different projects. This could include conducting policy research, social science research, or technical research; advising policymakers; or starting new applied projects. For example, past scholars have used the position to help create the field of technical AI governance, support UK policymaking through a government secondment, and launch a new organisation to facilitate international AI governance dialogues. The topics we work on include, but are not limited to: frontier AI safety frameworks, threat modelling, AI regulation, international governance, technical governance, agent governance, the economics of AI, and risk assessment and forecasting. Over the course of the year, you will deepen your understanding of the field, connect with a network of experts, and build your skills and professional profile within an institutional home that offers both flexibility and support.
Oxford Internet Institute - Postdoctoral Researcher in AI, Privacy, and Policy
The Synthetic Society Lab at the Oxford Internet Institute invites applications from enthusiastic and motivated candidates for a postdoctoral position working on cutting-edge research at the intersection of Machine Learning and Privacy-Enhancing Technologies, with a focus on public interest technology research. We are looking for an individual who is interested in developing their own research questions in alignment with the research team's expertise and focus areas. We are particularly interested in investigating how modern privacy-enhancing technologies (e.g. based upon synthetic data or using formal differential privacy guarantees) impact research integrity and reproducibility. This is an exciting line of research with potential to not only improve how researchers access crucial sensitive data around the world, but also build tools for them to understand if the anonymised data they receive is “good enough” to conduct rigorous research. The successful candidate will join a welcoming and inclusive multidisciplinary research group led by Dr Luc Rocher that investigates the societal impact of AI and technology. Our team conducts independent research to guide the development of accountable, sustainable, and safe algorithms that serve the public interest.
Patrick J. McGovern Foundation - Full Stack Developer
The Patrick J. McGovern Foundation (PJMF) is seeking an experienced, multi-faceted, and self-driven Full Stack Developer (FSD) to join our Products and Services function. This role will be directly engaged with the development, deployment, and maintenance of data and AI products that will drive positive social impact around the world. The FSD is responsible for full-stack development of cloud solutions that are based around ML model predictions, using a rapid prototyping development approach. We are a small, high-performing team and in many ways function as a startup. As such, the ideal candidate will be someone who can develop an end-to-end solution around which our ML products will be deployed.
Internship
Brookings Institution - Fall 2025, Research & Events Internship, AI and Emerging Technology, Executive Office
The intern will assist in coordinating events and conducting research on public policy and emerging technology. Brookings scholars affiliated with the Initiative work on a wide range of tech policy issues and projects, from automation and the future of work to disinformation campaigns, the geopolitics of tech, and more. Interns will be responsible for a range of event management, research, and publication tasks in support of ongoing projects in the Initiative. Applicants should identify specific topics of interest in their cover letter. This internship is an opportunity for undergraduate students in their junior or senior year, or recent college graduates, working toward a degree in a relevant field (e.g., computer science, engineering, social science, public policy, etc.).
Fellowships
Centre for the Governance of AI - Research Fellow (2-year term)
Research Fellows are experienced researchers who play a central role in GovAI’s research agenda. They conduct independent, high-quality research with direct relevance to governance and policy and contribute to our vibrant intellectual community. As a Research Fellow, you will help define and address some of the most important questions in AI governance. Your work might take the form of academic publications, policy memos, blog posts, or strategic advising. You will also mentor early-career researchers, shape core research directions at GovAI, and collaborate with other researchers in our broader network. The topics we work on include, but are not limited to: frontier AI safety frameworks, threat modelling, AI regulation, international governance, technical governance, agent governance, the economics of AI, and risk assessment and forecasting.
Centre for the Governance of AI - Winter Fellow 2026
Seasonal Fellows join GovAI to conduct independent research on a topic of their choice, with mentorship from leading experts in the field. Each fellow is paired with a supervisor from the GovAI team or network. They spend the first two weeks of the fellowship exploring the AI governance landscape, before settling on a research proposal with input from their supervisors. Their research could result in a report, white paper, journal article, op-ed, or blog post targeted at an audience relevant to AI governance. The Research Managers and our broader team will offer additional support in deciding what project and output would be most valuable for the fellow to work toward. You can read about the topics our previous cohorts of Summer and Winter Fellows worked on here and here. Alongside their research and weekly meetings with their supervisors, fellows will also have the opportunity to widen their professional network and upskill on AI governance. GovAI will organise a series of Q&A sessions with AI governance experts; workshops and seminars aimed at building relevant skills and subject-matter knowledge; work-in-progress meetings that facilitate peer-to-peer feedback; as well as social events. Fellows will also be encouraged to discuss follow-on career opportunities with our team and network.
Heliopolis Consulting - NextGen Initiative Fellow
NextGen is a community of action, led by our volunteer co-chairs and organized through issue working groups and city-specific local networks. Competitive applicants will showcase both compelling expertise and experience, but also a desire to participate actively in our community. Cohort members benefit from four pillars of activities: knowledge building, political engagement, professional development and social connections. NextGen members also have access to tailored mentorship initiatives and an extraordinary network of members and advisors. In addition, NextGen members have the opportunity to engage across the full range of FP4A efforts including community events, advocacy, and political activities. Foreign Policy for America’s annual Leadership Summit every May also creates a unique opportunity to meet fellow NextGenners, FP4A Leadership Circle members, and foreign policy leaders from across the country.
Johns Hopkins University - Postdoctoral Fellow, Normativity Lab
Professor Gillian Hadfield is seeking a highly-qualified postdoctoral scholar to join her team at the Normativity Lab in Baltimore, MD, or Washington, DC, to investigate the foundations of human normativity and how these insights can inform the development of AI systems aligned with human values. The ideal candidate will have a track record in computational modelling that explores the dynamics of AI systems and the development of autonomous AI agents, experience with machine learning, reinforcement learning, and generative AI, and a background in interdisciplinary research. This is a full-time one-year position, with the possibility of extension. How can we ensure AI systems and agents align with human values and norms? Maintain and enhance the complex cooperative economic, political and social systems humans have built? What will it take to ensure that the AI transformation puts us on the path to improved human well-being and flourishing, and not catastrophe? Existing approaches to alignment, such as RLHF, constitutional AI and social choice methods, focus on eliciting human preferences, aggregating them across multiple, pluralistic values if necessary, and fine-tuning models to satisfy those preferences. In the Normativity Lab we believe these approaches are likely to prove too limited to address the alignment challenge and that the alignment questions will require studying the foundations of human normativity and human normative systems. We bridge computational modeling, specifically multi-agent reinforcement learning and generative agent simulations, and economic, political, and cultural evolutionary theory to explore the dynamics of normative systems and explore how to build AI systems and agents that have the normative infrastructure and normative competence to do as humans have learned to do: create stable rule-based groups that can adapt to change while ensuring group well-being.
Pulitzer Center - AI Accountability Fellowship
AI and other predictive technologies have been used to make policy decisions, understand disease, teach our children, and monitor our work for years. The hype around generative AI is now supercharging the spread of these systems while citizens have little insight into how they work, who profits from them, and who gets hurt. Through the AI Accountability Fellowships, the Pulitzer Center aims to support in-depth, high-impact reporting projects that document and explain the opportunities, harms, and regulatory and labor issues surrounding AI systems. The Fellowship program provides selected journalists with financial support, a community of peers, mentorship, and training to pursue in-depth reporting projects that interrogate how AI systems are funded, built, and deployed by corporations, governments, and other powerful actors.
Entry Level
Amazon - Trust & Safety Specialist I, AWS Trust and Safety
Amazon Web Services (AWS) Customer Service (CS) provides global support to a wide range of external customers as they build mission-critical applications on top of AWS services such as Amazon EC2 and Amazon S3. The AWS teams help our customers understand what Cloud Computing is all about, and whether it can be useful for their business needs. As part of our team you will collaborate on AWS Abuse Investigation & Prevention initiatives. As an Abuse Investigation & Prevention specialist, you will be faced with scenarios where AWS hosted resources negatively impact third parties on the Internet. You will be key in maintaining the reputation of AWS’s IP space by vetting potential abuse issues and contacting AWS customers in order to put a stop to these harmful acts. Abuse Investigation & Prevention acts as the first line of defense for AWS by analyzing trends and reporting findings to AWS service teams as needed. The team devotes their time and attention to helping identify impactful customer scenarios such as these. They classify incoming reports of abuse while exercising sound judgment in the decisions they make. Team members display strong technical skills while providing complex AWS account support to our customers and other AWS teams.
Deloitte - Consultant, Responsible Data & Analytics
As a Consultant within the Responsible Data & Analytics team, your primary goal is to empower diverse clients, including governmental organizations, financial institutions, and corporate entities, in optimizing their (data) processes and addressing intricate data-related challenges. Your strategic involvement could lead to significant improvements in data management, engineering, and analytics, ultimately fostering a culture of responsible data usage. We are looking for motivated candidates with an interest in supporting clients with data-related challenges, including those around data management and data engineering.
Knight Foundation - Program Associate, Information & Society team
Knight Foundation’s Information & Society team is looking for a curious, motivated, mission-driven individual to join our exciting new department as it launches. Reporting to the Director of Information & Society in Knight’s Miami headquarters, you will support the development and execution of the Information & Society program’s new strategy. This includes assisting with research and data analysis, contributing to the exploration of new ideas and emerging issues, supporting grants management, and coordinating day-to-day team operations. This is a cross-functional role that will involve collaborating closely with department staff and colleagues across the foundation. The Information & Society program advances understanding of the evolving dynamics in how Americans seek and share information, form beliefs, and connect with one another. We aim to inform policy and social practice in ways that strengthen core democratic values such as free expression, access to information, and accountability. This work is carried out through a combination of grantmaking, commissioned research, strategic partnerships, convenings, and mission-aligned investments.
Snap - Trust & Safety Specialist
We’re looking for a Trust & Safety Specialist to join Team Snapchat! We’re looking for someone who is able to keep a cool head under pressure, and is excited about improving workflows and proactively seeking ways to enhance the Snapchat experience. From developing support resources to tracking and escalating community issues, we’re looking for a utility player who is eager to help our community, keep Snapchatters safe, and isn’t afraid to roll up their sleeves and support wherever needed. Due to the nature of Trust & Safety work, you should be willing and able to work with sensitive issues and content that may be considered offensive or disturbing. Our team has carefully devised measures and support in place to ensure wellness for all our team members. Given that Trust and Safety work never stops, we are part of a global team that provides support to our users at all times. This role may be required to have a standing schedule that includes a weekend day.
Early Career
Amnesty International, Algorithmic Accountability Lab (AAL) - Consultant, Ban the Scan
Amnesty International is seeking a consultant to deliver two distinct research outputs addressing the deployment of facial recognition technologies (FRT) and their associated human rights harms. This work will directly support Amnesty's global Ban the Scan campaign and its mission to halt the invasive and discriminatory use of FRT by authorities. The consultant's research will highlight the lived experiences of communities impacted by FRT and inform the next phase of Amnesty's campaign through a case study focused on Buenos Aires, Argentina, as well as scoping future locations for further research and advocacy.
Funga - Operations Manager, Science
Funga is seeking help to scale core systems and processes that support our team, research and forestry partners, and scientific discovery functions in the southeastern US and beyond. In this role, you will own the design and execution of operational infrastructure across forestry, nursery operations, and research and development platforms. While our team focuses on quality execution of our R&D and commercial work in the field, you will be our Mission Controller, ensuring our operational focus is compliant and in line with strategic priorities, and that the team is positioned to deliver successfully. You’ll report to the Head of Applied Science and work closely with cross-functional leads to turn vision into execution. This is a builder role for someone who loves creating structure from scratch, thrives in fast-paced environments, and brings a systems-oriented approach to operational excellence. We are a people-centric team focused on hiring great people and giving them the resources to do their jobs exceptionally well. You must bring a passion for, and experience in, cultivating a dynamic, nimble, and highly engaged workforce.
Microsoft - Member of Technical Staff – Machine Learning, AI Safety
As a Member of Technical Staff – Machine Learning, AI Safety, you will work on the Technical Safety Squad to ensure that Copilot’s messages comply with content policies and the larger values of the organization. You may be responsible for developing new methods to evaluate LLMs, experimenting with data collection techniques for safety post-training, or training content classifiers to support the Copilot experience. We’re looking for someone with experience in artificial intelligence who is also a strong communicator and a great teammate. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.
OpenAI - Trust & Safety Analyst
We are looking for experienced Trust & Safety Analysts to collaborate closely with internal teams to ensure safety and compliance on OpenAI platforms. You will be a stakeholder in the design and implementation of policies, processes, and automated systems to take action against bad actors and minimize abuse at scale, handle high risk & high visibility customer cases with care, and build feedback loops to improve our trust & safety policies and detection systems. Ideally, you have worked in a high-paced startup environment, have handled a breadth of integrity related issues of varying sensitivity and complexity, and are comfortable with building processes and systems from zero to one.
Penn Center for Media, Technology, and Democracy - Communications and Research Manager
The new Penn Center for Media, Technology, and Democracy works to advance the scientific understanding of the information ecosystem and leverage that research to strengthen the foundations of democracy. We seek a professional committed to advancing democracy and passionate about technology and media studies to serve as a Communications and Research Manager. They will develop an audience for the Center’s research work by writing a newsletter, maintaining an aggregation webpage, creating social media content, writing research briefs and contributing to the Center’s annual report. The Communications and Research Manager will further assist in the development of a communications strategy for the Center, pitch press stories, and support communication with key Center stakeholders. The Communications and Research Manager will work closely with the center’s Executive Director, and candidates are encouraged to use Penn tuition benefits to deepen their knowledge of technology, media, public policy, law, or a related field.
Tripadvisor - Lead Content Moderation Specialist, Trust & Safety, Places
Millions of travelers are empowered by Tripadvisor to plan and book their perfect trips. The Trust & Safety team plays a critical role in protecting this experience. Ensuring the integrity and quality of Tripadvisor’s location data is job #1 for the Trust & Safety - Places Team. We manage the backbone of Tripadvisor’s travel content: our ever-growing database of accommodations, attractions, restaurants, and geographic locations. We are seeking a Lead Content Moderation Specialist to join our team. In this role, you will leverage your analytical skills and policy knowledge to develop actionable insights that optimize the quality of our Places dataset and the scalability of our processes.
UK AI Security Institute - Research Engineer, Societal Resilience
The AI Security Institute research unit is looking for exceptionally motivated and talented Research Engineers to work in the Societal Resilience team. Societal Resilience is a multidisciplinary team that studies how advanced AI models can impact people and society, studies the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develops mitigations to address these risks. Core research topics include the use of AI for assisting with criminal activities or malicious social engineering, critical overreliance on insufficiently robust systems, the undermining of trust in information, and risks to psychological wellbeing. We are interested in both immediate and medium-term risks. In this role, you’ll join a strongly collaborative technical research team to help design and develop technical research projects into societal risks. These can include analysis of usage data, designing sociotechnical audits and evaluations of AI-driven products and services, and gathering and curating datasets that help us monitor the exposure, severity, and vulnerability of different risks. Successful candidates will work with our research scientists to design and run studies that answer important questions about the effects AI will have on society. For example, how are AI systems being adopted and used in different sectors of the economy? How can AI agents collude with each other in real-world simulations? How might AI systems be used to bypass safeguards when used to commit fraud? Research engineers will support a range of research projects into societal resilience by providing specialised technical expertise, building data pipelines, and creating demos and simulations.
Mid-Career
CommunityShare - Full Stack Web Developer
As a fast-growing but still small nonprofit venture, CommunityShare is re-imagining the relationship between communities and schools. Through our online platform and offline programs, we ignite civic engagement and real-world learning experiences by connecting the wisdom, skills, and lived experiences of community partners with educators and students. We are looking for a team player who is a creative problem-solver with exceptional back-end and front-end web development skills. This individual will work closely with CommunityShare’s Product Manager and product team. This is an exciting time to join our team as we expand our work nationally. We are looking for an individual who is mission-driven and excited to apply their skills to reimagining education and creating a more equitable world.
CommunityShare - UX/UI Designer
As a fast-growing but still small nonprofit venture, CommunityShare is re-imagining the relationship between our communities, schools, and out-of-school learning spaces. Through our digital platform and programs, we ignite multigenerational, real-world learning experiences by connecting the wisdom, skills, and lived experiences of community partners with educators and youth. We are looking for a team player who is creative and entrepreneurial and who understands the science and art of growing and nurturing relationships and community. As a key member of our team, you will engage new and existing member networks in CommunityShare’s national network. Our members include school districts, school networks, coalitions, education service agencies, nonprofits, and many others who are committed to thriving regional learning ecosystems. As UX/UI Designer, you will help us draft solutions for thorny challenges in user experience and user interface design, build prototypes, wireframes, and workflows to test our solutions, and integrate it all into a fluid design that delivers an outstanding user experience. We’re focused on taking our digital platform to the next stage of its evolution through rapid cycles of design, testing, and iteration. You will lead the vision and implementation of the interactive design elements of the platform. This is your opportunity to join a small, talented, and passionate product team and take ownership of the user experience through the look and feel of the platform.
Deutsche Bank - AI Controls & Governance Advisor
Indra is the central program driving the introduction and safe scaling of AI at Deutsche Bank. The focus is to identify AI potential across various banking operations, driving funded use cases into production to create value and confidence and scale across the bank, creating selected shared services with embedded safety to enable low-cost scale, developing an AI Workbench for developers for safe AI development at pace, and introducing AI controls whilst aiming to maintain time to market. You will be responsible for establishing and enforcing AI governance frameworks, ensuring AI risks are proactively managed and regulatory requirements are met. You will drive cross-functional collaboration, oversee AI control implementation, and enhance the organisation's AI risk culture.
Morgan Stanley - AI Security Developer, VP/Director
We’re seeking someone to join our Security Development team as an AI Security Developer in Cyber. In the Technology division, we leverage innovation to build the connections and capabilities that power our Firm, enabling our clients and colleagues to redefine markets and shape the future of our communities. This is a Software Engineering III position at Director level, which is part of the job family responsible for developing and maintaining software solutions that support business needs. The Security Development team is - amongst other things - responsible for developing and engineering the Firm’s core security controls. The technology and solution stack spans all Firm employees as well as external clients of the Institutional Securities and Wealth Management businesses. It consists of home-grown software, 3rd party software, open source products, appliances, and auxiliary services and solutions. The role also involves producing technical documentation and user assistance material, requiring excellent oral and written communication.
Ofcom - Senior Technical Advisor, Online Safety Technology
The Online Safety Technology team conducts research to build knowledge and understanding in subject areas fundamental to online safety. This includes, for example, AI and machine learning, digital identity, privacy enhancing technologies, decentralisation, user experience, gaming, network infrastructure, and digital forensics. The team provides technical expertise to policy, supervision and enforcement colleagues in the wider Online Safety Group, and others in Ofcom, during policy development processes and industry engagement activities. The successful candidate will play a leading role within the Online Safety Technology team to help design and deliver a programme of work that will develop and share fundamental understanding of the technologies that underpin online services to help meet Ofcom's Online Safety objectives. We are looking for somebody who will act as an authoritative ‘go-to’ subject matter expert on the changing landscape of specific areas of technology-related change, diffusing relevant insight from applied research and experimentation and raising stakeholders’ understanding of the technology trends that are having an impact on online trust and safety.
Ofcom - Senior AI/ML Technology Advisor, Online Safety Technology
The Online Safety Technology team conducts research to build knowledge and understanding in subject areas fundamental to online safety. This includes, for example, AI and machine learning, digital identity, privacy enhancing technologies, decentralisation, user experience, gaming, network infrastructure, and digital forensics. The team provides technical expertise to policy, supervision and enforcement colleagues in the wider Online Safety Group, and others in Ofcom, during policy development processes and industry engagement activities. We are looking for a technically strong and strategically minded Senior AI/ML Technology Advisor to provide expert guidance on the technical aspects of artificial intelligence and machine learning as they relate to online safety. The successful candidate will play a leading role within the Online Safety Technology team to help design and deliver a programme of work that will develop and share fundamental understanding of the Artificial Intelligence and Machine Learning technologies that underpin online services to help meet Ofcom's Online Safety objectives. This position is ideal for candidates with deep technical expertise and a passion for applying AI responsibly in complex socio-technical environments.
Roblox - Senior Product Policy Manager - Trust by Design
Roblox is looking for a highly motivated and experienced Senior Product Policy Manager to join our Safety organization. In this role, you will be a key leader in supporting our risk review and management function, specifically within our Trust by Design risk management program. You will be responsible for overseeing the identification, evaluation, and mitigation/management of safety-related risks associated with the development of new product features and iteration of existing products. You will play a pivotal role in ensuring that safety is a core consideration throughout the entire product lifecycle, fostering a culture of proactive risk management.
Senior/Executive Level
JPMorgan Chase - Vice President - Generative Artificial Intelligence Policy and Governance
The Generative AI team in Consumer Home Lending is enabling the practical application of generative AI to transform how Chase serves customers and empowers employees. We operate across three pillars: Solutions (building production-ready AI applications), Governance (ensuring responsible AI deployment), and Enablement (spreading AI capabilities throughout the organization). As a Vice President in Generative Artificial Intelligence Policy & Governance, you will oversee the end-to-end governance process for Generative AI use cases and contribute to the development of new governance policies and procedures. Adept navigation through ambiguity, adaptation to change, and leveraging of advanced analytical reasoning and influencing skills are essential for driving mutually beneficial outcomes. Your exceptional communication abilities will foster productive relationships with stakeholders, cross-functional teams, and clients. Through your technical fluency and thought leadership, you will play a pivotal role in achieving business goals, shaping the firm's technology landscape, and moving forward work that has firmwide impact.
Microsoft - Senior Director, Chief Economist, AI for Good Lab
Microsoft's AI for Good Lab is seeking a highly skilled and experienced Senior Director, Chief Economist – AI for Good Lab, to lead our efforts in leveraging artificial intelligence to address some of the world's most pressing challenges. This role will involve working closely with a multidisciplinary team of researchers, engineers, and policy experts to develop and implement AI-driven solutions that promote social good. A key responsibility will be overseeing the Microsoft AI Economy Institute, a program designed to advance the understanding of the economic impacts of AI and inform action on building a robust, inclusive AI economy.
🗒️You can find these roles and more, updated daily, on our Responsible Tech Job Board; they are also shared in our Slack community.
💪Let’s co-create a better tech future
Our projects & links | Our mission | Our network | Email us
Subscribe to All Tech Is Human’s main newsletter for Responsible Tech updates!
🦜Looking to chat with others in Responsible Tech after reading our newsletter?
Join the conversations happening on All Tech Is Human’s Slack (sign in | apply).
We’re building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This powerful network allows us to tackle the world's thorniest tech & society issues while moving at the speed of tech.
Reach out to All Tech Is Human’s Executive Director, Rebekah Tweed, at Rebekah@AllTechIsHuman.org if you are hiring and would like to work with All Tech Is Human to find candidates who are passionate about responsible technology, or if you’d like to inquire about featuring a role in this newsletter!