Responsible Tech Careers #11: Responsible Tech Certificates: A Worthwhile Expense?
Plus resources, advice from ATIH Mentor Carlin Scrudato (Senior Advisor, Bryson Gillette), and 40+ new roles from the Responsible Tech Job Board!
Hello, and thank you for reading All Tech Is Human’s Responsible Tech Careers newsletter! We hope this is a helpful resource as you build your career in Responsible Tech!
This edition features a reflection on Responsible Tech Certificates from All Tech Is Human’s Siegel Research Fellow, Deb Donig, plus takeaways from a recent ATIHx event, "Launch Your Career in Responsible Tech: Empowering the Next Generation of Social Entrepreneurs," an upcoming livestream entitled “Meeting the Moment: The Road Ahead for the Responsible Tech Movement,” and insights from one of our mentors, Carlin Scrudato.
If you’d like to plug into the larger community at All Tech Is Human, be sure to join our Slack community, subscribe to our flagship newsletter, and tune into our regular livestream series tackling thorny topics in Responsible Tech!
🧠 In this newsletter, you’ll find:
Featured Resource: Responsible Tech Certificates: A Worthwhile Expense?
Featured Event: "Launch Your Career in Responsible Tech: Empowering the Next Generation of Social Entrepreneurs”
Featured Livestream: “Meeting the Moment: The Road Ahead for the Responsible Tech Movement”
Advice from an ATIH Mentor: Carlin Scrudato, Senior Advisor, Bryson Gillette
40+ new Responsible Tech roles!
🤝 All of these roles and hundreds more can be found on All Tech Is Human’s Responsible Tech Job Board. In addition to our job board, you will find that numerous Responsible Tech jobs are being shared and discussed every day through our large Slack community, which includes 12k people across 107 countries (Sign In | Apply).
👑 Also subscribe to All Tech Is Human’s flagship newsletter, focusing on issues and opportunities in the Responsible Tech ecosystem at-large and arriving in your inbox every other week (opposite weeks from this Careers Newsletter).
Now, onto the newsletter! 👇
Featured Resource: Responsible Tech Certificates: A Worthwhile Expense?
Check out this curated list of Responsible Tech Certificates, compiled by All Tech Is Human’s Siegel Family Endowment Research Fellow Deb Donig!
Deb recently explored the value of going through a certificate program for growing in the Responsible Tech and Public Interest Technology space. Certificate programs mentioned include IEEE, IAPP, ForHumanity, TechCongress, Digital.gov, Aspen Tech Policy Hub, Markkula Center for Applied Ethics, and more.
Read Deb’s reflection below!
Featured Event: "Launch Your Career in Responsible Tech: Empowering the Next Generation of Social Entrepreneurs"
The New York Institute of Technology recently hosted Launch Your Career in Responsible Tech, a free ATIHx event designed to provide NYC-area undergraduates with the skills and connections to land a job, this time with a particular focus on social entrepreneurship.
Held at New York Tech’s Columbus Circle campus, the event provided students with a variety of sessions all centered around giving them the tools they would need to start a career in social enterprise. Sessions included lightning talks with best practices and advice from leaders in the field, interactive skill-building workshops, and exclusive networking opportunities.
At its core, the event aimed to diversify the pipeline into social impact careers and inspire the next generation of social entrepreneurs. As such, the organizing committee was thrilled to welcome a diverse audience of over 50 students across 21 schools and over 20 degree programs, all eager to explore how they could make a difference through their work.
💡You can host your own ATIHx event!
Featured Livestream:
Meeting the Moment: The Road Ahead for the Responsible Tech Movement
May 8th 1pm EDT | Livestream
Where are we in the Responsible Tech movement? What's working, what needs work, and what does the road ahead look like? Major obstacles have impacted hundreds of civil society organizations doing crucial work around reducing online harms, understanding the impacts of emerging technology, and aiming to align our tech future with the public interest.
Join us for "Meeting the Moment: The Road Ahead for the Responsible Tech Movement," a discussion exploring what it means to drive responsible technology causes in uncertain times. This webinar will bring together cross-sector leaders, builders, and advocates in responsible tech to reflect on the road ahead and what's urgently needed to cement our efforts' long-term impact.
All Tech Is Human will be holding this online conversation on Thursday, May 8th at 1pm EDT. We hope you can join us!
🗣 Advice from an All Tech Is Human Mentor: Carlin Scrudato, Senior Advisor, Bryson Gillette
From Carlin Scrudato -
“As a Senior Advisor at Bryson Gillette, my role focuses on navigating complex technological and societal challenges by providing strategic insights, facilitating innovation, and guiding stakeholders through evolving issues. I work closely with cross-functional teams to identify and address the ethical, regulatory, and social implications of emerging technologies. This includes advising on data privacy, AI ethics, national security, and the societal impacts of technological advancements.
By leveraging a deep understanding of both technology and its broader societal context, I help bridge the gap between technical teams and non-technical stakeholders, ensuring that solutions are both innovative and responsible. My work involves assessing risks, crafting policies, and offering actionable recommendations that help clients make informed decisions while balancing progress with social responsibility.
Ultimately, my goal is to help clients navigate the intersection of technology, regulation, and public concern in a way that drives sustainable, meaningful impact.
My career journey has been shaped by a passion for using technology to bridge gaps in sports, politics, and society. A key turning point in my career was my work in building CrowdTangle, where we tackled the intersection of tech and social impact. We worked across areas like misinformation, election integrity, and global policy—impacting everything from elections to the COVID-19 pandemic and vaccine hesitancy. This work involved large-scale research and collaboration to understand and address the challenges posed by social media’s influence.
Throughout my career, I have always prioritized investigation and research, but the real turning point came when I took a calculated risk. I moved to DC with no connections and left government to build products; stepping out of my comfort zone allowed me to seize opportunities I hadn't imagined. My advice to anyone pursuing a similar path is simple: embrace risk, keep learning, and always look for ways to bridge technology with real-world issues. Innovation thrives when you're willing to step into the unknown and challenge conventional boundaries.
Children’s online safety is a growing concern for me, especially with rising issues around privacy, harmful content, and online predators. Efforts like the Kids Online Safety Act (KOSA) and California’s design code are essential steps in addressing these risks. At the same time, lawsuits against chatbots and the growing use of AI-driven platforms for children underscore the need for clear regulations that protect digital rights. By bridging policy, tech, and funding, we can create a safer digital landscape for children while ensuring responsible AI use.”
Responsible Tech Roles (listed from entry to senior roles)
🎉 FEATURED ROLES 🎉
Kapor Center - Legislative Tech Policy Manager
The Kapor Center works to dismantle racial and gender disparities and create a more equitable technology ecosystem. With Kapor Foundation, Kapor Capital, and SMASH, the Kapor Center is a recognized leader in the movement to enhance diversity and inclusion in the technology and entrepreneurship ecosystem. Its work focuses on expanding access to tech and STEM education programs, conducting research on access and opportunity in computing, investing in community organizations and gap-closing tech startups, and increasing capital access for diverse entrepreneurs. The Kapor Center is hiring a Legislative Tech Policy Manager to enact legislative policies and serve as an expert primarily on artificial intelligence, machine learning, algorithmic bias, data privacy, and on other key tech sector issues as needed. They will ensure that the Kapor Center has a consistent public voice on relevant issues and is contributing to advancing equitable tech policy by adding to the public record and co-leading efforts with partners and officials who seek to enact impactful policies and would benefit from added subject matter expertise to do so confidently. We currently work at all levels of government, including efforts with the White House, Congress, and state government leaders.
Partnership on AI - Head of AI, Labor & Economy
To advance the organization’s mission and vision, PAI is hiring a highly-experienced, strategic, and dynamic Head of AI, Labor, & Economy (AILE) to be based remotely in the US or Canada. This role will be filled by an experienced strategic thinker, team leader, and project manager with expertise at the intersection of artificial intelligence and the future of work(ers) to lead our AI, Labor, & Economy program. We believe AI will have transformative effects on workers and the economy; the mission of the AILE team is to ensure those effects are beneficial, not harmful, and that workers have a role in steering the development of AI. Reporting to the Chief Programs and Insights Officer (CPIO), the Head of AILE is a critical part of the Programs and Research (P&R) Team and is responsible for leading a program that covers workers throughout the AI supply chain (especially data enrichment workers), as well as AI’s impacts on the broader labor market and economy. Our approach is multi-stakeholder in nature; as an organization and as a society, we need experience and expertise across labor, the AI industry, and research to identify opportunities and approaches to advance the well-being of workers and create broadly shared prosperity. The Head of AILE will engage externally with our community of Partners and stakeholders, and internally with the Programs & Research team, the Executive Team, and other colleagues across the PAI staff team.
Fellowships
Institute for AI Policy and Strategy - 2025 AI Policy Fellowship
The IAPS AI Policy Fellowship is a fully-funded, three-month program for professionals from varied backgrounds seeking to strengthen practical policy skills for managing the challenges and opportunities of advanced AI. Fellows work with designated experts on projects that will influence national and global AI policies. Key policy fields include national security implications of AI, government acquisition and procurement of AI, CBRN capabilities of AI models, export controls, evaluations, geopolitics of AI, hardware-enabled mechanisms, jurisdictionally-focused policy questions (e.g., U.S., U.K., and E.U.), among several others. Fellows’ projects can have a variety of formats depending on each expert and fellowship coordinator, and will be set in partnership with each fellow. In general, fellows are expected to finalize advanced drafts of policy writing, lead presentations or events for key stakeholders, or produce smaller outputs responding to short-term policy opportunities. These concrete outputs help strengthen fellows’ AI policy track record while achieving impactful outcomes.
Metagov - Public AI Fellow (Japan)
We are seeking a dynamic and mission-driven Public AI Fellow based in Japan to support research and relationship-building efforts around Japan's national AI strategy and ecosystem. This fellow will play a key role in laying the groundwork for Airbus for AI—an ambitious initiative to build a frontier AI lab across middle-power countries. This is an opportunity to shape Japan’s role in a high-level international initiative, and gain a front-row seat to global AI diplomacy and infrastructure strategy. You’ll work with a powerful network of international technologists, policymakers, and public interest organizations.
Safe AI Forum (SAIF) - Research Fellow / Senior Research Fellow / Special Projects Fellow
SAIF is seeking applications for a 6 or 12-month fellowship developing and executing on projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.
We are looking to expand upon our existing routes to impact by bolstering our team’s research capacity. We are excited for applicants who have AI safety and governance research expertise to join the team as Fellows. Your role will be to design and execute projects we think have great potential for impact within the AI safety and governance space. Exceptional candidates may also propose their own project.
Umeå University - AI Policy Lab Fellowship
The AI Policy Lab at Umeå University is an interdisciplinary research environment hosted by the Department of Computing Science and led by Wallenberg Scholar Prof. Dr. Virginia Dignum. We aim to facilitate cross-faculty and cross-disciplinary collaboration within Umeå University and beyond, connecting academic research with policymakers, practitioners, and other societal stakeholders. By providing a vibrant shared space for short-term, part-time research engagements, we support a dynamic community working on the ethical, societal, and policy dimensions of AI. The AI Policy Lab Fellowships give researchers the opportunity to dedicate focused time to urgent research questions within the emerging field of AI Policy. Projects are expected to be interdisciplinary and have strong societal relevance. All fellows are expected to spend time at the Lab during the fellowship period and will be featured on the AI Policy Lab website. Proposals can address any aspect of AI policy, with preference given to those that demonstrate a strong multidisciplinary approach.
Entry Level
ByteDance - Research Scientist, Responsible AI
Our team at ByteDance Research is conducting research on Responsible AI, particularly from the viewpoint of AI agents and AI foundation models. We are looking for outstanding researchers to join our team and conduct cutting-edge research in the area. Responsibilities: 1. Developing advanced technologies in AI, particularly AI foundation models and AI agents; 2. Conducting research on the technologies of AI to make them more reliable, safe, and trustworthy; 3. Contributing to the development of products in the company with regard to AI safety and AI ethics.
Harvard Kennedy School, Growth Lab at the Center for International Development - Front-end Developer (Data Visualization)
The Growth Lab’s Viz Hub is an online portfolio of data analysis and visualization platforms built in-house by the Digital Development & Design team. Our flagship platform, The Atlas of Economic Complexity, delivers the ability to discover new economic growth opportunities for every country. Our award-winning platform, Metroverse, allows users to explore urban growth opportunities for over 1000 cities worldwide. In addition to these tools, the team has built over 35 additional software products, prototypes and digital storytelling features. Our digital tools serve over 30,000 monthly users, including policymakers, journalists from leading media outlets, experts at multilateral organizations, and members of the Harvard community. As a front-end developer at the Growth Lab, you’ll work in a collaborative environment that encourages creativity and professional growth, with access to Harvard’s world-class facilities, events, and classes.
Stanford University, Cyber Policy Center - Research Scholar, Youth in Tech
The Cyber Policy Center at the Freeman Spogli Institute (FSI) is Stanford's premier hub for the interdisciplinary study of issues at the nexus of technology, governance, and public policy. The Cyber Policy Center’s research, teaching and policy engagement aims to bring new insights and solutions to national governments, international institutions, and industry. The Adolescent and Well-Being initiative focuses on identifying and evaluating practical strategies that families can use to ensure adolescents not only stay safe online but also thrive. By bringing together academics, practitioners, and policymakers, the initiative aims to enrich the dialogue on fostering positive online experiences for young people. Reporting to the Social Media Lab Associate Director, you will lead, support and facilitate research related to children, youth, social media and well-being, and interventions and strategies to support youth and families. You will support large-scale evaluation studies on the impact of phone-related policies on youth and families.
Early Career
Amii - Applied Research Scientist
Reporting to the Director of AI Trust and Safety, the Applied Research Scientist will lead innovative research on AI Trust and Safety, with a focus on advanced AI systems and systemic societal risks. They will contribute to shaping an agile research agenda that aligns with Amii’s strengths, addresses key gaps in the trust and safety landscape, and aligns with priority research areas identified by the Canadian AI Safety Institute (CAISI).
The Applied Research Scientist will leverage research outputs—such as academic papers, whitepapers, conference presentations, and practical tools—to establish Amii as a thought leader in AI safety. Their work will advance understanding of AI risks, drive the development of more trustworthy AI systems, and support responsible AI adoption. The position focuses on achieving excellence in three main accountabilities:
Trust and Safety AI Research Projects: Define and validate all projects in the Trust and Safety research portfolio.
Leadership & Strategy: Provide scientific thought leadership across industry and ML domains.
Project Execution: Ensure successful implementation and delivery on our most ambitious Trust and Safety projects.
AXA Group Security - Gen AI Security Product Lead
AXA Group Security is seeking a skilled GenAI expert to join our Governance and Security Data Hub Team. As a GenAI expert, you will play a pivotal role in harnessing the power of GenAI technologies to enhance security operations and drive strategic initiatives across the organization. Your responsibilities will include leveraging GenAI solutions, particularly with practical experience using GPT and similar API-based technologies, to address security challenges, drive efficiency, and unlock the value of data for a more data-driven security organization. For example, you will leverage GenAI to verify the evidence provided by AXA entities to confirm they have implemented security controls, augmenting one of Group Security's critical activities: security assurance.
CIFAR - Program Manager, AI & Safety
The Canadian Institute for Advanced Research (CIFAR) is a globally influential research organization proudly based in Canada. We mobilize the world’s most brilliant people across disciplines and at all career stages to advance transformative knowledge and solve humanity’s biggest problems, together. The Program Manager, AI & Safety, plays a critical role in the development, management, and administration of the Canadian AI Safety Institute (CAISI) research program at CIFAR. In this position, you will contribute to advancing a national research agenda focused on AI safety, helping to shape a rapidly evolving field that sits at the intersection of science, society, and policy.
Google - Global Threat Analyst
Security is at the core of Google's design and development process: it is built into the DNA of our products. The same is true of our offices. You're an expert who shares our seriousness about security and our commitment to confidentiality. You'll collaborate with our Facilities Management team to create innovative security strategies, investigate breaches and create risk assessment plans for the future. You believe that providing effective security doesn't come at the expense of customer service - you will be our bodyguard (and our long lost pal). As a Global Threat Analyst, you will keep Googlers safe and secure; from managing disruptive events to anticipating, deterring, and detecting threats, these capabilities are the pillars of Google’s Global Security and Resilience Services (GSRS) team. You will develop a culture where safety, security, and resiliency are integrated into every facet of Google, including the creative process. You will help us continually identify, evaluate, and monitor enterprise risks that could affect business activities and provide business leaders with the information they need to make critical decisions. You will collaborate with cross-functional teams to create innovative strategies and develop programs that drive sustainable effectiveness.
HCA Healthcare - Technical Manager Responsible AI
The role of the Technical Manager Responsible AI is to analyze Machine Learning (ML), Artificial Intelligence, Natural Language Processing, and Deep Learning (DL) solutions and processes to support the care of HCA Healthcare patients. The Technical Manager Responsible AI will lead and support HCA Healthcare processes that will increase the ethical & equitable care and treatment of our patients through emerging technologies.
Lila Sciences - AI Safety Lead
As AI Safety Lead, you will define and execute Lila Sciences’ AI safety strategy, ensuring that our AI systems—used in scientific research and driving physical experimentation—are developed and deployed responsibly. You will establish risk assessment frameworks, oversee model benchmarking and evaluation, and integrate safety principles into AI governance. This role requires a deep technical understanding of AI risk, model evaluation, and system safety, as well as the leadership and communication skills to interface with internal teams, executive leadership, and the broader AI safety community.
Lucid - Offensive AI Security Engineer – Red Team
We are seeking an Offensive AI Security Engineer to join our AI Red Team within the Security Engineering team. This role focuses on adversarial machine learning (ML), AI-driven offensive security, and red teaming AI systems to uncover vulnerabilities in AI-powered automotive security models and vehicle platforms. As part of Lucid’s Offensive AI Security team, you will attack, manipulate, and exploit AI/ML models to identify real-world threats and weaknesses in AI-driven security solutions. You will develop AI-enhanced security automation tools, perform LLM-based penetration testing, and integrate AI/ML attack techniques into offensive security operations.
Meta - Program Coordinator III
Meta is seeking a product-focused strategic thinker to join its Product Policy & Strategy team as a Product Policy Coordinator and multimodal red-teamer to provide policy support for Generative AI products. In this role, you will work to ensure that Meta’s AI models and products are unbiased and can understand and respond to different viewpoints on contentious issues. You’ll accomplish this goal by directly engaging in adversarial testing of our Large Language Models, including multimodal models, and by tracking the development of emerging risks and questions around political processes, breaking news, and other significant global events. You will also analyze competitors’ model behaviors and policies, and advise on forward-looking, policy-aligned product development.
OpenAI - Trust & Safety Analyst
We are looking for experienced Trust & Safety Analysts to collaborate closely with internal teams to ensure safety and compliance on OpenAI platforms. You will be a stakeholder in the design and implementation of policies, processes, and automated systems to take action against bad actors and minimize abuse at scale, handle high risk & high visibility customer cases with care, and build feedback loops to improve our trust & safety policies and detection systems. Ideally, you have worked in a high-paced startup environment, have handled a breadth of integrity related issues of varying sensitivity and complexity, and are comfortable with building processes and systems from zero to one.
Resaro - Responsible AI Scientist (Computer Vision)
Resaro was founded on the belief that AI will change the world in ways we cannot even imagine, but every new technology needs safeguards. As AI adoption accelerates, the challenge over the next decade and beyond is to harness AI safely, with the appropriate levels of governance and assurance to build trust in these advanced algorithmic systems. Most enterprises do not have the capability to do this, which is why we founded Resaro. We are an AI assurance company that provides services to validate AI systems for accuracy, robustness, explainability, fairness, privacy, and security. We are looking for a Computer Vision data scientist with experience in evaluating and testing deep-learning-based computer vision models in production settings to work with a category-defining AI assurance venture that will help companies test and audit their AI systems. You will work with the team to evaluate and stress-test AI models to make sure they are fit for purpose and safe to be deployed. We value strong technical ability and real-world experience, and there will be room to solve challenging problems and adopt cutting-edge technology into business applications.
Resaro - Responsible AI Data Scientist, LLM
Resaro was founded on the belief that AI will change the world in ways we cannot even imagine, but every new technology needs safeguards. As AI adoption increases, the challenge in the next decade is to harness AI safely, with the appropriate levels of governance and assurance to build trust in these advanced algorithmic systems. Most enterprises do not have the capability to do this, and we are a new AI assurance venture that provides solutions to validate AI systems for accuracy, robustness, explainability, fairness, privacy, and security. We are looking for a data scientist to be based in Munich or Singapore, with experience in deep-learning-based language models, to work with a category-defining AI assurance venture that will help companies test and audit their AI systems. You will help evaluate and stress-test AI models to make sure they are fit for purpose and safe to be deployed. We value strong technical ability and real-world experience, and there will be room to solve challenging problems and adopt cutting-edge technology into business applications.
TikTok - Youth Safety Program Manager, Age Assurance, Trust & Safety
Youth Safety is a core pillar of the T&S team, bringing together the expertise of Product Managers and Program Managers into one group. Our mission is to protect young people on TikTok - and throughout the wider ecosystem - by building protections and education that help keep young people safe on our platform and participating in industry-wide strategies and development. This role will be responsible for supporting the development of best-in-class protections for younger users, including optimizing moderation practices, implementing external solutions, working in partnership with our legal, public relations, and various cross-functional teams, and onboarding new features and product areas across the company into our age-based protections.
TikTok - Youth Safety, Technical Program Manager, Trust & Safety
The Youth Safety team sits at the core of TikTok’s Trust & Safety organization, uniting product and program managers with a shared mission: to protect young people on TikTok and contribute to industry-wide child safety efforts. We focus on building protections, driving education, and shaping systems that keep youth safe across our platform and the wider digital ecosystem. We are looking for a Technical Program Manager to join TikTok's Youth Safety - Child Sexual Exploitation & Abuse (CSEA) product team. In this role, you’ll be responsible for high-impact execution and technical program leadership, working across product, engineering, legal, policy, and operations to strengthen TikTok’s safety infrastructure. You will lead key projects to close operational gaps, manage tooling and feature delivery, and support the review of new products and features to ensure they launch with the appropriate safeguards in place. You’ll also support the expansion of specialized moderation workflows and develop strategies to address emerging threats — including the detection and mitigation of AI-generated child sexual abuse material (AIG-CSAM).
Mid-Career
AI Forensics - Communications Lead
AI Forensics is a non-profit organization defending digital rights through algorithmic investigations. Our team has been pioneering algorithmic investigation techniques for more than 5 years, holding platforms accountable to their users and to the law. We are looking for a Communications Lead to maximise the impact of our algorithmic investigations. You will develop and lead the execution of the AI Forensics Communications strategy, which will enable us to take our work in defending digital rights to the next level. Press relations with technology journalists will be a key part of the job. You will accelerate the success we have had in generating coverage in the most reputable international media to drive visibility and engagement with our research. As the Communications Lead, you will build the AI Forensics community. This will involve increasing our social media presence and newsletter readership, and developing our signature voice by creating a regular flow of high-quality content (social media, web, email, etc.).
AI Forensics - Software Developer
We are looking for a Software Developer to use creative ways to collect data from Big Tech platforms to expose digital violations of human rights. You will develop our Digital Evidence Infrastructure, which enables researchers to automate experiments on platforms like TikTok, YouTube, or Microsoft Copilot, emulating the behavior of real users. You will develop the Infrastructure to test and monitor algorithms on large platforms, such as commercial Conversational Large Language Models. Your job will involve engineering data pipelines, developing scraping modules, and orchestrating emulated mobile devices and browsers.
Amazon - Senior Security Engineer, AI Security
We are seeking a Senior Security Engineer to join our GenAI Security team, which provides security guidance and builds security tooling and paved path solutions to ensure Generative AI (GenAI) based experiences developed by Amazon uphold our high security standards. As a Senior Security Engineer, you will be responsible for defining security standards, providing security guidance, and developing security guardrails to secure GenAI products and services at Amazon scale. You will collaborate with applied scientists, software engineers, security engineers, as well as internal partners and external researchers, to develop innovative technologies to solve some of our hardest security problems, and build paved path solutions that support builder teams across Amazon throughout their software development journey, enabling Amazon businesses to accelerate the use of GenAI to enhance our user experiences and delight our customers.
Billigence - AI Governance Lead
Billigence is a boutique data consultancy with global outreach & clientele, transforming the way organizations work with data. We leverage proven, cutting-edge technologies to design, tailor, and implement advanced Business Intelligence solutions with high added value across a wide range of applications from process digitization through to Cloud Data Warehousing, Visualisation, Data Science and Engineering, or Data Governance. Headquartered in Sydney, Australia with offices around the world, we help clients navigate difficult business conditions, remove inefficiencies, and enable scalable adoption of analytics culture. About the Role: We are seeking an AI Governance Lead to join our team on an initial 3-month contract, operating inside IR35. In this position, you will take the lead in ensuring that artificial intelligence (AI) systems are developed, deployed, and managed responsibly, ethically, and in compliance with regulatory standards. You will provide strategic direction and leadership in crafting AI governance frameworks, policies, and best practices, with the aim of mitigating risks while maximizing the positive impact of AI technologies for our client.
Blueprint Biosecurity - Researcher/Senior Researcher
Blueprint Biosecurity is building an interdisciplinary technical team that translates academic and industry research into actionable plans to prevent and suppress future pandemics.
You will work directly with the Research Director to produce publications such as our Blueprint for far-UVC, provide technical advice to our program teams, and inform our pandemic prevention strategies through careful and rigorous analysis of scientific, industrial, and policy landscapes. We anticipate hiring multiple positions to support our programs on personal protective equipment (PPE), far-UVC, and other engineering controls to suppress pathogen transmission in the built environment.
We are particularly excited to build our internal expertise in aerosol science, environmental engineering, microbiology, molecular biology and modeling techniques relevant to our work. However, we are open to candidates from a wide variety of technical backgrounds who can contribute to a highly collaborative and interdisciplinary research environment.
Doordash - Group Product Manager: Fraud, Trust & Safety
Our Integrity, Payments Fraud and Trust & Safety Product team is dedicated to ensuring a safe and trusted experience for all customers across our platform, including Consumers, Merchants, and Dashers. We tackle complex challenges, from preventing payment fraud to ensuring authenticity and preventing account takeover (ATO), to building regulatory compliance capabilities, all while balancing risk mitigation and providing a seamless user experience. As the Group Product Manager for Fraud, you will lead a team of ~5 PMs spanning fraud initiatives, including Payments Fraud, Credit & Refunds abuse, and fraudulent misuse of the DoorDash ecosystem. Your work will help reduce financial risk, enhance Know Your Customer (KYC) efforts, and ensure a healthy ecosystem of Consumers, Merchants and Dashers across diverse and growing global markets. This role is an opportunity to drive innovation in fraud prevention, machine learning-driven risk assessment, and customer trust. Your success will be measured by key metrics such as payment fraud minimization, chargeback and dispute rates, KYC compliance, and critical measures of health/integrity of the ecosystem. In this role, you will tackle ambiguous, high-impact problems, working closely with engineering, analytics, and business leaders to develop scalable solutions.
Freedom of the Press Foundation - Senior UX Engineer
Freedom of the Press Foundation (FPF), a nonprofit organization dedicated to protecting, defending, and empowering public-interest journalism, is hiring a senior UX engineer to implement improvements to our websites Freedom.press, the U.S. Press Freedom Tracker, and SecureDrop.org. Reporting to the engineering manager (web and Dangerzone), this position will join a small team of web developers working on high-impact projects like the integration of “Action Center” functionality on the FPF website, deeper integration between our CMS and email newsletters, and improved data visualizations for the U.S. Press Freedom Tracker.
Hinge - Lead Policy Manager
Hinge is seeking a Lead Policy Manager to lead the implementation and management of high-level platform policies that safeguard our community, maintain regulatory alignment, and promote trust. As a leader within the Trust & Safety team, you’ll serve as a cross-functional connector and strategic thought partner—elevating how Hinge operationalizes platform integrity and safety at scale. This is a player-coach role, meaning you’ll contribute directly to strategic policy work with the future potential to manage and mentor a small, high-impact team. This role is fully hands-on and highly collaborative, driving long-range policy initiatives while mentoring peers and influencing broader enforcement strategy. You’ll lead complex, cross-company efforts that ensure Hinge's local policies and guidance are principled, data-driven, and future-ready. The ideal candidate brings a blend of domain expertise, systems-level thinking, and deep empathy for users, especially when navigating nuanced integrity risks.
Hinge - Senior Machine Learning Engineer, Trust & Safety
Join Hinge as a Senior Machine Learning Engineer, where you'll lead the application of AI and machine learning to effectively mitigate the impact of bad actors, remove policy violating content, and ensure user safety on the platform. Working closely with an expanding Trust & Safety team, you will collaborate with product managers, data scientists, engineers, and analysts to develop impactful AI/ML solutions. In this high-impact position within a small, dynamic team, you'll have the opportunity to play a foundational role in shaping how Hinge utilizes AI and machine learning in various contexts. Your expertise will be instrumental in creating a safer and more meaningful user experience on their journey to find intimate connection.
Meta - AI Policy Manager, LATAM
Meta is seeking a highly specialized and experienced Latam AI Policy Manager to lead our AI-related public policy initiatives within the Latin America region. This role requires demonstrated technical knowledge in Artificial Intelligence and its intersection with public policy, with the capacity to coordinate with key stakeholders such as governments, industry peers, developers/technical community, international institutions and think tanks. The position will serve as the expert voice on AI within the Latam Public Policy team and externally, providing technical expertise for product counseling and regulatory matters.
Sentinel One - Staff AI Security Engineer
At SentinelOne, we’re redefining cybersecurity by pushing the limits of what’s possible—leveraging AI-powered, data-driven innovation to stay ahead of tomorrow’s threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We’re looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you’re excited about solving complex challenges in bold, innovative ways, we’d love to connect with you. What are we looking for? We are seeking a Staff AI Security Engineer with deep expertise in large language model (LLM) security, including red teaming, anti-jailbreaking techniques, and prompt injection mitigation. In this role, you will lead efforts to identify, assess, and mitigate vulnerabilities in AI systems, ensuring their robustness against adversarial attacks.
Unicredit - Senior AI Governance and Ethics Specialist
We are seeking a Senior AI Governance and Ethics Specialist to join our AI Governance team. In this role, you will play a key part in defining and implementing the Responsible AI Framework for the entire organization. The Senior AI Governance and Ethics Specialist will collaborate cross-functionally with various teams to guide the definition, evaluation, and implementation of the Responsible AI Framework in full compliance with the EU AI Act. You will focus on contributing to the development of AI risk taxonomies, defining appropriate metrics for the monitoring and evaluation of AI systems, and ensuring adherence to regulatory requirements. Additionally, you will support the integration of AI governance tools and assist in ensuring that AI systems are developed and deployed responsibly across the organization. The role requires a blend of technical expertise in AI, a strong understanding of governance frameworks, and the ability to navigate complex regulatory environments.
Wells Fargo - Cybersecurity AI Risk & Governance SME
Wells Fargo is seeking a Cybersecurity AI Risk & Governance SME. In this critical role, you will define and document a dynamic Cybersecurity for AI governance framework, including governance routines such as adherence monitoring, controls, metrics, reporting, key performance indicators, and escalation. You will also establish a process for sustaining this framework, including review and ingestion of regulatory changes to AI; maintenance of mapping to AI instances across the firm to ensure visibility of the full scope of AI activity; and defined plans on how to pivot the program as regulatory and firm environmental changes necessitate. In defining this framework, you will partner across Cybersecurity to establish AI requirements to secure AI across the firm; drive the incorporation of AI requirements into the appropriate documentation, including policy, baselines, and developer design requirements; and work with engineers, cloud SMEs, data scientists, and others to translate requirements into patterns and/or security assessment controls.
Senior/Executive Level
Ardent Health - Director, Data & AI Governance
Ardent Health is a leading provider of healthcare in growing mid-sized urban communities across the U.S. With a focus on people and investments in innovative services and technologies, Ardent is passionate about making healthcare better and easier to access. Through its subsidiaries, Ardent delivers care through a system of 30 acute care hospitals, 24,000+ team members and more than 280 sites of care with over 1,800 affiliated providers across six states. The Director, Data & AI Governance will lead data governance efforts, shape AI strategy, and ensure compliance with standards and procedures within the organization.
Dow Jones - VP, AI Governance
At 130 years old, we have an amazing legacy but we're young at heart. We know the secret to our growth ambition is through data and innovative AI solutions. We're looking for an energetic VP, AI Governance to play a critical role in designing, implementing and maintaining a governing structure for leveraging Artificial Intelligence (AI) technologies in a responsible manner that aligns with Dow Jones and News Corp's values. You will oversee a multidisciplinary AI Steering Committee representing stakeholders from multiple departments and develop training programs to educate staff on best practices in AI and AI policies. You will report to the EVP, Data & AI.
Exygy Inc. - Public Sector Growth Lead
As the Public Sector Growth Lead for CiviForm, you will play a pivotal role in expanding our footprint within the public sector. Your mission is to craft and execute a growth strategy that drives adoption of CiviForm, a civic tech product designed to simplify access to critical government services. This role requires expertise in government procurement, strategic sales, and digital marketing to build a robust pipeline of paying customers: you will source, nurture, and win new government customers who will implement CiviForm. You will collaborate closely with cross-functional teams, including Exygy's Growth team, CiviForm’s delivery team, and Exygy’s senior leadership. This is a high-impact, entrepreneurial role where you will define and prove the market for CiviForm by securing its first paying customers and laying the foundation for scalable growth. You will leverage your proven expertise and track record in civic tech sales and marketing to drive comprehensive strategic initiatives. Because of the scale of CiviForm’s potential market, we're seeking a self-starter who can confidently craft and execute strategies that focus our work and create results. While Exygy is a highly collaborative organization, your role as the growth leader for CiviForm will mean that an ability to operate autonomously and own your own craft will be essential for success.
Lumeris - Vice President, Data and AI Governance
Are you ready to join a highly innovative organization that is transforming healthcare at scale? We are seeking an experienced Vice President of Data and AI Governance to join our team and work on advancing Tom, our Agentic AI platform for delivering Primary Care as a Service. Tom is at the forefront of healthcare innovation, and this position will be central to Tom’s vision-setting and leadership. This role will report directly to our Chief Legal Officer, sits at the intersection of technology and compliance, and will play a critical role in shaping and implementing our data governance strategy. The ideal candidate will have a deep understanding of data governance, compliance, and risk management, particularly within the healthcare sector. As the VP of Data and AI Governance, you will ensure that our data management and AI practices adhere to legal and regulatory requirements while continuing to drive data quality, security, and innovation across the organization. You will be responsible for developing and implementing an enterprise-wide data governance strategy, including AI-generated data. Your expertise will guide the company's approach to data management, influencing everything from data quality assurance to protection and utilization. This position demands an innovative thinker with technical acumen in data and AI, someone willing to roll up their sleeves to collaborate with key stakeholders, and an understanding of AI model nuances and generative capabilities to drive data governance initiatives and enhance the organization's ability to leverage data for products and for internal strategic decision-making. You will have the opportunity to shape how data governance is perceived and practiced within our company, ensuring that we remain at the forefront of industry best practices while maintaining compliance and fostering a data-driven culture.
OneTrust - Head of Privacy & AI Governance
OneTrust’s mission is to enable organizations to use data and AI responsibly. Our platform simplifies the collection of data with consent and preferences, automates the governance of data with integrated risk management across privacy, security, IT/tech, third-party, and AI risk, and activates the responsible use of data by applying and enforcing data policies across the entire data estate and lifecycle. OneTrust supports seamless collaboration between data teams and risk teams to drive rapid and trusted innovation. Recognized as a market pioneer and leader, OneTrust boasts over 300 patents and serves more than 14,000 customers globally, ranging from industry giants to small businesses. The Challenge: The Head of Privacy and AI Governance will be a pivotal leader within our team, shaping the future of privacy and AI governance at OneTrust. Reporting to the Chief Ethics & Compliance Officer, you will not only ensure compliance with global data privacy and AI regulations but also set the strategic vision for protecting stakeholder data. This role goes beyond a traditional privacy officer - because we build privacy and AI governance products, you’ll act as both a thought leader and our customer zero, stress-testing our solutions in real-world scenarios. If you're passionate about driving innovation while safeguarding trust, this is your opportunity to lead at the cutting edge.
Standard Chartered - Head, Model Evaluation
This role reports to the Executive Director, AI Safety & AI Talent, and sits within the CDO office of the bank. The Head of the Model Validation Team is responsible for leading the validation of machine learning (ML), data science, and Generative AI (GenAI) models, ensuring alignment with recent technical advancements, regulatory requirements, and established Model Risk Management (MRM) frameworks. The role involves overseeing the development and implementation of robust validation methodologies for AI and non-AI models, ensuring compliance, accuracy, and reliability across the organization’s model portfolio.
State of Illinois, Department of Innovation and Technology - Deputy Chief Technology Officer, Enterprise Hosting
The Department of Innovation & Technology (DoIT) is seeking to hire qualified candidates with the opportunity to work in a dynamic, creative thinking, problem solving environment. This position serves as the Deputy Chief Technology Officer for Enterprise Hosting under the administrative direction of the Chief Technology Officer, serving as the principal policy formulating administrator for the management processes for Enterprise Infrastructure Operations and Network Operations Information Technology (IT) services statewide. In this role you will lead and manage the development and implementation of new technology products and services by researching and recommending implementation of advanced information technology and driving adoption, while fostering a culture of innovation, collaboration, and operational excellence among the infrastructure and network teams. In addition, you will lead and develop a high-performing team and serve as an agency spokesperson.
State of Illinois, Department of Innovation and Technology - Deputy Chief Technology Officer, Network Services
The Department of Innovation & Technology (DoIT) is seeking to hire qualified candidates with the opportunity to work in a dynamic, creative thinking, problem solving environment. This position serves as the Deputy Chief Technology Officer of Network Services, in developing and implementing policy for the management and direction of the state's network and broadband services. In this role you will serve as a policy-making official, formulating, implementing, and interpreting policy for the total management process of Network Operations statewide and will manage statewide networking and broadband operations including the Illinois Century Network. In addition, you will serve as an agency spokesperson and will lead and develop a team of highly skilled IT professionals to achieve agency goals and objectives.
🗒️ You can find these roles, and more, updated daily on our Responsible Tech Job Board; they are also shared in our Slack community.
💪Let’s co-create a better tech future
Our projects & links | Our mission | Our network | Email us
Subscribe to All Tech Is Human’s main newsletter for Responsible Tech updates!
🦜Looking to chat with others in Responsible Tech after reading our newsletter?
Join the conversations happening on All Tech Is Human’s Slack (sign in | apply).
We’re building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This powerful network allows us to tackle the world's thorniest tech & society issues while moving at the speed of tech.
Reach out to All Tech Is Human’s Executive Director, Rebekah Tweed, at Rebekah@AllTechIsHuman.org if you are hiring and would like to work with All Tech Is Human to find candidates who are passionate about responsible technology or if you’d like to inquire about featuring a role in this newsletter!