Responsible Tech Careers #17: Responsible AI Course coming this fall
Plus 30+ new roles from the Responsible Tech Job Board, and additional resources!
Greetings, all!
This Responsible Tech Careers newsletter features a new Responsible AI course coming in October from All Tech Is Human, a featured resource on the Responsible Tech Job Skills That Matter Most, and a featured livestream on the technical and policy considerations for Agentic AI with our friends at the Center for Democracy and Technology’s AI Governance Lab.
If you’d like to plug into the larger community at All Tech Is Human, be sure to join our Slack and tune into our regular livestream series tackling thorny topics in Responsible Tech.
Senior-level roles toward the bottom of the newsletter tend to get cut off by the email format, so don’t forget to click “View in Browser” to read about all of our resources and opportunities. Thanks!
🧠 In this newsletter, you’ll find:
Featured Opportunity: Responsible AI Course from All Tech Is Human
Featured Resource: Responsible Tech Job Skills That Matter Most
Featured Livestream: Agentic AI: Technical and Policy Considerations
30+ new Responsible Tech roles!
🤝 All of these roles and hundreds more can be found on All Tech Is Human’s Responsible Tech Job Board. In addition to our job board, you will find that numerous Responsible Tech jobs are shared and discussed every day through our large Slack community, which includes 12k people across 110 countries (Sign In | Apply).
👑 Also subscribe to All Tech Is Human’s flagship newsletter, which focuses on issues and opportunities in the Responsible Tech ecosystem at large and arrives in your inbox every other week (opposite weeks from this Careers newsletter).
🌟 Are you interested in underwriting our Responsible Tech Job Board and Responsible Tech Careers Newsletter for 2026? Let us know!
Now, onto the newsletter! 👇
Featured Opportunity: Responsible AI Course from All Tech Is Human
🚨NEW Responsible AI Governance course coming soon from All Tech Is Human!
This October, All Tech Is Human is launching a dynamic new Responsible AI course designed for aspiring RAI practitioners and AI Governance professionals, as well as those preparing to build AI Governance programs within an organization. Rooted in practical insights and real-world applications, the course offers a foundational understanding of Responsible AI: its principles, history, and evolution as a field, along with an exploration of current roles, industry best practices, and the evolving governance landscape. Led by ATIH Executive Director Rebekah Tweed and award-winning AI Ethicist Professor Renée Cummings, with contributions from ATIH affiliates Heidi Hysell and Savannah Thais and ATIH’s RAI working group, this course will equip participants with the foundational knowledge to begin effectively operationalizing Responsible AI and AI governance programs within their organizations.
Are you interested in being involved? Providing feedback on course material? Partnering with us to extend its reach? Reach out via the interest form below!
Featured Resource: Responsible Tech Job Skills That Matter Most
What are the skills that matter most for securing a role in Responsible Tech?
We just put together a helpful resource on The Responsible Tech Job Skills That Matter Most in 2025!
This resource is based on the research and recent blog post from our Siegel Family Endowment Research Fellow, Deb Donig.
The resource covers:
Technical Implementation Roles
Governance and Compliance Roles
Strategic and Advisory Roles
The skills that matter now
The next phase of professionalization
See our mini-report here:
Featured Livestream: Agentic AI: Technical and Policy Considerations
All Tech Is Human's This Month in Responsible AI webinar for June, "Agentic AI: Technical and Policy Considerations," featured a conversation with the Center for Democracy & Technology AI Governance Lab's Director, Miranda Bogen, and Fellows Chinmay Deshpande and Ruchika Joshi, moderated by ATIH Executive Director Rebekah Tweed, on the emerging technical and policy considerations for Agentic AI systems. The discussion centered on the guests’ recently published policy brief, "AI Agents In Focus: Technical and Policy Considerations," and unpacked emerging issues such as agent security and misuse, user privacy, user control, technical and legal infrastructure for agent governance, the impact of human-like agents, and responsibility for agent harms.
Responsible Tech Roles (listed from entry to senior roles)
🎉 FEATURED OPPORTUNITIES 🎉
Partnership on AI - Director of Development
Partnership on AI (PAI) is the leading forum addressing the most important and difficult decisions on the future of artificial intelligence (AI). As a non-profit, PAI invites diverse voices into the process of technical governance, design, and deployment of AI technologies. Our vision is a future where AI empowers humanity by contributing to a more just, equitable, and prosperous world. Our mission is to bring diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society. PAI is seeking a Director of Development to lead the strategy and execution of fundraising efforts that support the organization’s mission, growth, and sustainability. This includes overseeing foundation and philanthropic funding, industry partner contributions, sponsorships and in-kind development. The Director is responsible for managing a full development lifecycle—bringing in new funding partners, cultivating long-term relationships, stewarding funding agreements, coordinating internal systems for invoicing and reporting, and executing high-profile campaigns such as PAI’s 10-Year Anniversary Working Capital Campaign.
Fellowships
The Alan Turing Institute - Open Source AI Fellowship Call 2025
The Alan Turing Institute is pleased to welcome applications for the 2025 Open Source AI Fellowship programme. We are seeking exceptional researchers and technologists with interest and expertise in Specialised AI Model Development and Secure Systems Engineering to spend 12 months as a Fellow with The Alan Turing Institute, embedded within a team at the UK's Department for Science, Innovation and Technology (DSIT) to develop high-profile open-source use cases for the UK Government. Fellows will work on high-impact problems, which could include:
Secure AI assistants for processing sensitive documents entirely on government systems, crucial for work like national security translation, where data must never leave secure environments.
Planning and regulatory tools trained on UK law and policy to support faster, fairer decision-making for citizens.
AI systems that can support emergency responders or NHS staff during power outages or network failures by working fully offline when it matters most.
Entry Level
Anthropic - Research Engineer, Frontier Red Team (RSP Evaluations)
We are the team behind the Responsible Scaling Policy (RSP) Evaluations. The easiest way to understand what we do is to read the RSP sections of the Claude 3.7 system card and Claude 4 system card. We are the engineers who build automated systems to test whether frontier AI models are safe to release. Our evaluations determine if models have crossed critical capability thresholds in domains like autonomous replication, cybersecurity, and biological and chemical research. This is a research engineering role where you'll build sophisticated evaluation infrastructure while thinking creatively about how to probe model capabilities. You'll work with our existing distributed systems to create automated pipelines that can run thousands of evaluation variants. You'll also need the curiosity and adversarial mindset to design tests that reveal what models can really do when pushed to their limits.
Anthropic - Research Scientist, Frontier Red Team (Autonomy)
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to develop and implement a gold standard of advanced autonomy evals to determine the AI Safety Level (ASL) of our models. This will have major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP). We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you’ve thought particularly hard about how models might be agentic, and about the associated risks, and you’ve built an eval or experiment around it, we’d like to meet you.
Anthropic - Research Scientist, Frontier Red Team (CBRN, Biosecurity)
This team will intensively red-team models to test the most significant risks they might be capable of in areas such as biosecurity, cybersecurity, or autonomy. We believe that clear demonstrations can significantly advance technical research and mitigations, as well as identify effective policy interventions to promote and incentivize safety. As part of this team, you will lead research to baseline current models and test whether future frontier capabilities could cause significant harm. Day-to-day, you may decide you need to finetune a model to see whether it becomes superhuman in an eval you’ve designed; whiteboard a threat model with a national security expert; test a new training procedure or how a model uses a tool; or brief government, labs, and other research teams. Our goal is to see the frontier before we get there. Our CBRN workstream is hiring for a Research Scientist, with an emphasis on biosecurity risks (as outlined in our Responsible Scaling Policy). By nature, this team will be an unusual combination of backgrounds.
Institute for Security and Technology (IST) - Nuclear Policy Associate
The Institute for Security and Technology (IST), a 501(c)(3) critical action think tank that unites technology and policy leaders to create solutions to emerging security challenges, is actively seeking a highly motivated practitioner to join our team as IST’s Nuclear Policy Associate. We are looking for candidates who want to contain the risks posed by the intersection of nuclear weapons and emerging technologies, care deeply about existential nuclear risks, want to dive into cutting-edge technical solutions to advance nuclear risk reduction, and are broadly interested in the nexus of machine learning and national security. This role will support the Innovation and Catastrophic Risk portfolio at IST, which includes our CATALINK initiative and work at the intersection of AI and nuclear weapons. The role requires project management and execution skills, along with analysis, research, and writing on nuclear policy, risk reduction, and AI issues. This position will also be involved in continuing efforts across our nuclear policy projects, which include the CATALINK initiative and an ongoing and expanding project focused on AI and Nuclear Command, Control, and Communications. The ideal candidate will be responsible for carrying out research and analysis, report writing, and various programmatic work, including planning and organizing roundtables and Track II workshops on nuclear policy issues with a focus on crisis communications and the CATALINK initiative. The position will report to the lead of the nuclear policy vertical.
The Leverhulme Centre for the Future of Intelligence, Cambridge Institute for Technology and Humanity - Research Assistant
The Leverhulme Centre for the Future of Intelligence (CFI) within the Cambridge Institute for Technology and Humanity invites applications for a part-time Research Assistant to work on the Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures programme. There is flexibility to work between 0.2 and 0.5 FTE. This is a fixed-term post, with funding available for 12 months in the first instance. The position is part of the Desirable Digitalisation programme, a collaboration between the Universities of Cambridge and Bonn funded by Stiftung Mercator. The programme investigates how to place questions of social justice and environmental sustainability at the heart of technology development, and how to translate theoretical insights about historical injustices and discrimination into new technology design methods and education tools. This is an exciting opportunity to support the development, research, and valorisation of impact-oriented projects at the intersection of AI politics, ethics, and governance. The Research Assistant will support research and dissemination activities within the Desirable Digitalisation programme, working with Drs Jonnie Penn, Aisha Sobey, and Apolline Taillandier on key research areas of the project, including work on the sustainability of AI, the politics of lookism in AI, and the political history of AI and computing.
Early Career
Amazon - Senior Applied Scientist III, AI Security
We are seeking a Senior Applied Scientist III to lead the architectural vision and development of an autonomous AI security platform that will redefine how security assessments and enforcement are conducted at scale. This mission-critical platform will leverage AI-driven autonomous agents to conduct proactive, intelligent security operations across the company. The platform will integrate deeply with internal security, engineering, and cloud-native tools to provide self-serve, automated security insights, verifications, and enforcement mechanisms. This role requires strong technical expertise in AI/ML, LLMs, and distributed cloud infrastructure, as well as thought leadership to drive alignment across multiple teams and business units. This is an opportunity to shape the future of AI-driven security automation at an enterprise scale, defining standards, influencing company-wide security posture, and leading technical innovation at the highest level.
Barclays - AI Security Engineer
Join us at Barclays as an AI Security Engineer and help shape the future of financial security by designing and deploying solutions that safeguard our systems and sensitive data. You'll collaborate with a skilled team of data scientists and engineers to lead the GenAI portfolio within SISO, transforming how we secure innovation at scale.
Blackbaud - AI Governance Operations Specialist
AI governance is only as strong as its execution. We’re looking for someone who can make governance real—not just write the playbook but run the plays. In this crucial role, you’ll have the opportunity to shape and drive AI governance and successes at Blackbaud! If you’re passionate about responsible AI and love building systems that scale, we’d love to hear from you. This role is specifically designed for an individual who thrives on practical execution and hands-on operational work within AI governance platforms, particularly OneTrust. This is not a strategic consultant role; it requires direct, hands-on execution and configuration to operationalize AI governance workflows, controls, and processes.
JPMorgan Chase - AI Research Senior Associate, Trustworthy AI
Join J.P. Morgan AI Research, where you'll explore and advance cutting-edge AI research to develop impactful solutions for our clients and businesses. As an AI Research Senior Associate in J.P. Morgan AI Research, you will work on novel techniques, tools, and frameworks to model and solve complex large-scale problems, collaborating with experts in AI and machine learning to contribute to high-impact business applications and the broader AI community. Your role involves formulating problems, generating hypotheses, developing algorithms and models, conducting experiments, and communicating research significance. Your output will result in publications, high-impact business applications, open-source software, and patents.
RAND - Technical AI Policy Research Scientist
RAND’s Technology and Security Policy Center (TASP) is seeking mission-driven Technical AI Policy Research Scientists to contribute to and lead research, decision support, and/or technical work at the intersection of artificial intelligence and national or global security. Recent examples of our public research products include investigating AI's Power Requirements Under Exponential Growth, explaining and analyzing regulatory action such as the AI Diffusion Framework, and commentaries on US-China tech competition. Beyond our public work, we regularly advise key stakeholders on technical matters. Our team brings a wealth of experience in both technical domains and policy development, bridging the gap between the technical understanding of AI and governance considerations.
Roblox - Regulatory Compliance Product Manager, Transparency Reporting
Roblox is seeking a Regulatory Compliance Product Manager with deep expertise in transparency reporting and regulatory data operations to drive the next phase of our global compliance program. In this role, you will partner with legal, data science, engineering, and product teams to define and deliver systems that translate complex regulatory obligations into clear, actionable, and scalable metrics reporting products and processes. You will lead the roadmap for transparency and data disclosure reporting, ensuring compliance with global digital platform regulations such as the Digital Services Act (DSA), UK Online Safety Act (OSA), and others. The ideal candidate is a systems thinker with strong analytical skills, regulatory fluency, and a proven track record of shipping complex, cross-functional data products. You thrive in ambiguity, have excellent stakeholder management skills, and can operate at both strategic and tactical levels.
Snap - Trust & Safety Specialist
The Trust & Safety (T&S) Team plays an important role in protecting our Snapchatters from content that violates our terms of service or Community Guidelines — while constantly embodying our values of Kind, Smart, and Creative. The T&S team helps create a safe platform experience so that all of our users around the world are empowered to enjoy their experience on our platform, every day. We take great pride in our work as digital first responders and hope you would consider joining us. We’re looking for a Sr. Specialist to join Team Snap’s growing Proactive Trust & Safety Operations team! You will be expected to proactively investigate violations of our Community Guidelines and Terms of Service, and to flex in to support the larger T&S team as needed. We are looking for someone who is able to keep a cool head under pressure, and is excited about cross-functional work with our Public Policy, Product and Engineering teams, building and improving processes, and taking a hands-on, innovative approach to helping keep Snapchatters safe. You will become a subject matter expert in a variety of harm areas and own projects from start to finish. You will drive your own investigations, and also support larger investigations driven by other members of the team.
Yelp - Lead Product Manager, Trust and Safety
Are you passionate about building AI/ML/LLM-powered systems to stop fraud at scale? Do you thrive at the intersection of technology, data, and regulation? Are you ready to lead product efforts that directly impact the trustworthiness of a platform used by millions? Yelp’s Trust & Safety team is seeking a Lead Product Manager to drive core efforts for our Review Recommendation System and global compliance initiatives. You’ll play a critical role in shaping Yelp’s defenses against fake reviews and deceptive behavior, while also serving as the product lead for our evolving global compliance efforts. You’ll work closely with engineering, data science, legal, and operations to ensure Yelp remains a trusted platform for millions of users. This is a high-impact role at the heart of Yelp’s content trust strategy. You’ll lead the roadmap for AI/ML models—including LLMs—that power Yelp’s review recommendation system, combat spam, and safeguard account authenticity. You’ll also partner closely with Legal to ensure our systems align with regulatory requirements across the U.S., EU, and other jurisdictions.
Mid-Career
ADP - Director, Privacy By Design
As Director, Privacy by Design, you will lead the strategic integration of privacy protections into our product development lifecycle, ensuring that privacy remains a foundational element of our technology and business processes. You will report to the Global Chief Privacy Officer. You will be responsible for developing and maintaining a comprehensive PbD framework, including establishing and enforcing PbD standards. Additionally, you will work cross-functionally to ensure alignment on privacy requirements for products, and collaborate with the Global Security Organization (GSO) to define and implement technical safeguards for personal data.
DaVita - Director, AI Governance
The Director of AI Governance is a critical leadership role responsible for establishing and overseeing the company's AI governance framework. This individual will lead the development, implementation, and maintenance of policies, procedures, and controls to ensure the ethical, responsible, and compliant use of Artificial Intelligence (AI) technologies across the organization. The Director will collaborate with cross-functional teams, including AI/ML engineers, data scientists, legal, compliance, risk, and business stakeholders, to promote transparency, accountability, and fairness in AI systems.
Google DeepMind - Generative AI Portfolio Lead, Google DeepMind Impact Accelerator
The Google DeepMind Impact Accelerator (GDI) has a unique role in Google DeepMind (GDM), to develop solutions and resources built on GDM's technologies and expertise that extend the benefits to humanity. We are a path to real world impact, beyond ABC products and services or making our research public. Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. Working in partnership with GDI’s leadership team, you will develop and be responsible for building our AI product portfolio focused on the social impact of Google DeepMind's generative AI technology (such as Gemini). This will involve market research, internal organisational research, use case analysis, stakeholder engagement and product definition.
Google - Manager, Responsible AI, Trust and Safety
Google's brand is only as strong as our users' trust, and their steadfast belief that our guiding principles are what's best for them. Our Trust and Safety team has the critical responsibility of protecting Google's users by ensuring online safety and fighting web abuse and fraud across Google products like Search, Maps, Google Ads, and AdSense. On this team, you're a big-picture thinker and strategic leader. You understand the user's point of view and are passionate about using your combined technical, sales, and customer service acumen to protect our users. You work globally and cross-functionally with Google developers and Product Managers to navigate challenging online safety situations and handle abuse and fraud cases at Google speed (read: fast!). Help us prove that quality on the Internet trumps all. As a Manager in Trust and Safety, you lead a team responsible for protecting Google and its users by fighting abuse and fraud for at least one Google product. You ensure trust and reputation not only for this product, but also for Google as a broader brand and company. You are a strategic leader who possesses the ability to work globally and cross-functionally with several internal stakeholders through effective relationship building, influence, and communication. You demonstrate analytical thinking through data-driven decisions. You have the technical know-how, charisma, and ability to work with your team to make a big impact.
Google - Manager, Content Adversarial Red Team, Trust and Safety
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you are a big-picture thinker and a team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety. As a Manager for the Content Adversarial Red Team (CART), you will lead the US team responsible for conducting adversarial red teaming to uncover ‘unknown unknowns’ and new/unexpected loss patterns on Google’s premier generative AI products.
MacArthur Foundation - Program Development Officer
The New Work Exploration team is responsible for researching, developing, and proposing new areas of grantmaking and ways of working for the Foundation. With direction from the Board and Foundation leadership, we source ideas internally and externally; explore them through research, convening, consultation, and grantmaking; develop initial strategic frameworks; and propose forms these ideas could take, whether short-term initiatives or longer-term programs. We collaborate with teams across the Foundation in pursuit of these goals. The Program Development Officer is a strategic and visionary team player, responsible for identifying, exploring, and shaping emerging ideas into coherent, impactful program areas and initiatives aligned with the Foundation’s mission and values. In addition to grantmaking, this role involves deep research, stakeholder engagement, field scanning, and creative ideation to build the groundwork for future investments and initiatives, shaping both what we focus on and how we contribute and engage to do so. The ideal candidate is an entrepreneurial thinker with a strong grasp of complex social issues, systems change, and program design, and an ability to translate strategy into action both in the field and inside philanthropy, combining deep professional experience with strong project management skills and a passion for equity-centered social impact.
NVIDIA - Senior AI Safety Researcher
At NVIDIA we tackle challenges no one else can solve. Our work in Networking and AI is transforming the world's largest industries and profoundly impacting society. NVIDIA Networking product security team is looking for an outstanding technical AI safety researcher with hands-on experience to help us improve the safety posture of AI systems and their infrastructure. In this role you will reduce risk, threats, and vulnerabilities in NVIDIA networking AI products.
Southern California Edison (SCE) - Senior Specialist, AI Governance
Become a Senior Specialist, AI Governance at Southern California Edison (SCE) and build a better tomorrow. In this job, you’ll support the implementation and continuous improvement of our enterprise-wide AI governance program. This role will collaborate with cross-functional teams including Enterprise Risk, Cybersecurity, Law, Privacy, and IT Architecture to ensure responsible and compliant use of AI technologies across the organization. This is a hands-on, detail-oriented role ideal for someone with a foundational knowledge of AI/ML systems and a strong interest in governance, risk, and compliance.
TikTok - Head of Policy, Trust and Safety
The Trust & Safety (T&S) Policy team is dedicated to fostering a safe, inclusive, and positive environment across our platforms. We develop, refine, and enforce community policies that guide user behavior and content creation, ensuring our global community can express themselves freely while being protected from harm. Our work is critical in maintaining user trust and platform integrity. Joining our team means playing a critical role in shaping a safer digital world. We are seeking an experienced and strategic leader to serve as the Head of Policy for Trust & Safety. This senior leadership role is responsible for developing, defining, and overseeing the global policy framework that governs user behavior, content, and interactions on TikTok. You will lead a team of policy experts, set the vision for our safety policies, and ensure they are effectively implemented and enforced across the platform. You will be a key advisor to executive leadership and collaborate extensively across Product, Engineering, Legal, Operations, Communications, and other departments to navigate complex safety challenges, anticipate emerging threats, and ensure our policies reflect both our company values and global regulatory landscapes.
TikTok - Machine Learning Engineer Manager, Trust & Safety
Our Trust and Safety team is fast growing and responsible for building machine learning models and systems to protect our users from the impact of negative content. Our mission is to protect billions of users and publishers across the globe every day. We embrace state-of-the-art machine learning technologies and scale them to moderate the tremendous amount of data generated on the platform. With our team's continuous efforts, TikTok can provide the best user experience and bring joy to everyone in the world.
TikTok - Tech Lead Manager (Backend), Trust and Safety
The Trust and Safety (TnS) engineering team is responsible for protecting our users from harmful content and abusive behaviors. With the continuous efforts of our trust and safety engineering team, TikTok can provide the best user experience and bring joy to everyone in the world. Our team achieves these goals by building content moderation process systems, rule engines, strategy systems, feature engines, human moderation platforms, risk insight systems, and all kinds of supportive platforms across the TnS organization. We are looking for a Tech Lead Manager to join and build our backend software system.
Senior/Executive Level
Centene Corporation - Staff Vice President, AI Governance
The Staff Vice President of AI Governance defines the overarching vision and strategy for Artificial Intelligence (AI) governance, engages with various departments and stakeholders to foster a culture of responsible AI innovation while ensuring compliance with all applicable laws and regulations, and sets the policies, frameworks, and long-term governance objectives that guide the organization's use of AI technologies. This leader ensures the organization is compliant with AI regulatory standards and actively manages AI-related risks (ethical, security, data privacy) at the highest level, safeguarding the company's reputation and fostering trust with members and the public.
Global Payments - Head of Data & AI Governance
The Head of Data and AI Governance at Global Payments will lead and mature the company’s data and AI governance functions. This leader will drive the development of robust frameworks, policies, and tooling to ensure responsible, compliant, and value-driven use of data and AI across the enterprise. The role requires enhancing enterprise literacy around data value, quality, reference data management, and lineage, while also designing and implementing AI governance. The leader will partner with legal, privacy, risk, cybersecurity, and business functions to embed governance as both a defensive and offensive business enabler, and as a prerequisite for AI-ready data.
Google DeepMind - Head of Frontier Policy Partnerships
This is a leadership role heading a new Frontier Policy Partnerships function in our Frontier Policy team. As the lead for this new group, you will develop and execute a global strategy to build deep, strategic partnerships with key governments and create highly impactful ways to land our public policy positions, shaping regulatory frameworks that advance the use of frontier AI systems while managing their risks. This is a unique opportunity to shape the global norms and regulations that will govern the development and deployment of frontier AI, ensuring its benefits are realized safely and responsibly. You will lead a specialized team focused on building holistic, trust-based relationships with governments in priority countries. This involves developing actionable frameworks for AI adoption in strategically important areas like scientific research and public services, while also helping governments build their own capacity for AGI preparedness. You will be responsible for crafting and executing a sophisticated public affairs strategy, shaping policy debates and regulatory frameworks through high-impact, substantive engagement.
JPMorgan Chase - Vice President, Generative Artificial Intelligence Policy and Governance
The Generative AI team in Consumer Home Lending is enabling the practical application of generative AI to transform how Chase serves customers and empowers employees. We operate across three pillars: Solutions (building production-ready AI applications), Governance (ensuring responsible AI deployment), and Enablement (spreading AI capabilities throughout the organization). As a Vice President in Generative Artificial Intelligence Policy & Governance, you will oversee the end-to-end governance process for Generative AI use cases and contribute to the development of new governance policies and procedures. Navigating ambiguity adeptly, adapting to change, and leveraging advanced analytical reasoning and influencing skills are essential for driving mutually beneficial outcomes. Your exceptional communication abilities will foster productive relationships with stakeholders, cross-functional teams, and clients. Through your technical fluency and thought leadership, you will play a pivotal role in achieving business goals, shaping the firm's technology landscape, and moving work forward that has firmwide impact.
🗒️ You can find these roles, and more updated daily, on our Responsible Tech Job Board; they are also shared in our Slack community.
💪Let’s co-create a better tech future
Our projects & links | Our mission | Our network | Email us
Subscribe to All Tech Is Human’s main newsletter for Responsible Tech updates!
🦜Looking to chat with others in Responsible Tech after reading our newsletter?
Join the conversations happening on All Tech Is Human’s Slack (sign in | apply).
We’re building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This powerful network allows us to tackle the world's thorniest tech & society issues while moving at the speed of tech.
Reach out to All Tech Is Human’s Executive Director, Rebekah Tweed, at Rebekah@AllTechIsHuman.org if you are hiring and would like to work with All Tech Is Human to find candidates who are passionate about responsible technology or if you’d like to inquire about featuring a role in this newsletter!