New Responsible Tech and Social Impact roles plus How to Get the Right Internship in Responsible Tech, Tips on Joining a Job Cohort, & more
It's December, but we're not yet seeing a slowdown in available opportunities!
👋 Happy December!
We’re excited to bring you a second installment of the All Tech Is Human Responsible Tech Careers newsletter. Every two weeks, you’ll receive:
Featured career-related resource
Featured program to help your career growth
Tip from the All Tech Is Human team
New opportunities in Responsible Tech (including all experience levels)
**If you’re not interested in receiving these resources, please feel free to unsubscribe at any time.**
This newsletter is focused on empowering you to build a meaningful career in Responsible Tech, Public Interest Technology, or Social Impact Tech.
🎯 Our aim is to provide you with more insight into the rapidly evolving Responsible Tech ecosystem and how it relates to getting a role or advancing in your career. Through this new careers-focused newsletter, you will receive updates every other week on new opportunities from the most popular job board in Responsible Tech, along with Responsible Tech career updates, trends we’re seeing, evergreen content from our archives, ecosystem-wide reports, and insights from experts about what’s next in Tech.
🤝 This newsletter comes out of our efforts this past year to combine our Responsible Tech talent pool with our social impact talent matchmaking platform Tekalo to provide better access to all of the existing resources at ATIH, including a job board, Mentorship Program, Responsible Tech University Network, free livestreams and in-person events, and a Slack community. In particular, numerous Responsible Tech jobs are being shared and discussed every day through our large Slack community (Sign In | Apply).
👑 Also consider subscribing to All Tech Is Human’s main newsletter, which focuses on issues and opportunities in the Responsible Tech ecosystem at large and arrives in your inbox every other week (on opposite weeks from this Careers Newsletter).
🧠 In this newsletter, you’ll find:
Livestream: How to Get the Right Internship in Responsible Tech
Resource: How to Start Your Responsible Tech Journey
Program: All Tech Is Human’s Responsible Tech Slack Community
Tips on Joining a Job Cohort
33 of the best newly-posted roles from the Responsible Tech Job Board
Now, onto the newsletter! 👇
Featured Resource:
Getting the Right Internship in Responsible Tech
How can you land the right internship in Responsible Tech? Watch this recent livestream, moderated by Steven Kelts (All Tech Is Human’s Responsible Tech University Network), featuring Audrey Chang (Senior, Harvard College), Shivani Sundaresan (Software Engineering Intern, Microsoft), Carissa Anderson (MSc. Student, Vrije Universiteit Brussel), Nikayda Harris (AI Policy Analyst, Canadian Department of National Defence), and Nathan Darmon (Researcher for the Exec. Director of the Center for Humane Technology), for an interactive conversation about best practices and strategies for discovering the perfect fit for you.
📜 BONUS RESOURCE: If you’re new to Responsible Tech, it can feel overwhelming knowing where to get started! Check out our resource “Start Your Responsible Tech Journey” here.
📅 On March 4, 2025, we will be releasing a Responsible Tech Careers Report.
Featured Program: All Tech Is Human’s Responsible Tech Slack Community
Apply to join All Tech Is Human’s Responsible Tech Slack Community!
Our organization is bringing together thousands of people across the globe to share knowledge, connect with one another, find collaborators, learn about projects and initiatives, find job openings, and more!
Our Slack community is a springboard to action. We currently have over 11k members (and growing) across 101 countries focused on our key areas of:
Responsible AI
Trust & Safety
Cyber & Democracy
Tech Policy
Public Interest Technology
Youth, Tech, and Wellbeing
In addition, we have channels centered around specific cities and regions, channels for learning about events and jobs, channels for sharing your projects and research, and so much more.
Tips for Growing Your Career in Responsible Tech: Join a Cohort
All Tech Is Human community member Samantha R., Product Manager and Community Builder, recently launched a Job Support Cohort in the All Tech Is Human Responsible Tech Slack! Join the weekly virtual meet-ups to benefit from this innovative community initiative.
From Samantha, “We took another collective step in this new job search strategy, which is to search as a cohort. The hope is to explore the ecosystem together, land a role, and build strong professional bonds and accountability partners with each other along the way.”
In early December, the cohort met virtually for an AMA session with Dr. Cari Miller, Head of AI Governance & Research at the Center for Inclusive Change.
Takeaways from the AMA, discussion, and additional job-seeking tips included:
Focus on a specific niche or area of expertise.
Volunteer extensively with relevant organizations.
Look for ways to turn volunteer work into paid consulting gigs.
Be willing to start with small paid opportunities and build from there.
Consider partnering on projects to access larger opportunities.
Utilize a self-reflection exercise like the Mnookin 2-Pager to clarify your career goals, strengths, weaknesses, and preferences.
Share your self-reflection with professional contacts you trust for feedback to refine your career goals and job search strategies.
Advice from Samantha R.: “Remember, you are not alone in your passion to see ethical practices implemented in the technology you see the world engaging with — and you are not alone in this crazy employment market. From CFOs to junior roles we are peers in this position of looking for work and we are better together. You don't have to search alone.”
New Opportunities in Responsible Tech (listed from entry to senior roles)
🎉 FEATURED ROLE 🎉
Fast Forward: Director of Development
Tech is ubiquitous. Tech solutions for social problems are not. An emerging class of startups is building tech to solve these social problems. They are tech nonprofits. Fast Forward bridges the tech and nonprofit sectors to build capacity for tech nonprofits, so they can scale solutions to our world’s most urgent problems. Building on Fast Forward’s existing creative fundraising strategy, this role focuses on opening doors to new opportunities for both Fast Forward and tech nonprofits. This hands-on position offers a unique opportunity to execute Fast Forward’s fundraising strategy, particularly through partnerships with corporate partners.
Internships
Hugging Face: Machine Learning Engineer Internship, AI Energy Score
At Hugging Face, we’re on a journey to democratize good AI. The energy requirements of machine learning models have been rising in recent years, raising concerns regarding the impacts of this on energy grids and the environment. Building upon the AI Energy Score project, this internship will continue experimentation and analysis to get a better understanding of the energy efficiency of different models and deployment contexts (hardware, optimization techniques, serving stacks).
Microsoft: Research Intern - Biomedical AI for Precision Health
The advent of big data heralds a new era of precision healthcare, where medicine is tailored to individuals, reducing cost, suffering, and missed treatment opportunities. Artificial intelligence (AI) can play a key role in this transformation by discerning knowledge from data and separating signal from noise. We aspire to advance AI toward developing health systems that can instantly incorporate any new information to optimize delivery and accelerate discovery. A key bottleneck is that current health systems are mired in overwhelming unstructured data and non-scalable manual processing. Recent advances in generative AI, such as large language models (LLMs), offer unprecedented “universal structuring” capabilities that can supercharge health information processing and unlock many high-value applications in real-world evidence and precision health.
Fellowships
Anthropic: AI Safety Fellow
The Anthropic Fellows Program is a 6-month external collaboration program focused on accelerating progress in AI safety research by providing promising talent with an opportunity to gain research experience. Our goal is to bridge the gap between industry engineering expertise and the research skills needed for impactful work in AI safety.
Impact Academy: Global AI Safety Fellowship
Impact Academy’s Global AI Safety Fellowship is a fully funded research program for up to 6 months for exceptional STEM talent to work with the world’s leading AI safety organisations to advance the safe and beneficial development of AI.
University of Oxford, Balliol College: Postdoctoral Research Fellow in Philosophy/Ethics in AI (AI and Theoretical Philosophy)
Applications are invited for a full-time Early Career Research Fellowship in Ethics in AI, with a focus on AI and Theoretical Philosophy (philosophy of mind, philosophy of logic and language, epistemology, and metaphysics). The position is associated with a research fellowship at Balliol College, Oxford. This position will be especially suitable for a candidate with a strong research background in some branch of Theoretical Philosophy who wishes to use this as a springboard for engaging with ethically significant philosophical questions raised by AI technology. The Institute will build upon the University’s world-class capabilities in the humanities to lead the study of the ethical implications of artificial intelligence and other new computing technologies. The Early Career Research Fellow will pursue their own research under the supervision of the Ethics in AI Institute Director, John Tasioulas.
Entry Level
Nvidia: Solution Architect, Earth-2 NVIS- New College Graduate
NVIDIA is building Earth-2, a platform to accelerate production of digital twins of the Earth, combining GPU-accelerated computing, deep learning and breakthroughs in physics-informed neural networks, and machine learning emulation of physics predictions, along with vast quantities of observed and model data to learn from. NVIDIA aims to harness the power of artificial intelligence to substantially mitigate climate change. The Earth-2 Solution Architect team will support efforts in the fields of weather and climate risk modeling, mitigation, monitoring, adaptation, disaster response, and weather extremes. Solution architects work closely with customers and help them deploy our solution stack on their systems. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. We are looking for a solution architect who can work jointly with the NVIDIA Infrastructure Specialists Team to deploy Earth-2 on the largest and fastest AI/HPC systems.
Trustpilot for Business: Escalations Specialist, Content Integrity
Content Integrity (CI) aspires to provide best-in-class customer service while maintaining and protecting the integrity of Trustpilot’s guidelines and brand, striving towards our mission of becoming a universal symbol of trust. As a Specialist in our Content Integrity function, your priority is to ensure our customers (consumers and businesses) understand our values. Where content that violates our guidelines is identified, our goal is to act quickly, providing a clear outcome for our customers. You will play a significant part in ensuring Trustpilot remains the go-to review site by ensuring that content can be trusted.
Early Career
Abcam: Data and AI Governance Lead (12 months Fixed Term Contract)
Lead data governance for AI at Abcam. Work across Life Science Operating Companies at Danaher and facilitate data governance to scale Commercial AI across Danaher. Design, implement, and oversee practical data governance frameworks that leverage the latest technologies to support organizational needs; identify opportunities and make recommendations to improve data quality and trustworthiness and to scale AI. Establish and implement standards and policies while engaging teams to proactively participate and drive common goals, enhancing data lifecycle processes and ensuring compliance with relevant laws and regulations. Lead data stewardship initiatives and promote a culture of data accountability and responsibility. Foster the utilization of a centralized data catalogue and business glossary. Provide expertise and support in the implementation of Customer Data Platforms (CDP), Master Data Management (MDM), databases, and Customer Identity and Access Management (CIAM) solutions.
Ascensus: Responsible AI Governance and Risk Leader
The Responsible AI Governance and Risk Leader is a critical member of Ascensus’ AI leadership team and responsible for establishing, overseeing, and enforcing the Responsible AI governance framework across Ascensus by ensuring that AI initiatives adhere to ethical standards, legal and regulatory requirements, enterprise risk management principles, and organizational values. This role owns central coordination of AI policies, assists the company with assessing, measuring and mitigating risks related to AI, and promotes responsible AI practices to support transparency, accountability, and compliance.
Future of Life Institute: AI Compute Security & Governance Technical Program Manager
Founded in 2014, FLI is an independent non-profit working to steer transformative technology towards benefiting life and away from extreme large-scale risks. Our work includes grantmaking, research, educational outreach, and policy engagement. The Future of Life Institute (FLI) is seeking a technical program manager to support a multi-organizational initiative in AI compute governance, with a focus on mechanisms enabled by hardware security measures. FLI views Compute Security & Governance (CSG) as an integral component in ensuring the safety and control of next-generation frontier AI systems.
Google: AI Safety Analyst, Google Photos
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety. In this role, you will work directly with Product Managers, Engineers, and the Policy and Legal teams to build and execute a comprehensive approach to push the Artificial Intelligence (AI) model to its limits and build resilience against malicious or unexpected inputs. Your creative thinking, deep expertise, problem-solving and collaboration skills will be invaluable to ensure a safe and responsible deployment of AI in Google Photos. This role also requires evaluation of model output, which may include sensitive, graphic, controversial, or upsetting content.
IKEA: Digital Ethicist
We are currently looking for an enthusiastic Digital Ethicist. In this role you will build awareness and drive topics around digital ethics and why it matters to IKEA and wider society. Digital ethics is a discipline that critically assesses the impact of data and digital technologies and produces actionable advice for co-workers. It involves thinking about regulations, accountability, responsibility, ethical principles and moral dilemmas throughout the lifecycle of data and digital technologies. You will empower others to implement best practices in their work by demonstrating how to apply ethical principles through examples, case studies and discussions. You will work closely with the Digital Ethics team and other related functions across Inter IKEA to co-create new practices and standards. You will also work closely together with peers across the Inter IKEA value chain to strengthen and contribute to a strong and relevant IKEA franchise offer by co-creating Inter IKEA’s vision on digital ethics. Together you will develop practices that enhance the IKEA brand around the world and ensure that we use data and digital technologies for good.
Microsoft: Project Manager, Democracy Forward Initiative (Contract)
Join Microsoft’s Democracy Forward Initiative (DFI) as a project manager to support initiatives focused on fostering healthy information ecosystems and backing journalists and publishers. As part of the Technology for Fundamental Rights (TFR) team, you will play a crucial role in safeguarding democratic processes globally and advocating for corporate civic responsibility.
Protect AI: Senior Product Manager
Protect AI is shaping, defining, and innovating a new category within cybersecurity around the risk and security of AI/ML. Our ML Security Platform enables customers to see, know, and manage security risks to defend against unique AI security threats, and embrace MLSecOps for a safer AI-powered world. This includes a broad set of capabilities including AI supply chain security, Auditable Bill of Materials for AI, ML model scanning, signing, attestation, and LLM Security. Join our team to help us solve this critical need of protecting AI! Protect AI is seeking a Senior Product Manager to aid in driving our product strategy and roadmap for our AI Security platform and our open source efforts. The selected candidate will work closely with design, engineering, and go-to-market teams to define and deliver solutions that address the unique security challenges in deploying and operating AI systems.
Takeda Pharmaceutical: Responsible AI Manager
As a Responsible AI Manager at Takeda Pharmaceutical, you will play a pivotal role in developing, implementing, and overseeing responsible AI practices and policies across the organization. Your primary focus will be to ensure that AI technologies and systems are used ethically, transparently, and in a manner that aligns with Takeda’s values and regulatory requirements. You will collaborate with cross-functional teams, including data scientists, engineers, legal experts, and business leaders, to foster a culture of responsible AI and drive initiatives that promote fairness, accountability, and inclusivity in AI applications.
Tony Blair Institute: Associate - Tech & Digital Transformation
As a centre of excellence, Global Client Solutions comprises our centrally situated global and regional experts – functional and sectoral. Our experts provide valuable thought partnership, technical support, strategy and surge capacity, and lead the development and evolution of our solutions, methods and offers in the service of our partner governments. Through our country teams and embedded advisors working hand-in-hand with government, we assemble, tailor and distribute our offer to address specific and ever-evolving needs in the various contexts we operate across the world. Our functional offers include Strategy: supporting political leaders to identify the barriers to progress; Policy: finding the right solutions; Delivery: making change happen; and harnessing the transformative power of Technology to support leaders to engage with forward-looking opportunities.
Umeå University, Faculty of Science and Technology: Staff scientist with a focus on AI policy and governance
The AI Policy Lab seeks a skilled Staff Scientist to drive analysis and thought leadership in the AI policy, governance, and regulatory landscape. In this role, you will monitor global AI policy developments, assess their implications, and deliver strategic recommendations that support responsible AI deployment. This research-focused role involves pioneering analyses, policy briefings, and stakeholder engagement aligned with the Lab’s mission to guide AI’s development for positive societal impact.
World Economic Forum: Specialist, AI Technology and Innovation – AI Governance Alliance
As artificial intelligence continues to redefine the global economy, influencing governments, businesses, and individuals, the Centre for the Fourth Industrial Revolution seeks to advance cutting-edge innovation while ensuring responsible governance. To support these objectives, we are seeking an AI Technology and Innovation Specialist in our San Francisco office to help advance initiatives that focus on AI technology as part of the AI Governance Alliance. This role requires knowledge of recent developments in AI, particularly in emerging domains such as AI agents and large language models. The role entails an ability to translate technical developments into actionable strategies for long-term societal benefit. The Specialist will help coordinate high-impact discussions, produce impactful deliverables, and ensure that outputs contribute meaningfully to global conversations on advanced AI systems.
Mid Career
Center for AI Safety: Policy Lead
The Center for AI Safety Action Fund (CAIS AF) is a nonpartisan advocacy organization dedicated to advancing public policies that maintain U.S. leadership in AI and protect against AI-related national security threats. Alongside our sister organization, the Center for AI Safety (CAIS), we tackle the toughest AI issues with a mix of technical, societal and policy solutions. CAIS is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. The Policy Lead at CAIS AF is a pivotal role that involves steering and managing the organization's Federal policy work. This position requires a strategic mindset and the ability to navigate complex policy landscapes. For 2024-2025, this may include working to maintain U.S. leadership in AI chip manufacturing, compute governance, preventing malicious use of AI, and securing funding for key government agencies.
Credo AI: Product Manager - Workflows & Platform
Credo AI is seeking an experienced Product Manager to join our team on the journey to address the world’s biggest challenges in scaling responsible AI. As the Product Manager for Platform & Workflows, you will own shaping the workflows and core features that power our enterprise grade AI Governance platform. You will be responsible for deeply understanding the needs of our customers in order to shape the roadmap for user-friendly, configurable and automated workflows that allow customers to continually govern AI use cases across their enterprises. You will also own the delivery of other core platform features, such as entitlements, authentication, permissions, etc.
EqualAI: Director (Manager) of Policy & Programs
EqualAI is a high-profile nonprofit organization dedicated to reducing harms in AI and increasing transparency and accountability in artificial intelligence systems by advancing responsible AI governance. EqualAI seeks a strategic and experienced Director (manager) of Programs & Policy to lead and manage our high-impact initiatives aimed at identifying and promoting responsible AI practices. This role requires experience in executing programs and policy development. The Director (manager) of Programs & Policy will play a critical role in coordinating EqualAI’s programs, shaping policy discussions, and developing partnerships with key stakeholders.
Eticas: Head of Research Projects
Eticas teams up with organizations to identify black box algorithmic vulnerabilities and retrains AI-powered technology with better source data and content. Since our inception, we’ve built a track record with a proven methodology that equips clients with a more cognitively diverse algorithm to unearth more accurate outputs that can be turned into competitive advantages. As Head of Projects, you will lead and participate as a project lead in our socio-technical research projects and will coordinate our proposals (combining social science, statistics, data protection, algorithmic auditing, security, etc.). You will participate in projects mostly within the framework of the European Commission Horizon Programme, collaborating with consortia of multiple public and private sector partners and helping shape a more ethical technological future. We are also working with other partners such as the GIZ, UNDP, etc.
Fast Forward: Director of Corporate Partnerships and Fundraising
Tech is ubiquitous. Tech solutions for social problems are not. An emerging class of startups is building tech to solve these social problems. They are tech nonprofits. Fast Forward bridges the tech and nonprofit sectors to build capacity for tech nonprofits, so they can scale solutions to our world’s most urgent problems. This role builds on Fast Forward’s existing creative fundraising strategy, opening the door to new opportunities for both Fast Forward and tech nonprofits. This hands-on position offers the unique opportunity to execute Fast Forward’s fundraising strategy with corporate partners.
LinkedIn: Senior Manager, Trust Investigations
The Trust Investigations Senior Manager is responsible for leading Trust Investigations’ operational function, a global team with managers and staff located principally in Omaha, NE, and Bangalore, India. The Sr. Manager will be responsible for setting strategy, ensuring cohesive operations, developing metrics, and ensuring the accuracy of our scaled abuse enforcements. A successful candidate will have experience leading investigations teams and ensuring their insights are applied to Trust systems, including to rules engines and ML systems. This role is cross-functional in nature and will expose the right candidate to extensive experience working with various roles in Trust.
Strava: Group Product Manager, Trust & Safety
As the Group Lead of Trust & Safety, you will be responsible for building safety features, growing and scaling product launch processes, and shaping strategic product roadmaps and tools. You will be tasked with connecting the big-picture with the small details that make elegant and fully thought-through solutions. You will collaborate with cross-functional partners to set a broad strategic vision to prevent fraud and abuse and enhance trust. You will execute the vision by designing and implementing technical ML/AI solutions, increasing automation and working cross-functionally to address problems related to feature safety, privacy, and platform integrity.
Swift: AI Governance Lead
Are you motivated to take the lead in the forefront field of AI governance? Do you excel in a dynamic, collaborative environment where your ideas and leadership can craft the future of responsible AI? Joining Swift as an AI Governance Lead means being part of a world-class team that is at the forefront of AI policy and ethical development. Our team is dedicated to crafting a flawless and secure approach to AI governance, ensuring compliance and encouraging innovation. This is an outstanding opportunity to engage with collaborators worldwide, influence AI policy, and drive responsible AI practices within a leading financial messaging service provider.
Senior/Executive Level
Indeed: Director, Product Management - Responsible AI
Indeed is seeking a Product Director to help responsibly develop Indeed’s next generation of AI systems and ensure they are beneficial for job seekers, employers, and society. You will facilitate and drive the entire lifecycle of product development: building, owning, and maintaining the processes and tools that Indeedians use to build AI systems that are more fair and less biased. As the Director of Product Management for Responsible AI, you will work directly with product teams across Indeed, Legal, and senior leaders to align Responsible AI’s product vision and strategy with the company’s overall AI efforts. You will be responsible for creating a cohesive strategy to set Indeed up to do AI responsibly as a first principle by designing and developing cutting-edge technologies and processes across the AI lifecycle. You and your team will work closely with customers to establish how best to enable them to do AI responsibly. You will directly manage a cross-functional team composed of multiple pods of scientists, engineers, and product managers. Additionally, you will be responsible for scaling the team’s capabilities and impact through mentorship and management. You will apply your knowledge of AI-driven product development, innovation at scale, and navigating complex regulatory environments. Doing so while growing and leading a team that turns strategy into reality will make you successful in this role.
JPMorganChase: Corporate Responsibility – Executive Director, AI & Technology Policy Partnerships, Global Government Relations
The AI and Data Policy team is seeking an Executive Director, Artificial Intelligence (AI) and Technology Policy Partnerships, to lead and manage our external relationships on AI, data, and related technology policy issues. This role will involve strategic oversight and relationship management with third parties, including financial services, technology, and general business trade associations, coalition bodies, financial services industry and tech industry policy counterparts, research institutions, and global civil society contacts focused on AI and data policy issues. Key responsibilities include working with the Global AI Policy Office and firmwide partners to develop and execute advocacy strategies supported by third party engagement to influence AI, data, and technology policy at the national and international levels.
Kroll: Vice President, Data Scientist, AI Risk
We are the only company in the world with the expertise and resources to deliver global, end-to-end cyber risk management, supporting organizations through every step of their journey toward cyber resilience. The ideal candidate will possess a strong foundation in data science and machine learning with an ability to understand technical model components, and curiosity about the business of our clients. An understanding or awareness of relevant legal and regulatory frameworks is helpful. We are seeking a highly skilled Data Scientist with expertise in Artificial Intelligence (AI) and Machine Learning (ML) model validation to join our team. This role is critical in helping our clients ensure the accuracy, reliability, and compliance of AI/ML solutions.
Lyft: Senior Manager, Trust & Safety Policy
As a Trust & Safety Policy Development Manager, you will play a pivotal role in shaping and implementing platform safety policies that uphold these principles, ensuring our community remains safe and reliable. Lyft is looking for a Safety Policy Development Manager to join our Safety and Customer Care team based in San Francisco. This is a manager role that includes both people management and senior IC work to build out a dedicated policy development function within Lyft’s Safety Operations team. This role will be responsible for creating, updating, and maintaining the policies our customer-facing Safety teams apply when reviewing and responding to safety incidents on the platform.
U.S. Technology Transformation Services: Senior Advisor for Technology
The TTS Senior Advisor for Technology advises the TTS Director, Deputy Director, and Deputy Director of Operations on digital technology and IT architecture. This person helps identify cross-cutting technical challenges, evaluates possible solutions, and gets teams and stakeholders on the same page. They need to understand TTS' products, services, and operations, and bring that perspective to discussions and negotiations with TTS teams, other GSA offices, and external partners. The Senior Advisor also works with other advisors to ensure that policy, delivery, user needs, and technical perspectives are considered in TTS' leadership decisions.
U.S. Office of the Director of National Intelligence: Senior Analyst for Future Technology Impacts on Intelligence
The ADNI for RC&E is seeking officers with the right skills and experience to perform the following:
Serve as lead analyst of technology issues for IC Net Assessments responsible for leading comparative assessments of the nature and character of future intelligence competitions and collaborating with counterintelligence components within the IC.
Provide guidance to teams of senior-level analysts and contractors conducting strategic forecasting, technical analysis, and comparative assessments in support of strategic intelligence net assessments.
Develop and conduct analytic methodologies like simulation, modeling, scenario planning, and war games to assess prospective intelligence net assessments.
Develop, utilize, and refine a full range of methodological tools and approaches to gain a comprehensive understanding of complex and significant analytic issues and incorporate insights and findings into well-crafted, sophisticated intelligence products.
Prepare expert findings, reports, briefing papers, and other communication vehicles to present net assessment findings, along with options and strategic considerations, to senior IC leadership.
Engage IC stakeholders on IC strategic priorities, capabilities, needs and gaps, and cross-IC interdependencies.
Recognize, value, build, and leverage diverse collaborative networks with the ODNI and across the IC.
Engage outside experts on emerging trends and future issues of relevance to future intelligence capabilities.
🗒️ You can find these roles, and more, updated daily on our Responsible Tech Job Board; they are also shared in our Slack community.
💪Let’s co-create a better tech future
Our projects & links | Our mission | Our network | Email us
Subscribe to All Tech Is Human’s main newsletter for Responsible Tech updates!
🦜Looking to chat with others in Responsible Tech after reading our newsletter?
Join the conversations happening on All Tech Is Human’s Slack (sign in | apply).
We’re building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This powerful network allows us to tackle the world's thorniest tech & society issues while moving at the speed of tech.
Reach out to All Tech Is Human’s Executive Director, Rebekah Tweed, at Rebekah@AllTechIsHuman.org if you are hiring and would like to work with All Tech Is Human to find candidates who are passionate about responsible technology, or if you’d like to inquire about featuring a role in this newsletter!