



Jennifer Charters On AI And The Human Side Of Tech


Note: This publication is a summary of, and commentary on, another publication, and may contain editorial bias from the source.



In a candid interview published by Forbes on August 29, 2025, Jennifer Charters—a former senior executive at Microsoft and a leading advocate for responsible AI—shares her perspective on the intersection of artificial intelligence and human values. Drawing on decades of experience in product design, policy, and research, Charters argues that the future of technology hinges on a deliberate, people‑first approach to AI. Her insights, distilled from her time at the forefront of AI development and her ongoing involvement with the Human-Tech Project, underscore the importance of empathy, transparency, and inclusive governance in shaping the next wave of intelligent systems.
A Career Bridging Tech and Humanity
Charters began her career in software engineering in the late 1990s, working on early cloud‑based services at Microsoft. By the 2010s she had risen to lead several high‑profile initiatives, including the company’s “Responsible AI” working group. “I’ve seen how quickly we can move from a vision of automation to a product that affects millions,” she says. “The real challenge is ensuring that vision remains rooted in human well‑being.”
During her tenure at Microsoft, Charters helped launch the company’s AI ethics framework, a set of principles that guide the design, deployment, and monitoring of algorithms. The framework, now a widely cited model, emphasizes fairness, inclusivity, privacy, and accountability—values that Charters argues must be baked into AI systems from day one. She credits the early adoption of these principles with helping the company avoid the kind of high‑profile bias scandals that plagued the industry in the mid‑2010s.
The Human‑Centric Lens
Central to Charters’ message is the idea that AI is not a neutral tool; it is shaped by the data it learns from and the humans who build it. “Every algorithm is a reflection of its creators,” she explains. “If we don’t bring diverse voices into the design process, the technology will echo and amplify existing inequalities.”
She cites a 2023 study from the AI Now Institute that documented how facial‑recognition systems performed less accurately on darker‑skinned faces. “That was a wake‑up call,” Charters says. “It demonstrated that bias is not a technical flaw but a systemic one, rooted in our data sets and our assumptions.”
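The kind of disparity such studies report can be quantified with a simple subgroup audit. The sketch below is illustrative only: the records, group names, and function are invented, not drawn from the study itself.

```python
# Hypothetical audit: compare a classifier's accuracy across demographic
# subgroups and report the gap between the best- and worst-served group.
# All data below is invented for illustration.

def subgroup_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / n for g, n in totals.items()}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
accuracy = subgroup_accuracy(records)
# Disparity: the spread between the most- and least-accurate subgroup.
disparity = max(accuracy.values()) - min(accuracy.values())
```

Even this toy audit makes the point Charters draws from the study: aggregate accuracy can look healthy while one subgroup is served far worse than another.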
To counter such issues, Charters recommends a multi‑layered approach:
- Data Auditing – Systematically assess the representativeness of training data.
- Human‑in‑the‑Loop (HITL) Oversight – Ensure that human reviewers can intervene before automated decisions are finalized.
- Continuous Monitoring – Track model performance over time and update models as societal norms evolve.
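As a rough sketch, the three layers above might be wired together as follows. The function names, thresholds, and sample data are all assumptions for illustration; they are not part of the Human-Tech Project’s actual toolkit.

```python
# Illustrative sketch of the three safeguards: data auditing,
# human-in-the-loop oversight, and continuous monitoring.
from collections import Counter

def audit_representation(samples, min_share=0.1):
    """Data auditing: flag groups underrepresented in the training data."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    return [g for g, n in counts.items() if n / total < min_share]

def hitl_decide(score, threshold=0.9):
    """HITL oversight: automate only high-confidence decisions;
    route everything else to a human reviewer."""
    return "auto_approve" if score >= threshold else "human_review"

def monitor_drift(baseline_acc, current_acc, tolerance=0.05):
    """Continuous monitoring: signal when live accuracy drifts from baseline."""
    return (baseline_acc - current_acc) > tolerance

samples = [{"group": "a"}] * 19 + [{"group": "b"}]
flagged = audit_representation(samples)   # group "b" is only 5% of the data
decision = hitl_decide(0.72)              # below threshold: goes to a human
drifted = monitor_drift(0.91, 0.84)       # accuracy fell 7 points: alert
```

In practice each layer would feed a real review process, but the structure is the same: check the data before training, gate decisions during operation, and watch performance after deployment.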
These practices are embedded in the Human-Tech Project’s “AI Health Check” toolkit, which Charters has championed in recent policy workshops. The toolkit, freely available on the project’s website, guides organizations from startup to Fortune‑500 level in implementing responsible AI safeguards.
From Theory to Practice: Real‑World Applications
Charters draws on concrete examples from her career to illustrate how human‑centric design can be operationalized. One case study involves the development of Microsoft’s “Copilot” series—AI assistants that help users write code, generate content, and even design user interfaces. While the products were praised for boosting productivity, early adopters reported concerns about over‑reliance and a subtle shift toward “algorithmic authority.” Charters describes how Microsoft introduced an “Explainability” feature that surfaced the underlying logic behind each suggestion, allowing users to make informed choices.
Another example comes from the nonprofit sector. Charters partnered with a global health NGO to deploy a predictive model that identifies regions at risk of malaria outbreaks. By involving local health workers in the model’s validation phase, the team achieved higher adoption rates and more accurate predictions than previous, purely data‑driven approaches. Charters argues that “the success of this project hinged on trust—a trust that can only be earned when humans are seen as co‑creators rather than passive recipients.”
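A minimal sketch of the co-validation pattern Charters describes might look like this. The risk model, region data, and override mechanism are hypothetical stand-ins, not the NGO’s actual system.

```python
# Human co-validation sketch: a model proposes risk labels, and local
# domain experts can confirm or override each one before adoption.
# The scoring formula and data are invented for illustration.

def predict_risk(rainfall_mm, cases_last_month):
    """Toy risk score standing in for the NGO's predictive model."""
    return min(1.0, 0.004 * rainfall_mm + 0.01 * cases_last_month)

def validate_with_experts(regions, expert_overrides):
    """Local workers review each region's model label and may override it."""
    results = {}
    for name, rain, cases in regions:
        model_label = "at_risk" if predict_risk(rain, cases) >= 0.5 else "low_risk"
        results[name] = expert_overrides.get(name, model_label)
    return results

regions = [("north", 150, 20), ("south", 40, 2)]
# A field worker flags "south" despite its low model score.
labels = validate_with_experts(regions, {"south": "at_risk"})
```

The design choice matters: the override path makes local workers co-creators of the final labels, which is exactly the trust mechanism Charters credits for the project’s higher adoption rates.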
Regulation, Policy, and Global Collaboration
Beyond industry best practices, Charters is an outspoken advocate for robust policy frameworks that keep pace with AI’s rapid evolution. She references the European Union’s AI Act—a comprehensive regulatory scheme that classifies AI systems by risk level—and notes that the United States has been developing its own “AI Bill of Rights.” Charters stresses that regulation should be guided by ethical principles rather than reactive to crises.
In a recent Harvard Business Review op‑ed titled “AI and the Human Code,” she and a co‑author argue that policy makers must consider the socio‑cultural context in which AI operates. “Technology cannot be isolated from the societies it serves,” she writes. “Regulation should be a dialogue, not a monologue.”
She also champions international cooperation, citing the Global AI Governance Initiative launched by the World Economic Forum (WEF) in 2024. This initiative brings together academia, industry, and civil society to establish shared norms and standards for AI. Charters notes that “global collaboration is essential because data flows don’t respect borders.”
Looking Ahead: The Future of Human‑Centric AI
When asked about the next decade, Charters is cautiously optimistic. She predicts that AI will become increasingly pervasive—embedded in everything from personal assistants to public infrastructure. Yet she warns that “without intentional human stewardship, we risk creating systems that are efficient but alienated.”
Key areas she identifies for future focus include:
- Explainable AI (XAI): Making complex models transparent to users.
- Human‑Centered Design Education: Integrating ethics and design thinking into STEM curricula.
- Bias‑Resistant Data Pipelines: Building data governance frameworks that proactively address representativeness.
- Cross‑Sector Partnerships: Leveraging collaborations between tech companies, governments, and NGOs to create AI solutions that are socially responsible.
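As one small illustration of the first item, explainability is straightforward for simple model families: a linear model’s prediction can be decomposed into per-feature contributions and surfaced to the user. The weights, feature names, and inputs below are made up for illustration.

```python
# Hedged XAI sketch: for a linear model, each feature's contribution
# (weight x value) can be shown directly alongside the score.
# Weights and applicant data are hypothetical.

def explain_linear(weights, features):
    """Return each feature's signed contribution to the model score."""
    return {name: weights[name] * value for name, value in features.items()}

weights = {"income": 0.5, "debt": -0.8, "tenure_years": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "tenure_years": 1.0}

contributions = explain_linear(weights, applicant)
score = sum(contributions.values())
# The single factor that moved the score the most, in either direction.
top_factor = max(contributions, key=lambda k: abs(contributions[k]))
```

Complex models need heavier machinery (surrogate models, attribution methods), but the goal Charters describes is the same: let users see why a suggestion was made, not just what it was.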
Charters concludes by emphasizing that the “human side of tech” is not a niche concern but a foundational pillar for all AI endeavors. “When we remember that the ultimate goal of technology is to enhance human flourishing, we’ll design AI that does not replace people, but elevates them.”
Key Takeaways
| Point | Summary |
|---|---|
| Role of Empathy | AI systems must be built with an understanding of diverse human experiences. |
| Ethical Frameworks | Principles like fairness, inclusivity, and accountability should guide design and deployment. |
| Human‑in‑the‑Loop | Ongoing human oversight prevents blind automation. |
| Policy Engagement | Regulation should be proactive, ethically grounded, and globally coordinated. |
| Future Direction | Focus on explainability, education, and cross‑sector partnerships to ensure technology serves humanity. |
Jennifer Charters’ reflections illuminate a path forward where artificial intelligence remains a tool of empowerment rather than a driver of division. Her call to embed humanity at the heart of every algorithm resonates across the tech landscape, reminding us that the true measure of progress lies not in speed or scale, but in how well technology supports the people it is meant to serve.
Read the full Forbes article at:
https://www.forbes.com/sites/peterhigh/2025/08/29/jennifer-charters-on-ai-and-the-human-side-of-tech/