Building Psychological Safety in AI Development

Conor Bronsdon
Head of Developer Awareness
4 min read · January 29, 2025

AI isn't just boosting productivity—it's transforming how developers interact, collaborate, and innovate. On an episode of the Chain of Thought podcast, Conor Bronsdon, Head of Developer Relations at Galileo, spoke with Rizel Scarlett, Staff Developer Advocate at Block, about AI's impact on open source, psychological safety, and inclusive developer environments.

They explored how open-source AI projects create genuine community engagement opportunities. "When people are contributors, they're excited," Scarlett explains, highlighting how these projects democratize development.

Their conversation reveals how AI is reshaping software development beyond productivity: it's creating inclusive spaces, building trust within teams, and showing how AI and developers can collaborate safely, ethically, and innovatively.

The Transformation of Developer Experience Through AI Tools

The developer landscape is experiencing unprecedented transformation as AI integration redefines workflow practices and team psychological dynamics. Tools designed with developer experience in mind are creating new possibilities for collaboration, learning, and innovation across experience levels and backgrounds.

Subscribe to Chain of Thought, the podcast for software engineers and leaders building the GenAI revolution.

Redefining Productivity Beyond Code Output

Developer tools are evolving past mere code production acceleration. AI tools like Block's Goose are changing the landscape by streamlining engineering tasks, enabling developers to maintain focus while balancing coding with other responsibilities like maintaining open-source projects. This shift supports rapid AI deployment, centering on cognitive load reduction and allowing developers to focus on high-value creative work.

Strategic AI implementation, Scarlett explains, enables engineers to preserve mental energy for complex problem-solving and architectural decisions by automating routine tasks.

The productivity transformation also reframes success metrics from lines of code to impact-oriented measures. Teams adopting AI collaborative tools often report greater satisfaction with work-life balance while simultaneously delivering more meaningful contributions to their projects.

Impact on Psychological Safety

AI integration in developer workflows is creating psychological safety, especially for junior developers and diverse teams. These tools enable developers to experiment without fearing mistakes, acting as supportive peer programmers and creating safe spaces for growth.

"If I had this when I was a junior developer, this would have made me so much more confident," Scarlett reflects. Modern AI tools function as collaborative partners, offering suggestions without judgment and encouraging experimentation. They serve as guardrails that prevent catastrophic errors while still allowing developers to learn through problem-solving.

The psychological impact extends beyond individual confidence to team dynamics, leveling experience gaps and reducing intimidation factors. When everyone has access to intelligent assistance, collaboration becomes more equitable and ideas flow more freely regardless of seniority.

Empowering Junior and Underrepresented Developers Through AI

The democratization of development knowledge through AI tools is reshaping entry pathways into tech careers. By reducing knowledge barriers and providing contextual assistance, these technologies create more accessible learning environments for developers from all backgrounds.

Creating Equitable Learning Environments Through AI

AI tools create safe spaces for judgment-free experimentation, allowing developers to learn at their own pace. Scarlett emphasizes how these tools transform uncertainty into confidence. For underrepresented groups, AI interactions level the playing field, breaking down barriers and fostering inclusion.

Many worry AI might replace junior developers, but the reality points in another direction. AI can act as a silent mentor that provides judgment-free support, boosts productivity, and helps developers build confidence, particularly those from underrepresented groups who may lack traditional support networks within the industry.

The private learning environment eliminates social concerns that often prevent developers from asking questions—particularly benefiting individuals from underrepresented groups who may feel additional pressure to appear knowledgeable in workplace settings.

Learning patterns become more personalized, accommodating different processing styles and background knowledge levels. This flexibility creates room for diverse approaches to problem-solving, enriching the development ecosystem with varied perspectives and solutions.

Scarlett says AI offers immediate feedback, removing the "just Google it" hurdle. Tools like GitHub Copilot let developers tackle challenges without feeling intimidated by experienced colleagues. The confidence gained provides "psychological safety," empowering developers to take on once-daunting projects.

For developers from underrepresented backgrounds, AI creates equitable learning opportunities without fear of judgment. This support encourages a more inclusive tech landscape where everyone can grow without unnecessary barriers.

AI fosters a supportive environment where developers, especially those from diverse backgrounds, can pursue ambitious projects and drive innovation. It's not just a tool—it's a catalyst for positive change in the developer community.

Open-Source AI as a Framework for Transparency and Safety

Block made Goose open source to enhance psychological safety through transparency. As Scarlett explains, open-source serves as "authentic marketing" where contributors become natural advocates. "It's authentic and free marketing where you have people just excited," showing how open source generates genuine enthusiasm.

Transparency is a key open-source AI advantage. While closed systems feel mysterious, open source reveals what's happening inside. Scarlett captures this perfectly: "A lot of the tools with AI, it just feels like a black box. You're like, this is just magic. Like, what's really happening?" Open-source frameworks demystify AI, boosting confidence and understanding.

These projects also allow teams to customize tools for their specific psychological safety needs. This flexibility, along with efforts in standardization in AI, helps ensure safe AI integration, allowing AI solutions to integrate smoothly into different environments while respecting each team's unique setup.

Open-source AI transparency lets developers see how AI makes decisions, building trust and creating an environment where questioning and validating outputs become standard practice. Community-driven innovation pushes beyond what any single organization can achieve, continuously improving AI tools over time.

This combination of transparency and psychological safety creates strong team dynamics, encouraging open engagement with AI solutions and fostering inquiry and improvement. Open-source AI's collaborative nature drives powerful innovation within organizations while building trust and encouraging collective growth.

Balancing AI Implementation with Responsible Governance

Responsible governance is essential in today's AI advancement. "Don't just check in AI code. Review what it's giving you," advises Scarlett, emphasizing careful examination of AI outputs.

Implementing AI responsibly involves several strategic steps. Comprehensive training programs equip developers to handle AI-generated outputs effectively while maintaining high standards. Utilizing agentic AI frameworks can transform AI workflows and aid in secure deployment.

Also, strong evaluation frameworks, including both automated testing and human oversight, ensure reliable results. Platforms like Galileo offer advanced metrics for AI evaluation without traditional benchmarks, enabling faster improvements.
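As a rough illustration of what such an evaluation framework can look like in practice, the sketch below combines a few automated checks with an escalation path to human oversight. All names and checks here are hypothetical placeholders; a real pipeline would plug in model calls and richer metrics from an evaluation platform.

```python
# Minimal sketch of an automated evaluation gate for AI-generated output.
# The checks and thresholds are illustrative, not a prescribed standard.

def evaluate_output(output: str, forbidden: list[str], max_len: int = 2000) -> dict:
    """Run simple automated checks and return per-check results."""
    results = {
        "non_empty": bool(output.strip()),
        "within_length": len(output) <= max_len,
        "no_forbidden_terms": not any(
            term.lower() in output.lower() for term in forbidden
        ),
    }
    results["passed"] = all(results.values())
    return results

def needs_human_review(results: dict) -> bool:
    """Any failed automated check escalates the output to a human reviewer."""
    return not results["passed"]
```

The point of the design is that automation handles the cheap, repeatable checks, while any failure routes the output to the human-oversight half of the framework rather than silently discarding it.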

The review process remains critical for maintaining AI output quality. Teams should integrate AI-generated code into regular code reviews, treating it with the same scrutiny as human-written code. This practice improves output quality and strengthens psychological safety by providing reliable guardrails. Implementing effective AI evaluation strategies supports this review process.
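One lightweight way to enforce that practice is to make review requirements explicit in tooling. The sketch below assumes a hypothetical team convention of marking AI-assisted work with an "AI-Assisted" commit trailer (this is a team policy, not a git standard) and routes such commits through at least as much review as human-written code.

```python
# Illustrative sketch: AI-assisted commits get the same or stricter review.
# The "AI-Assisted:" trailer is an assumed team convention for this example.

def required_reviewers(commit_message: str, base_reviewers: int = 1) -> int:
    """Return how many reviewers a commit needs under this example policy.

    AI-assisted commits receive one extra reviewer on top of the baseline,
    so generated code never gets less scrutiny than human-written code.
    """
    ai_assisted = any(
        line.strip().lower().startswith("ai-assisted:")
        for line in commit_message.splitlines()
    )
    return base_reviewers + 1 if ai_assisted else base_reviewers
```

Whether a team adds reviewers, extra tests, or just a checklist item, the design choice is the same: the review burden is decided by policy up front, not by whoever happens to notice the code was generated.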

Organizational changes must support effective AI adoption through continuous learning and adaptation. Introducing AI tools should align with evolving team dynamics to integrate new technologies without sacrificing quality or security.

Also, AI-specific training prepares team members to interpret and use AI outputs responsibly, maintaining high standards in software development. Evaluation frameworks ensure outputs meet quality standards while enhancing accuracy.

Responsible governance safeguards psychological safety during technological changes. Reliable systems reassure teams about AI applications' quality and security, creating a supportive environment where organizations can leverage AI's benefits while maintaining a trustworthy work environment.

Harnessing AI for More Inclusive Developer Ecosystems

Organizations can make a significant impact by adopting open-source strategies and thoughtful AI integration. This approach enables diverse teams to collaborate, innovate, and overcome challenges, enriching the developer ecosystem. Developers and companies should explore Galileo's AI evaluation tools to ensure both innovative and responsible transitions.

Listen to the full podcast conversation to dive deeper into this fascinating discussion on AI's transformative impact and discover practical strategies for creating more inclusive, innovative developer environments.

Want more insights on Generative AI? Listen to Chain of Thought episodes, where software engineers and AI leaders share stories, strategies, and practical techniques.