Hi everyone,
It feels like we are at a point where AI is no longer just a research topic or a future ambition, but something actively reshaping how industries, organisations and even societies operate. From healthcare diagnostics and predictive maintenance to fraud detection and large-scale data analysis, AI systems are moving from experimental projects into core infrastructure.
What stands out is that the impact is not only technical. AI is influencing how decisions are made, how risk is assessed, and how responsibility is shared between humans and systems. As models become more capable, questions around explainability, governance and real-world deployment are becoming just as important as performance metrics. For many developers and architects, the job now sits at the intersection of engineering, ethics and communication with non-technical stakeholders.
Alongside this shift, there are several thinkers and leaders helping shape how the world understands and guides AI’s development. People like Nick Bostrom have explored the long-term implications and risks of advanced AI, while Yann LeCun continues to push the boundaries of machine learning research itself. Closer to the intersection of technology and society, figures such as Wendy Hall have been influential in AI policy and the future of the web, Nina Schick looks at AI’s impact on information and geopolitics, and Kay Firth-Butterfield focuses on governance and responsible AI at a global level.
I would be really interested to hear from the community:
• In your day-to-day work, where are you most seeing AI change how systems are designed or used?
• Are non-technical considerations such as governance, explainability or regulation becoming a bigger part of your role?
• Do you feel the wider public conversation about AI reflects the realities you deal with as developers?