Artificial intelligence’s rapid evolution is one of the defining challenges and opportunities of our time. Working at the intersection of IoT, supply chain security, open-source communities, and documentation, I’ve always been both fascinated and concerned by the trajectory of AI systems.
Questions like “How can we ensure AI aligns with human values?” and “What safeguards are necessary to prevent catastrophic risks?” have kept me awake at night. BlueDot Impact’s Intro to Transformative AI course felt like a natural first step toward building safer AI systems.
🔍 What’s the course all about?
The course’s premise is compelling: a five-day intensive program designed for professionals eager to upskill rapidly. Participants learn about Transformative AI (TAI) and its implications for humanity, a perfect challenge for the last week of 2024.
The key focus on AI safety stood out to me: a field that bridges technical rigor with ethical foresight. The course examined how AI models are trained, the people who build them, and the risks and benefits they bring.
Got Accepted! 🎉
My Cohort
My cohort included professionals from law, policy, governance, engineering, and academia, all bringing unique perspectives to the table. Guided by our facilitator, Aishwarya Gurang, we engaged in structured debates and collaborative problem-solving exercises. The 15-hour commitment over five days proved intensive but incredibly rewarding, combining independent study with engaging group sessions; the group sessions were my highlight.

The Course: An Executive Summary
The week-long program was nothing short of “Transformative”. Here’s the kitchen sink of everything we went through:
Day 1: Technical Foundations: How AI systems work
Our first session challenged my assumptions about neural networks and scaling laws. Simon, a fellow participant with deep technical expertise, clarified these concepts during our breakout sessions. We also discussed the career moves people are considering in the field and what the next five years might look like for software engineers.
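For readers new to scaling laws, the headline result from Kaplan et al. (2020) is that a language model’s loss falls as a smooth power law in its parameter count. The form and constants below are their published fits as I recall them, so treat the numbers as indicative rather than exact:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}
```

Similar power laws hold for dataset size and training compute, which is part of why labs can forecast a frontier model’s loss from much smaller training runs.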
The next activity was based on active recall, to speed up our learning and clarify the concepts among peers. We used this prompt for a roleplay activity done in pairs:
- Teacher: Help the learner understand how GPT-4 was trained by explaining it to them for 2-3 minutes. Recalling information from memory improves your learning too, so don’t use your notes! Then answer their questions.
- Learner: Ask your teacher many detailed questions so you improve your understanding. We encourage polite interruption!
The resources featured an in-depth reading package on how LLMs are trained, which we had to recall from memory. Through their questions, learners deepened their own understanding and sharpened the teacher’s explanations; a toy sketch of what that training loop looks like follows below. Whether you get accepted or not, you can access the curriculum on the AI Safety Fundamentals website.
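To make that reading concrete, here is a minimal, hypothetical sketch of the first stage of GPT-style training: next-token prediction over a text corpus. Everything here (the tiny model, the random “corpus”, the hyperparameters) is a placeholder of my own, not the course’s material or OpenAI’s actual pipeline, and a real GPT also uses a causal attention mask plus later fine-tuning stages that this toy skips.

```python
# Toy sketch of next-token-prediction pretraining (stage one of GPT-style training).
# All names and numbers are illustrative placeholders, not the real GPT-4 setup.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 32

# Stand-in "language model": embeddings -> one transformer layer -> logits over the vocab.
# (A real GPT stacks many layers and applies a causal mask so tokens can't see the future.)
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Real pretraining streams batches from a huge text corpus; random ids stand in here.
    tokens = torch.randint(0, vocab_size, (8, seq_len + 1))
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

    logits = model(inputs)  # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

    optimizer.zero_grad()
    loss.backward()   # backpropagate the prediction error
    optimizer.step()  # nudge the weights to make the observed text more likely
```

After this pretraining stage, models like GPT-4 are further shaped by supervised fine-tuning and reinforcement learning from human feedback (RLHF), which is where much of the alignment work the course focuses on happens.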

Day 2: What AI systems are people building, and why?
The discussions around what drives AI development were eye-opening. We role-played the various stakeholders who make the wheels spin in the AI race: venture capitalists, CEOs, consultancies, and government officials. This revealed the complex interplay of motivations shaping AI’s future. After two rounds of discussion, here were our conclusions.
- Venture capitalists: Provide crucial funding to startups, seeking ROI and transformative impact while balancing regulatory uncertainty and market risks. One more than the other.
- Large corporations: Major customers of AI. In our scenario, the large corp was an investment bank, balancing profit motives with regulatory compliance and workforce impact to gain a competitive advantage.
- Government: Shapes AI policy and regulation while investing in domestic AI development. Pursues national security and economic growth while balancing public benefit and international competition.
- Universities: Conduct fundamental AI research and develop future talent. Struggle with funding constraints while competing for academic excellence.
- AI companies: Balance commercial interests with ethical considerations while competing for talent and resources. Can’t slow down growth or hiring due to competition and VC pressure.
- Consultancies: Bridge the gap between AI technology and business implementation. Help organizations adapt to AI while maintaining relevance in a rapidly changing landscape.
- Philanthropy: Funds AI initiatives focused on societal benefit and risk mitigation. Takes a long-term perspective while measuring the impact of investments.
Day 3: The Promise of AI
Our third-day discussions painted ambitious visions for 2050, including:
- Universal access to high-quality healthcare and education
- Transformed conflict resolution through AI-mediated processes (this was a particularly hot take, sparking intense discussion of both the innovations and the catastrophic risks)
- Intercultural understanding and species-wide communication
- Economic reforms including Universal Basic Income
We explored how AI has evolved from basic tasks to competing at medal level in Math Olympiads and generating human-like podcasts. The reality check came when we discussed how leading researchers now rank advanced AI’s risks alongside nuclear war and pandemics. And for the first time, probably not the last, a scientific breakthrough enabled by AI has been recognized with a Nobel prize.
Half of the 2024 chemistry Nobel went to John Jumper and Demis Hassabis of Google DeepMind in London for developing AlphaFold, a game-changing AI system that makes state-of-the-art predictions of a protein’s structure from its amino-acid sequence; the other half went to David Baker for computational protein design.
The Risks of AI
A particularly memorable debate centered on whether AI development should be paused. Team A argued for an immediate pause until robust safety measures are in place, citing:
- Current difficulty in detecting AI-generated content
- Influence-seeking behaviors already observed in AI systems
- Risks to democratic institutions
- The unprecedented environmental and socio-political risks created by the AI race; an OECD study asking “How much water does AI consume?” drove this home, and the results were shocking
Team B countered that pausing would be counterproductive, arguing:
- Global coordination challenges make pausing impractical.
- Continued development will gradually improve safety measures.
- Potential benefits in biomedicine and other critical fields.
“The argument over stopping or pushing AI development doesn’t need to be extreme; we can slow down until there is public-private agreement on safety and ethics.” This balanced perspective encapsulates the nuanced understanding I was able to develop throughout the course. Refer to The Cost of Caution for a detailed view of how a stop or slowdown of AI development could be implemented.
Another theory: a six-month pause on AI development won’t achieve the intended goals. These systems sit at the precipice of ungodly financial, social, and technological benefits for whoever builds them first. One take was to let development continue on narrow Transformative AI tools with limited access: specialized deployments of these models in isolated environments limit risk and stand a better chance of AI alignment.
“It doesn’t matter if we pause or not, more research on safety/alignment is needed”
This sentiment captured our cohort’s shared understanding after the debate. We realized that regardless of AI’s development trajectory, ensuring its safety requires sustained effort. A crucial challenge remains: aligning the various “actors,” each with their own intrinsic and extrinsic incentives, toward sustainable AI development.
Completing the Course: What’s next?
By the end of the week, I had deepened my understanding of Transformative AI, not just as a buzzword but as a premonition. I also gained actionable insights into how I could contribute meaningfully to the field. More importantly, I left with a renewed sense of purpose.
If there’s one thing this course reinforced, it’s that building safer AI systems is not just a technical challenge—it’s a societal mission requiring collective action. Whether you’re an engineer, policymaker, or educator, there’s room for everyone to contribute.
As someone passionate about open-source development, I’m brainstorming projects that could contribute to responsible AI development—whether through tools for model evaluation or platforms that democratize access to safe AI technologies.


How to get involved!
For those considering joining a BlueDot Impact course after reading this blog, I recommend the following complementary tracks:
- AI Alignment: Tackling technical challenges like robustness and value alignment.
- AI Governance: Shaping policies that ensure responsible development and deployment of advanced AI systems.
That’s about it. Thanks for reading, live in the mix.