Grace Hopper Celebration 2025 Presentation
Demystifying the mathematics powering deep learning
Deep learning often feels like magic: systems that can recognize faces, translate languages, and generate human-like text. But behind that magic lies mathematics, the true engine driving every neural network.
This repository contains the presentation materials from my talk at Grace Hopper Celebration 2025, where I demystify the math that powers deep learning and show how concepts from linear algebra, calculus, probability, and optimization come together to make AI work.
Through intuitive visuals and real-world examples, this presentation explores:
- Linear Algebra: How vectors and matrices move data through neural network layers
- Calculus: How derivatives and gradients help models learn through backpropagation
- Probability: How probabilistic reasoning guides predictions and uncertainty quantification
- Neural Networks: Putting it all together to understand how deep learning systems function
This is not about solving equations; it's about understanding the "why" behind the algorithms.
```
├── presentation/
│   └── GHC2025_Math_Behind_Deep_Learning.pptx
├── sections/
│   ├── 01_linear_algebra.pdf
│   ├── 02_neural_networks.pdf
│   ├── 03_calculus.pdf
│   └── 04_probability.pdf
├── resources/
│   └── references.md
└── README.md
```

Attendees will walk away with:
- Conceptual Understanding: A clear grasp of how mathematics shapes model behavior
- Practical Confidence: The ability to interpret AI systems more effectively
- Actionable Insights: Knowledge for building smarter, fairer, and more explainable AI solutions
Linear Algebra
- Vectors and matrices as data representations
- Matrix multiplication in forward propagation (see the sketch after this list)
- Dimensionality and transformations
- Weight matrices and their role in learning
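To make the matrix-multiplication view concrete, here is a minimal NumPy sketch of one dense layer's forward pass; the batch size, feature counts, and random weights are illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(42)

# A batch of 4 input vectors, each with 3 features (sizes made up for illustration).
X = rng.normal(size=(4, 3))

# The weight matrix maps 3 input features to 5 hidden units; the bias adds one value per unit.
W = rng.normal(size=(3, 5))
b = np.zeros(5)

# Forward propagation through one dense layer is a single matrix multiplication:
# every row of X is transformed from 3 dimensions to 5 at once.
H = X @ W + b
print(H.shape)  # (4, 5)
```

One multiplication handles the whole batch, which is why dimensionality bookkeeping (3 in, 5 out here) matters so much in layer design.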
Neural Networks
- Architecture fundamentals
- Forward propagation
- Activation functions
- Layer compositions and deep networks (see the sketch after this list)
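As a sketch of how these pieces compose, the toy two-layer network below stacks a linear map, a ReLU activation, and a second linear map; all layer sizes and weights are hypothetical.

```python
import numpy as np

def relu(z):
    # Elementwise nonlinearity: without it, stacked linear layers
    # would collapse into a single linear map.
    return np.maximum(0.0, z)

def forward(x, params):
    # Deep networks are compositions: linear -> activation -> linear.
    W1, b1, W2, b2 = params
    h = relu(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2      # output layer (raw logits)

rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 8)), np.zeros(8),   # layer 1: 3 -> 8
          rng.normal(size=(8, 2)), np.zeros(2))   # layer 2: 8 -> 2
x = rng.normal(size=(1, 3))
print(forward(x, params).shape)  # (1, 2)
```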
Calculus
- Derivatives and gradients
- Chain rule in backpropagation
- Gradient descent optimization (a worked example follows this list)
- Learning rates and convergence
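The worked example below applies these ideas to the simplest possible model, fitting y = w·x by gradient descent; the data, learning rate, and step count are made up to keep the sketch self-contained.

```python
import numpy as np

# Toy data generated from y = 2x, so the optimum is w = 2.0.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0     # initial guess
lr = 0.01   # learning rate: too large diverges, too small crawls

for step in range(200):
    y_hat = w * x                      # forward pass
    loss = np.mean((y_hat - y) ** 2)   # mean squared error
    # Chain rule: dL/dw = dL/dy_hat * dy_hat/dw = 2 * (y_hat - y) * x, averaged.
    grad = np.mean(2.0 * (y_hat - y) * x)
    w -= lr * grad                     # gradient descent step

print(w)  # converges toward 2.0
```

Backpropagation in a deep network is this same chain-rule computation, repeated layer by layer from the loss back to every weight.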
Probability
- Probabilistic predictions
- Loss functions and likelihood (see the sketch after this list)
- Uncertainty quantification
- Bayesian perspectives in deep learning
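To connect loss functions to likelihood, here is a small sketch of softmax and cross-entropy; the logits and the true class label are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical network outputs (logits) for a 3-class problem.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # a proper probability distribution over classes

# Cross-entropy is the negative log-likelihood of the true class,
# so minimizing the loss is the same as maximizing likelihood.
true_class = 1
nll = -np.log(probs[true_class])
print(probs, nll)
```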
Aditya Hajare
Senior Software Architect, Enterprise Innovation - AI/ML
Fannie Mae
- IEEE Senior Member | AI Policy Committee 2025
- Co-Founder & CTO, Infinict (Web3 Fintech)
- 4 Patents | 15+ Publications
- Python Software Foundation Member
- Passionate about education equity and AI explainability
Grace Hopper Celebration 2025
The world's largest gathering of women and non-binary technologists
Session: The Math Behind the Magic: Understanding the Role of Mathematics in Deep Learning
Date: November 6, 2025
Location: Chicago
For those interested in diving deeper:
- Deep Learning by Goodfellow, Bengio, and Courville
- 3Blue1Brown's Neural Networks video series
- Mathematics for Machine Learning by Deisenroth, Faisal, and Ong
- Anthropic's Prompt Engineering Documentation
Found an error or have suggestions for improvement? Feel free to:
- Open an issue
- Submit a pull request
- Reach out directly
This repository is licensed under the MIT License - see the LICENSE file for details.
Special thanks to:
- Grace Hopper Celebration organizing committee
- The AI/ML community for continuous inspiration
- Everyone working to make AI more accessible and understandable
For questions, collaboration opportunities, or speaking engagements:
- LinkedIn: https://www.linkedin.com/in/itis-aditya-mehra/
⭐ If you find this helpful, please consider starring this repository!
Last Updated: November 2025