Machine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI
By Sebastian Raschka. Free to read. Published by No Starch Press.
Copyright © 2024-2025 by Sebastian Raschka.
Machine learning and AI are moving at a rapid pace, and researchers and practitioners constantly struggle to keep up with the breadth of new concepts and techniques. This book provides bite-sized bits of knowledge for your journey from machine learning beginner to expert, covering topics from a variety of machine learning areas. Even experienced machine learning researchers and practitioners will encounter something new to add to their arsenal of techniques.
📘 Print Book:
📄 Read Online:
What People Are Saying
“Sebastian has a gift for distilling complex, AI-related topics into practical takeaways that can be understood by anyone. His new book, Machine Learning Q and AI, is another great resource for AI practitioners of any level.” –Cameron R. Wolfe, Writer of Deep (Learning) Focus
“Sebastian uniquely combines academic depth, engineering agility, and the ability to demystify complex ideas. He can go deep into any theoretical topics, experiment to validate new ideas, then explain them all to you in simple words. If you’re starting your journey into machine learning, Sebastian is your guide.” –Chip Huyen, Author of Designing Machine Learning Systems
“One could hardly ask for a better guide than Sebastian, who is, without exaggeration, the best machine learning educator currently in the field. On each page, Sebastian not only imparts his extensive knowledge but also shares the passion and curiosity that mark true expertise.” –Chris Albon, Director of Machine Learning, The Wikimedia Foundation
“Sebastian Raschka’s new book, Machine Learning Q and AI, is a one-stop shop for overviews of crucial AI topics beyond the core covered in most introductory courses…If you have already stepped into the world of AI via deep neural networks, then this book will give you what you need to locate and understand the next level.” –Ronald T. Kneusel, author of How AI Works
Table of Contents
Part I: Neural Networks and Deep Learning
- Chapter 1: Embeddings, Latent Space, and Representations
- Chapter 2: Self-Supervised Learning
- Chapter 3: Few-Shot Learning
- Chapter 4: The Lottery Ticket Hypothesis
- Chapter 5: Reducing Overfitting with Data
- Chapter 6: Reducing Overfitting with Model Modifications
- Chapter 7: Multi-GPU Training Paradigms
- Chapter 8: The Success of Transformers
- Chapter 9: Generative AI Models
- Chapter 10: Sources of Randomness
Part II: Computer Vision
- Chapter 11: Calculating the Number of Parameters
- Chapter 12: Fully Connected and Convolutional Layers
- Chapter 13: Large Training Sets for Vision Transformers
Part III: Natural Language Processing
- Chapter 14: The Distributional Hypothesis
- Chapter 15: Data Augmentation for Text
- Chapter 16: Self-Attention
- Chapter 17: Encoder- and Decoder-Style Transformers
- Chapter 18: Using and Fine-Tuning Pretrained Transformers
- Chapter 19: Evaluating Generative Large Language Models
Part IV: Production and Deployment
- Chapter 20: Stateless and Stateful Training
- Chapter 21: Data-Centric AI
- Chapter 22: Speeding Up Inference
- Chapter 23: Data Distribution Shifts
Part V: Predictive Performance and Model Evaluation
- Chapter 24: Poisson and Ordinal Regression
- Chapter 25: Confidence Intervals
- Chapter 26: Confidence Intervals vs. Conformal Predictions
- Chapter 27: Proper Metrics
- Chapter 28: The k in k-Fold Cross-Validation
- Chapter 29: Training and Test Set Discordance
- Chapter 30: Limited Labeled Data
Support the Author
You can support the author in the following ways:
- Subscribe to Sebastian’s Substack blog
- Purchase a copy on Amazon or No Starch Press
- Write an Amazon review