This year, my focus is on exploring the intersection of AI, Machine Learning, and Neuroscience. I'll be diving deep into reading material and codebases to better understand both natural and artificial cognition. The lists below cover the books, textbooks, and code I'm currently learning from; if you're interested in AI beyond the hype and actually care about the future of the field, you might want to learn from them too.
I'm thinking about pursuing a Master's in AI/Neuroscience but haven't settled on it yet, so I compiled the most-cited books and textbooks from top UK/US schools to get a sense of what a Master's program would expect me to learn and how I could meaningfully contribute to the field.
It's pretty easy to get lost in the countless papers you'd have to read to catch up with the latest research and engineering in AI right now, so lately I default to the basics and read textbooks over papers. If you really want to stay up to date with the latest research, it should be something you personally care about; in that case, you might want to head to Emergent Mind and check out the papers people are sharing on X, Reddit, HN, and elsewhere, because the space is moving so fast that even this post may be outdated as you read it. Still, I wouldn't advise starting with papers: it's a lot of work, and you'll probably get lost in a sea of noise.
## Books
I'd advise you to start with the following books: they're a great entry point that will whet your appetite for the field and give you a sense of the big picture and where we're all heading.
## Textbooks
Once you have a grasp of the big picture, you might want a glimpse of the technical side. These books require minimal prerequisites, focus on concepts rather than heavy mathematics, and are succinct, which makes them great for a quick read.
Next, for those with some programming experience and a basic math background: these books are a great starting point for understanding the technical aspects of AI and building solid intuition before moving to the advanced level.
Once you're at least comfortable with the math of machine learning[^1], you might be interested in tackling these, which I found highly recommended across many Master's and PhD curricula. I also think some AI/ML programs could benefit a lot from borrowing inspiration from classic neuroscience textbooks, because this cross-pollination has produced fruitful results in the past. Without going into specifics, from interpretability to reinforcement learning, there's much to learn from neuroscience.
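To give one concrete taste of that cross-pollination, here's a minimal temporal-difference (TD) learning sketch; the TD error it computes is the "reward prediction error" signal that dopamine neuron recordings famously parallel. The toy three-state chain and all constants below are my own illustration, not from any particular textbook.

```python
# Tabular TD(0) on a tiny deterministic chain: state 0 -> 1 -> 2 (terminal),
# with a reward of 1.0 for reaching the final state. The TD error `delta`
# is the "reward prediction error" term.
states = [0, 1, 2]
rewards = {2: 1.0}            # reward only on reaching the terminal state
V = {s: 0.0 for s in states}  # value estimates, initialized to zero
alpha, gamma = 0.1, 0.9       # learning rate and discount factor

for episode in range(500):
    s = 0
    while s != 2:
        s_next = s + 1
        r = rewards.get(s_next, 0.0)
        # TD error: how much better (or worse) things went than predicted
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        s = s_next

print(V)  # V[1] converges toward 1.0, V[0] toward gamma * V[1] = 0.9
```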
One small thing to note: instead of the classic Pattern Recognition and Machine Learning[^2] mentioned everywhere, I prefer Deep Learning[^3], the newer book by the same author and his son, which is more up to date and covers more recent research that better reflects the state of the art.
## Code
Besides reading material, I also think it's important to get hands-on coding experience with the codebases and frameworks currently used in industry. I've compiled a list of some of the most popular projects in the field, the ones mentioned in many job postings and used by top AI labs. These are the ones you should be looking at (see the short sketch after the list for a taste of the pattern they all share):
- PyTorch by Meta
- XLA & JAX by Google DeepMind
- Transformers[^4] by Hugging Face
- MLX[^5], MLX examples & axlearn by Apple
- tinygrad by tiny corp
- ggml & llama.cpp by Georgi Gerganov
- Triton by OpenAI[^6]
- CUDA by Nvidia & GPU Puzzles by Sasha Rush
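To see why these projects all feel related, here's a minimal sketch of the training-loop pattern that nearly all of them express in some form: forward pass, loss, backward pass, parameter update. The toy model, data, and hyperparameters are arbitrary, just enough to make it runnable with PyTorch installed.

```python
# The canonical PyTorch training loop: model -> loss -> backward -> step.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(32, 4)          # a batch of 32 random 4-d inputs
y = x.sum(dim=1, keepdim=True)  # toy target: the sum of the features

for step in range(200):
    optimizer.zero_grad()       # clear gradients from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()             # autograd computes dloss/dparams
    optimizer.step()            # gradient descent update

print(loss.item())  # should be close to zero after training
```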
And also some educational implementations (a tiny autograd sketch in the spirit of micrograd follows the list):
- nanoGPT, micrograd, & llm.c by Andrej Karpathy
- x-transformers by Phil Wang (aka lucidrains)
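In the spirit of micrograd (this is my own compressed sketch, not Karpathy's actual code), here's the core idea behind all of these autograd engines: every value remembers how to pass gradients back to its inputs, and `backward()` walks the graph in reverse topological order.

```python
# A scalar autograd engine: each Value records a closure that pushes
# gradients back to the values it was computed from.
class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological order: children before parents, then walk in reverse
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a          # c = 2*3 + 2 = 8
c.backward()
print(a.grad, b.grad)  # dc/da = b + 1 = 4.0, dc/db = a = 2.0
```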
If you're not sure where to start, I'd recommend tinygrad, MLX, and MLX examples, which are minimalistic and easy to understand; you can also run them on your own hardware if you own a Mac.
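If you do have a Mac, here's roughly what getting started with MLX looks like (assuming `pip install mlx` on Apple silicon; the toy loss and learning rate are arbitrary). Note the JAX-style function transformations and lazy evaluation:

```python
# Gradient descent on a toy quadratic loss with MLX.
import mlx.core as mx

def loss(w):
    # toy loss: (w - 3)^2 summed over elements, minimized at w = 3
    return ((w - 3.0) ** 2).sum()

w = mx.array([0.0, 1.0, 2.0])
grad_fn = mx.grad(loss)        # function transformation, as in JAX

for _ in range(100):
    w = w - 0.1 * grad_fn(w)   # plain gradient descent
mx.eval(w)                     # MLX is lazy: force the computation

print(w)  # all entries converge toward 3.0
```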
If you build a strong foundation with all of the above, you might find yourself in high demand in the industry. If you have any questions or suggestions for this post, feel free to reach out to me on X, or suggest edits on GitHub.
Happy learning!
