More than 20 years ago, I was in elementary school.
At that time computers were slow (though some software surprisingly ran faster!), and the internet was just beginning to permeate every household, not just the foyers of nerds. Google was already the leading search engine, but for me and my friends, another website changed everything: wikipedia.org.
Wikipedia was a game-changer for anyone wanting to explore the world on the web, in real time, and for free! It was an unbelievable treasure trove of information, just a click away, with new content added daily by an incredible community of people from all over the world.
My friends and I were thrilled to explore and learn new things every day.
Everyone. Except our teachers.
I distinctly recall the skepticism our teachers harbored toward both Google and Wikipedia. They insisted that these online resources would never replace “official encyclopedias” (the ones bound in books), and that a community built on non-professional contributors could never produce content as good or as accurate.
They were, by a vast margin, wrong.
Wikipedia emerged as a masterpiece, demonstrating how a community could effectively collaborate to build the most complete, up-to-date, and freely accessible encyclopedia in the world.
This memory of my teachers’ posture makes me think of the current “AI haters” of the LLM (Large Language Model) era. Even if another “AI winter” comes (and I believe it might, soon), LLMs have already taken the world by storm and irrevocably changed many fields: writing, translation, coding, content creation, research, and more.
However, there is a big difference between the revolution of 20 years ago and the current one.
The big difference is quite simple but crucial: cognitive bias.
Wikipedia was meticulously built over years by a robust community of experts and enthusiasts who collaborated to establish a solid foundation of knowledge. If a mistake was made, it was almost instantly corrected by another member, and disruptive individuals were swiftly banned. For the Wikipedia community, nothing was more important than facts.
For LLMs, the situation is very different. LLMs are trained on every available source of information: not only factual or trusted sources, but heavily biased (and false) ones as well. This is often referred to as data bias or algorithmic bias, because the models inherit and amplify the bad patterns in their training data. Moreover, an LLM will never decline to answer a complex question it does not know the answer to. Its fundamental architecture is based on probabilistic prediction, so it will always produce an answer, but that answer can be misleading information, a hallucination, or something purely unethical.
Just as information extracted from Wikipedia should always be taken with a grain of salt (even today), information derived from LLMs should be taken with a large salad bowl of salt.
So, how do we navigate learning in this new era? I think you must embrace and develop these core principles:
- Learn, don’t just read. Actively engage with the information. Don’t passively consume it.
- Think, don’t just absorb. Process the information, question it, and connect it to your existing knowledge.
- Develop a critical view; don’t blindly accept information. Always evaluate the source, context, and potential biases.
These aren’t just abstract ideas; they represent the essential competencies for thriving alongside LLMs. For instance, consider a developer interview scenario.
As an interviewer, instead of avoiding or prohibiting LLMs during a coding interview, embrace them! But evaluate the candidate not solely on their raw coding skills, but on their focus and critical approach to coding with an LLM.
For example, pose a significant, difficult technical question at the beginning of the interview. The candidate can ask the interviewer questions and can use an LLM during the coding test.
If the candidate continuously prompts the LLM and copy-pastes code without demonstrating learning, thinking, or developing a critical view of the LLM’s output (a “vibe-coding-monkey” style), then the person might not be a good engineer.
However, if the candidate develops a critical perspective on what the LLM returns, fact-checks some elements of its responses, and learns something about this difficult problem during the interview, it indicates a strong fit for an engineering position.
Whether we like it or not, we are undeniably in the midst of an information revolution. However, this does not mean LLMs can be blindly trusted, nor does it mean you should cease to learn deeply.
LLMs are powerful tools, and like all powerful tools, they must be used with great care. They are excellent starting points for exploring new concepts, brainstorming, and learning.
LLMs are copilots, but not end solutions.