This is Part I of a series on personal intelligence. Part II is coming soon, with a focus on implementation learnings.
A few months ago, I asked Google and Apple for a data takeout. You should too! It’s easy. Google sent me a download link within half an hour; Apple took its time, three days to be exact.
I spent hours wandering down memory lane and found myself repeatedly marveling, “I can’t believe they keep track of this.” Even so, I came away more confused than enlightened. Most of it simply wasn’t legible to humans: device login histories, activity timestamps, miscellaneous IDs… Not surprising, since the data’s raison d’être is to serve companies, not us, so legibility to machines is all that’s needed.
I felt the physicality of my data, the shape of it. It’s there, it’s real, and it’s tables and JSON files. This drilled in the idea that a handful of companies somehow know more about me than I do.
You might ask, did I learn anything about myself? I sure did. Google Translate reminded me how fickle I was with milk as I roamed across continents back in 2022.
And I was indeed struggling while traveling solo in Japan.
The sheer volume of personal data tech companies mine is now a cliché. It’s almost comforting that we live in a capitalist world, where personal data mostly just feeds ads (though data in aggregate does much more, such as LLM training).
Last month, I was looking for a pillow to cure my neck pain as part of my ongoing efforts to eradicate health annoyances. Naturally, pillow ads started to pop up. Unwise to trust, of course, even though I clearly needed help making a decision after weeks of research.
There is nothing inherently wrong with matching products to customers. The problem is incentive misalignment: I cared about neck health, pillow companies cared about my dollars, and Amazon cared about sellers buying site real estate. We trust a single friend’s review over thousands of strangers’ ratings for exactly this reason.
How can we change that? Extract the data, and use it somewhere else.
Our digital footprints have been accumulating for years. The bottleneck to using them is a tool that understands the language. You likely already know where this is headed.
Imagine if you could dump all of your personal data in one place: data takeouts from tech companies, journals, message history going back 10 years, notes, music, movies, books, ChatGPT history… You get an intelligence that knows everything about you, your forgotten past, your present pain, your future aspirations. Imagine what it could do.
It’s an inversion of power dynamics. The difference between an honest friend and a digital panopticon is who the tech answers to. Today’s algos are built to predict, with unnerving accuracy, the next product you are likely to buy, the next video you’ll watch, the next outrage that’ll hold your and others’ attention long enough to go viral. Any accurately predicted behavior is a monetizable event.
Personal intelligence should be about you. At the bare minimum, it can answer questions like “Who was I 5 years ago?” “Why do I keep attracting toxic people and what does that say about me?” “Why can’t I quit drinking even though I really want to?” “What do I truly enjoy and how can I do it more often?”
And that’s just the beginning. Instead of predicting what you will do, it helps discover what you could do, making life ever more expansive. Potentiation over prediction.
If an intelligence like this were to solve my neck pain, life would be so much easier! I would see genuine recommendations instead of paid ads. I most certainly would’ve found an ideal pillow by now, instead of still being on the market after nearly two months (I am about to return another pillow, again).
It could also be a diagnostic partner. It might say: “I’ve noticed your neck pain flares up on Wednesdays, which correlates with high screen time and low sleep score. Let’s explore a 10-minute stretching routine, a blue light filter for your monitor, and maybe that breathwork app you downloaded six months ago but never opened.”
Beyond the problem at hand, it can make other educated and well-intentioned hypotheses about my life. For one, if I care about ergonomic pillows, I might care about similar upgrades like a better keyboard or chair; I might be curious about biohacking; I might be experiencing pain elsewhere and can benefit from physical therapy.
Let’s get even crazier. What if we can be fed just the right Substack essay to open our lives? The right hobby we can fall in love with? The right people to befriend? The right city to live in? The right relationship advice? The right health tip?
It’s a very possible future, in which everyone has a personalized serendipity machine for intentional living.
If we want our blind spots swept, intelligence cannot be passive.
What does non-passive intelligence look like?
You’re feeling creatively blocked. You’ve been staring at a blank page for three hours, frustrated and mildly angry. Just then, you get an iMessage, “It’s 75 °F and sunny at the botanical gardens right now. I’ve noticed your most insightful journal entries were written on days after a visit. Your best ideas often follow new sensory input. Maybe a change of scenery would help?”
Or you are about to get coffee with an old friend, and you see a note right before, “Last time you spoke, Alex was stressed about a project at work. You might want to ask how it turned out. FYI he also hates his boss.”
Under the hood, competent and trustworthy agents would work 24/7 with reliable sources of truth, diligently recorded over a decade and updated in real time. Some variations of AI companions like Friend will record our lives and provide new categories of data. Dystopian in some ways, yes, but promising in many others. We will figure out the best ways to prepare data specifically for LLMs across everything, and users will interact with them in ways that feel natural, organic, and uplifting.
The current paradigm is handicapped by both poor data integration and the low quality of user-generated queries. If we see LLMs as problem-solving machines, they now have limited problems to solve and a limited solution space.
Personal AI should be more than an assistant in the chat, passively waiting for us to ask the right questions. They should discover investigative entry points on their own and have access to everything they need, evolving into an ambient presence that proactively tells you things you’d never think to ask but actually care about. To quote Julian Jaynes:
“We cannot be conscious of what we are not conscious of… It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it.”
True ambience would be the antithesis to the noisy world we live in today.
For the last three months, a friend and I have been tinkering in this problem space, working on a personal intelligence experiment we named Hue. As we used it ourselves and shared with friends, a few types of outputs stood out.
By actionable insights, I mean outputs that tell you what to do and how to do it; or better yet, do it for you.
Health data is the best example of immediately useful data, as it provides a human-legible feedback loop for our physical wellbeing, and it directly motivates behavioral changes. Apple Watch tricks users into closing exercise rings. Oura sleep scores help tune circadian rhythms. Glucose monitors let you know in numbers how terrible your diet is, even though you probably know it already. In fact, health is the single most prominent example where consumers pay to get data collected.
Ideally, personal intelligence can do the same for other aspects of our lives and in turn calibrate behaviors.
I, for one, would like to receive a text from it that reads, “Hey, it’s been 3 weeks since that fight with your mom. I know you love her but also find it difficult to tell her how you feel. Maybe it’s a good idea to send her this: ‘Mom, I miss you. I miss talking to you. Can I call?’” And if it’s hooked up to iMessages, I can just one-click send. Boom.
We are already seeing emergent insights like these from our experiment. It connected to my friend’s WhatsApp and helped him realize he had been initiating fewer conversations with his little brother, who is stressed about college. For me, Hue found that I used to enjoy evening walks for mental clarity and gently checked if I had kept up with the habit.
Small, inconspicuous nudges like these cut through absentmindedness and surface answers we didn’t know we needed, since we all are, as T. S. Eliot put it, “Distracted from distraction by distraction.”
As the market for connection has grown from dating apps for every niche imaginable to Character AI (I know this is a gross generalization, but just bear with me as I make a point), the thesis has stayed the same: People are lonely and long to be understood. Not a new problem, but perhaps we can have a new solution.
One thing I noticed while building Hue was that the “ouch” and “aha” moments often came from a place of unexpected honesty. I was being told things deeper and sharper than what my friends could grasp or say out loud (e.g. it emphatically told me to take notice of my “self-limiting beliefs and imposter-style doubt”). It has no constraints, no ego to protect, and no feelings to spare, its only source of truth being my life history. The results are both jarring and profoundly validating.
With every prompted or unprompted query, it was impossible to predict what the agent would uncover from our data. It became obvious that the element of surprise is the true hallmark of understanding and resonance.
Everyone can and should have a living, time-aware model of their growth, relationships, and character, which can then evolve into an intelligent library.
Imagine a wiki just for you, with rich, contextual, and accurate knowledge about your past, a personal time machine (as a side note, this not only applies to individuals, but also groups, communities, or even institutions). For Sherlock fans out there, remember the blackmail artist Magnussen’s mind palace? We could have our own!
We ran a contained experiment on a group chat by feeding the agent a simple for loop. After 16 minutes, the agent delivered a chronology of ~30k words from 5 months of chat history (~400k tokens, a ~7.5% compression rate). Some snippets:
As January 2025 progressed, an extended thread emerged around navigating altered states—specifically how experiences with mind-altering substances offered both existential challenges and new textures for meditation, which Rebecca described as “splash of color” added to an otherwise blank state. All three engaged in teasing out the boundaries between mysticism, agency, and materialist self-concept.
The March 23–26 window was marked by—
Nuanced discussion about the limitations of language to convey internal states, with deep respect for different approaches to emotional sharing.
Emotional lows met with support and normalizing check-ins.
Ongoing evolution of logistical and social world-building—plans for Sunday hangs… updates on house drama
Through it all: a profound sense of mutual reliance, comfort with difference, and delight in building and processing life’s highs and lows together.
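The “simple for loop” above can be sketched roughly as follows. This is not Hue’s actual implementation, just a minimal illustration of the pattern: split months of chat history into token-bounded chunks, then summarize each chunk in sequence, carrying the running chronology forward as context. The `summarize` callable stands in for a real LLM call, and every name and budget here is an assumption.

```python
def chunk_messages(messages, max_tokens=8000):
    """Greedily group messages into chunks under a rough token budget."""
    chunks, current, budget = [], [], 0
    for msg in messages:
        cost = max(1, len(msg) // 4)  # crude ~4 chars/token estimate
        if current and budget + cost > max_tokens:
            chunks.append(current)
            current, budget = [], 0
        current.append(msg)
        budget += cost
    if current:
        chunks.append(current)
    return chunks

def build_chronology(messages, summarize):
    """The for loop: summarize each chunk, feeding recent summaries back in
    as context so the chronology stays continuous across chunk boundaries."""
    chronology = []
    for chunk in chunk_messages(messages):
        context = "\n".join(chronology[-3:])  # last few summaries for continuity
        chronology.append(summarize(context, chunk))
    return "\n\n".join(chronology)
```

The carried-forward context is what lets a per-chunk summarizer produce one coherent narrative instead of disconnected fragments.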
Well-built libraries not only serve individuals as legible records, but also catalyze LLM memory toward persistence and holistic awareness (more on this later). We were pleasantly surprised at the output quality even with a minimalistic implementation.
Some time ago, people fell in love with the concept of the “year in review”. I don’t recall which app started the trend, but now everyone does it. Every December, my feeds are populated with shared snapshots, almost as if people are surprised to find out their top artist is, in fact, their top artist.
These statistics are reflection, closure, narrative, and identity all in one. Brilliant. People anxiously wait for the verdicts and can’t resist the urge to share once they are out.
The top comment suggests personal statistics can nudge behaviors.
The output from personal intelligence doesn’t need to be directly useful. It can just be interesting, and that’s plenty valuable in and of itself. Our WhatsApp agent came up with these:
“Your digital circadian rhythm is absolutely wild. I saw a 3pm message spike (636 messages) right when most people are in afternoon meetings, and then this sustained nocturnal marathon from midnight to 6am where you’re sending 2,037 messages total. The 3am slot alone has 452 messages.”
“When she has your complete attention, your thumbs move faster than thought, switching between languages mid-sentence like breathing.”
Personal statistics are just one of many possibilities. The quality, ingenuity, and wow factors will only accelerate as datasets scale and the way LLMs engage with them become more expansive. The true power is the ability to transform raw materials into artifacts for personal myth-making that satiate our endless curiosity and quest for interestingness.
I gave my friend Raymond a rundown of our work on Hue over brunch, and we ended up in an extensive discussion of adjacent use cases. One that he mentioned continues to fascinate me: when a personal intelligence has relevant data, it can act as your shadow, interacting with others whose questions your data can answer. If you are a foodie with a reputation, you could charge for recommendation requests at a price point too low to justify your own effort but high enough to justify putting your AI on the job. Plus, you’d get to gatekeep what’s public and what isn’t.
I can’t help but think that we are just scratching the surface of what is possible, and our imagination is not yet imaginative enough.
Also, Part II is coming soon. It will go into details on the technical and product learnings from building Hue, covering context retrieval, memory, agent architecture, evals and more.
If you’ve made it this far, thank you! I encourage you to subscribe, particularly if one of these is true: 1) You want to read Part II. 2) You are interested in trying Hue as we go into public beta soon. We will share here. 3) You find my writing remotely interesting. I will, hopefully, publish more on a spectrum of topics. I’ve been trying to get myself to edit the mountain of drafts I have, which my body continues to physically resist. Alas.
See you soon my friends.