The Next Frontiers of Search
I have spent quite some time working on (and daydreaming about) search and mapping human knowledge. Here are what I see as the big themes in this area for the coming years.
Search engines today
Today, a vast amount of publicly available content can be looked up within milliseconds.
In addition to the ten blue links on the results page, search engines also provide richer results: instant answers, knowledge panels, maps, images, and more.
It is an impressive feat of engineering, and it has been great for answering simple questions and looking up facts. But what lies ahead, if we dare to imagine the possibilities?
No Search at All: Intent Understanding
Going to google.com to get the piece of information you are looking for can sometimes feel like a hassle. A few clicks each time, plus the context switching, add up.
A more desirable experience is to have the knowledge delivered to you, in whatever app or workspace you are in, so that you can get the information and resume what you were doing.
You already see patterns like this today, where there are many entry points to the search engine: the browser address bar, OS-level search, voice assistants, and search boxes embedded in other apps.
One day, we shall see a reincarnation of Clippy. One day.
The idea is that you want the information the moment you need it. We will find more and more ways to eliminate the friction that stands between you and what you are searching for.
This brings us to our next point - searching across apps.
Information Layer Across Apps & Contexts
Everyone has experienced some version of trying to find a document or message across emails, instant messages, cloud storage, notes, and so on.
At the time of writing, this is still largely the case, though a sea of big companies and startups alike are working on solving it. It is a rather straightforward problem: integrate with the various data sources and index the content.
So it is probably safe to expect this to be less of an issue over time.
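To make that "straightforward" claim concrete, here is a minimal sketch in Python. The connectors are made up, standing in for real integrations (Gmail, Slack, Dropbox, and friends), and a toy in-memory inverted index stands in for a real search backend:

```python
from collections import defaultdict

# Hypothetical connectors: real ones would call the Gmail, Slack,
# Dropbox, etc. APIs and normalize results into (doc_id, text) pairs.
def fetch_emails():
    return [("email:1", "Q3 budget draft attached")]

def fetch_chat_messages():
    return [("chat:42", "final budget numbers look good")]

# Toy in-memory inverted index: token -> set of document ids.
index = defaultdict(set)
for connector in (fetch_emails, fetch_chat_messages):
    for doc_id, text in connector():
        for token in text.lower().split():
            index[token].add(doc_id)

def search(query: str) -> set:
    """Return ids of documents containing every token in the query."""
    hits = [index[token] for token in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("budget"))  # -> {'email:1', 'chat:42'}
```

The hard parts in practice are authentication, freshness, and permissions, not the indexing itself.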
We will always be looking for shortcuts to get what we want, faster. This is part of a much bigger trend of automation and connected apps, which is not limited to search.
But search engines, as general-purpose intent-understanding machines, have a big role to play in this. We can expect to see more deep links and actions become accessible from a central interface. Alexa playing your favorite podcast is only the beginning.
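As a rough illustration of intent understanding as dispatch, here is a toy sketch. The intents, URL schemes, and handlers below are all invented for the example; a real system would use a learned intent classifier rather than regular expressions:

```python
import re

# Invented intent patterns mapped to invented deep links.
INTENTS = [
    (re.compile(r"play (?P<title>.+)"), "podcasts://play?q={title}"),
    (re.compile(r"email (?P<person>.+)"), "mailto:{person}"),
    (re.compile(r"navigate to (?P<place>.+)"), "maps://directions?to={place}"),
]

def route(utterance: str) -> str:
    """Map a natural-language request to an app action, else fall back to web search."""
    for pattern, link_template in INTENTS:
        match = pattern.match(utterance.lower())
        if match:
            return link_template.format(**match.groupdict())
    return "https://www.google.com/search?q=" + utterance

print(route("play my favorite podcast"))  # podcasts://play?q=my favorite podcast
```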
Querying Huge Language Models
When you have a question, the most natural thing to do for most people is to ask someone.
When Google came along and became a verb, the convention became to just ask the Googs. Sure, there are still loads of questions it can't answer that a real person can. But it's good enough. And it's instant.
That could very well change in the near future. Generative language models trained on a huge corpus of human-generated data from the internet can spit out answers that are often as good as a real human's response.
At the time of writing, OpenAI recently released an API to its latest GPT-3 model. From all the anecdotes I have read, it is pretty darn good! Its general-purpose, human-like responses seem good enough to be usable.
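For a flavor of what querying such a model looks like, here is a minimal sketch using the openai Python package as it exists at the time of writing; the engine name and sampling parameters are illustrative, not recommendations:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # access is gated behind the beta program

# Ask GPT-3 a question via the completions endpoint. "davinci" is the
# largest engine available; the parameters here are illustrative.
response = openai.Completion.create(
    engine="davinci",
    prompt="Q: Why is the sky blue?\nA:",
    max_tokens=64,
    temperature=0.2,  # keep sampling conservative for factual-sounding answers
    stop=["\nQ:"],    # stop before the model invents a follow-up question
)

print(response.choices[0].text.strip())
```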
It doesn't seem too far off for a language model to do what the search engine currently does, provided that it can clear a few bars, such as staying up to date and getting its facts right.
When that happens, perhaps we wouldn't even call it a search engine. Would "oracle" seem more apt?
Indexing the Unindexed Knowledge
Both search engines and large language models rely on data crawled from the web. This means the scope of their knowledge and capabilities is limited to that dataset.
In particular, they are blind to anything that never made it onto the open web: content behind paywalls and logins, data sitting in private silos, and know-how that lives only in people's heads.
Given these limitations, the exciting areas of development lie in bringing that unindexed knowledge online, whether by opening up siloed data or by making it easier for people to write down what they know.
That covers the knowledge side of things. But so much of the human condition isn't just about knowing...
Indexing Perception, Emotions, and Experiences
What we know accounts for just a small fraction of our conscious experience. There are myriad sensations that are hard to even find words for. This is especially true for experiences in altered states - for instance, from meditation, bodily movements, or consuming substances.
But what if we can enlist the help of machines to map out and navigate how we feel? Can we index an emotional experience and construct a sequence of steps to recreate or revisit this moment?
This sounds incredibly challenging. And by this point we are venturing quite far into the future. It would be fun, though, to entertain a few possibilities.
It helps to recognize that we already have reliable heuristics, in various domains of art, that roughly predict what our experiences will be like. Eating ice cream on a hot summer day is satisfying. Stomping backbeats make music easy to dance to. Symmetry is generally pleasing to the eye. Exercising makes you tired in the moment, and you feel good at night.
These broad-stroke heuristics are distilled from many instances of trial and error and observation. What happens if we gather data at scale and analyze it more systematically?
We could in principle achieve fine-grained prediction and control of human experience: precise in timing, targeted at the exact sensation desired, and personalized to the individual.
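As a toy sketch of what that analysis could look like, suppose we logged stimulus features alongside self-reported mood and fit a simple model. Everything here, data and features alike, is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Invented observations: each row is one logged moment, with features
# (music tempo in BPM, ambient temperature in C, minutes exercised today).
X = np.array([
    [120, 30, 0],
    [60, 18, 45],
    [100, 25, 20],
    [140, 32, 10],
])
y = np.array([7.5, 8.2, 7.9, 6.8])  # self-reported mood on a 0-10 scale

model = Ridge(alpha=1.0).fit(X, y)

# Predict the mood of a new moment: slow music, a mild day, a long run.
print(model.predict([[70, 20, 60]]))
```

A real system would need far richer features and far more data, but the shape of the problem is the same.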
In this setup, human beings would act as sensors and interpreters, while machines maintain the collective data and find patterns in it.
Perhaps soon enough, we will be able to guess the inner state of any person, given known information about the sequence of inputs and stimuli in their current environment. The machine could construct a model of the person's mind as a time-series of perceptual frames, each frame being a digital representation (say, an embedding) of the sensation.
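Here is a minimal sketch of that last idea, assuming we already had an encoder that turns sensations into vectors. Random vectors stand in for real embeddings, and "revisiting a moment" reduces to nearest-neighbor search over stored frames:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a learned encoder: in reality each frame would embed
# physiological and contextual signals, not random noise.
stored_frames = rng.normal(size=(1000, 64))          # 1000 remembered moments
frame_labels = [f"moment_{i}" for i in range(1000)]  # what each frame refers to

def nearest_moment(query: np.ndarray) -> str:
    """Return the stored perceptual frame most similar to the query (cosine)."""
    norms = np.linalg.norm(stored_frames, axis=1) * np.linalg.norm(query)
    similarity = stored_frames @ query / norms
    return frame_labels[int(np.argmax(similarity))]

# "What remembered moment does this sensation most resemble?"
print(nearest_moment(rng.normal(size=64)))
```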
This is a good place to wrap. I will elaborate and flesh out some of the themes above in future posts.
What an exciting world we live in!
Hope you enjoyed this post. Let's stay in touch.