In 2008, IBM's Roadrunner became the world's first petaflop supercomputer. It cost $100 million, and was housed in a 560 square meter facility (6,000 sq ft; larger than a basketball court).
On Jan. 6, 2025, Nvidia announced Project Digits at the Consumer Electronics Show (CES) in Las Vegas. It's a desktop-sized AI supercomputer, delivers 1 petaflop, and costs $3,000.
While Project Digits' 1 petaflop is measured at 4-bit floating point precision (FP4), compared to Roadrunner's 64-bit, it's still a significant milestone. During my research career, constantly writing proposals for supercomputer time was a fact of life. Computing was a precious resource. Now, it's really neat to imagine having a little supercomputer sitting on your table, even if it's for specialized AI workloads.
So, what's next as our collective AI journey continues? Here are four trends to watch.
1 - How we learn will change (again!), and it's going to be very Socratic
When was the last time you went to a library to seek out a certain piece of knowledge? It sounds like a foreign concept, reserved only for the most obscure topics, but that used to be the standard method of research.
Of course, the digitization of information, coupled with internet search engines, changed this model of knowledge acquisition. With Google, "research" is less about digging through books on shelves, and more about typing your queries into the search bar. Instantly, the knowledge you seek is available to you without getting out of your chair.
But knowledge acquisition is not the full learning journey. We also need to digest the information and put it into practice.

With AI, this second, "practice" phase of learning has also become instantaneous.
Foundation models not only come with digitized knowledge built in; the chatbot experience also allows us to ask follow-up questions, play with "what if" counterfactuals, and flexibly explore a topic. It's a "Socratic" learning experience with immediate feedback loops to constantly test our growing understanding with a knowledgeable (and patient!) mentor.
So exactly how will AI shape our learning journey? Some possibilities:
- Interactive AI-based learning will become more common, replacing purely passive reading.
- Instead of forbidding AI tools in education, curriculums and learning platforms will integrate AI to help students learn. This has already started and will continue to evolve.
- When acquisition of knowledge becomes easier with AI, boundaries between specialized roles will blur.
2 - Computing is about to get a lot more "Biological"
A simple example of a computer is a calculator. Given user inputs, a calculator returns a correct result. If it returns anything other than that one correct result, it's a bug.
This behavior is a core tenet of computing. It's deterministic. It's reproducible. The same thing goes in, and the same thing comes out.
In software development, this concept is baked in deeply. In fact, the gold standard is to implement many tests to ensure the expected behavior is always returned. This is the basis of many testing frameworks, from modular unit testing to end-to-end testing that covers the entire application.
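As a tiny illustration, a conventional unit test (pytest-style; the function here is a toy example of my own, not from any particular codebase) pins down the single expected result:

```python
def add(a: float, b: float) -> float:
    """A calculator-style function: same inputs, same output, every time."""
    return a + b

def test_add():
    # Deterministic computing in action: the test asserts the one
    # correct result. Anything else is, by definition, a bug.
    assert add(2, 3) == 5
    assert add(-1.5, 1.5) == 0
```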
But the behavior of LLMs (and generative AI in general) is challenging this notion. LLMs are stochastic. In fact, you will often see a "temperature" parameter at inference time that controls the degree of randomness. You can enforce deterministic behavior by caching responses, but that still doesn't turn your LLM into a calculator.
That's because LLMs, like biological systems, work with patterns and probabilities rather than exact calculations. There is not one right answer, because there are many ways to summarize, to translate, and to answer a question.
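To make the "temperature" knob concrete, here is a minimal sketch in plain Python (no ML libraries; the token names and scores are invented for illustration) of how temperature rescales a model's next-token probabilities before sampling:

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one token after rescaling logits by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax: convert scaled logits into probabilities.
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exp = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    # Draw one token according to the probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical next-token scores for the prompt "The sky is"
logits = {"blue": 2.0, "clear": 1.0, "falling": 0.1}
print([sample_with_temperature(logits, 0.2) for _ in range(5)])  # mostly "blue"
print([sample_with_temperature(logits, 1.5) for _ in range(5)])  # more variety
```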
This shift from deterministic to probabilistic computing is fundamentally changing how we build and interact with technology. What does that mean? Some possibilities:
- While our existing testing frameworks will not go away, we will also need new "testing" paradigms to get a handle on how our AI-based programs behave. The same code no longer means the same behavior; instead, we need to monitor probabilistic performance.
- And it's not just about keeping track of how the system behaves. User feedback becomes a core method for validating application performance. When there are many ways to be correct, the most direct approach is to ask the user.
- Quality metrics will shift from "correctness" to "consistency" and "reasonableness": measures that can't be defined against a single ground truth. This will lead us to use AI to evaluate AI as part of quality assurance (a minimal sketch of this kind of monitoring follows this list).
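For instance, one simple form of probabilistic monitoring is to run the same prompt repeatedly and measure how often the answers agree. A minimal sketch, where `call_model` is a hypothetical stand-in for whatever inference client you actually use:

```python
from collections import Counter

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in your actual LLM client call."""
    raise NotImplementedError

def consistency_score(prompt: str, n_runs: int = 10) -> float:
    """Fraction of runs that agree with the most common answer.

    1.0 means the model answered identically every time; lower values
    quantify the spread you would never see from a calculator.
    """
    answers = [call_model(prompt).strip().lower() for _ in range(n_runs)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n_runs

# Track this metric over time instead of asserting one exact output:
# score = consistency_score("What is the capital of France?")
```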
3 - The return of the "Prompt-fu"
In 2023, "prompt engineering" became quite a hot topic. There were high-paying Prompt Engineer jobs, and many learning platforms created courses on prompt engineering.
Thankfully, this fad has subsided. Nowadays, one wouldn't build a career purely on prompt engineering. However, the skill to effectively utilize AI tools is not going away. In fact, it's taking on new forms:
- Model-specific prompt "dialects": Different models were trained differently, and they can respond particularly well if you speak the way they were "brought up". For example, Anthropic models understand XML tags, and Llama models use special tokens. Optimizing your prompts for a specific model can provide a meaningful performance boost (see the first sketch after this list).
- Breaking down tasks and designing clear instructions: AI models are powerful, but they are not magic. They need clear instructions to perform well. So to get good results, users need to scope tasks into individual pieces that AI can handle. For a large task (that can be completed in many ways), it's important to provide sufficient instructions to guide the model.
- Extracting structured output reliably: if you build systems that interact with AI, the AI's responses often need to feed downstream processes that expect precisely formatted data. But as discussed above, LLMs are probabilistic, with variable outputs. The ability to leverage tools like function calling to extract structured output will be the difference between a buggy system and a stable one (see the second sketch after this list).
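To make the "dialect" point concrete, here's a minimal sketch of the XML-tag convention Anthropic recommends for Claude, using the official Python SDK (the tag names and model string are illustrative; check the current docs):

```python
import anthropic  # assumes the official Anthropic Python SDK is installed

document = "...your source text here..."

# XML tags delimit the input cleanly, a convention Claude models follow well.
prompt = f"""Summarize the document below in three bullet points.

<document>
{document}
</document>

Put your summary inside <summary> tags."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name; verify against docs
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```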
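And for structured output, one vendor-neutral pattern is to request JSON, validate it, and retry when the model drifts. Here `call_model` is again a hypothetical stand-in for your inference call; tool-use / function-calling APIs achieve the same goal more robustly by constraining generation itself:

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical placeholder for your actual LLM client."""
    raise NotImplementedError

def extract_json(prompt: str, required_keys: set[str], max_retries: int = 3) -> dict:
    """Ask for JSON, validate the shape, and retry when the model drifts."""
    instruction = prompt + "\n\nRespond with a single JSON object only."
    for _ in range(max_retries):
        raw = call_model(instruction)
        try:
            data = json.loads(raw)
            if required_keys <= data.keys():
                return data  # well-formed: safe to hand to downstream code
        except json.JSONDecodeError:
            pass  # probabilistic output: malformed JSON is expected sometimes
    raise ValueError("Model never produced valid JSON with the required keys")
```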
4 - Talking to robots will make us better at talking to humans
Remember Microsoft's Tay? In 2016, Microsoft released a chatbot that learned by mimicking Twitter interactions. This backfired when Tay absorbed all the bad behaviors on the internet like a sponge, and began to similarly misbehave. "We are what we eat" is also true for AI models; how they are trained will determine their actions and responses.
This was an important lesson in AI research. Nowadays, foundation models are trained on better-curated data, with carefully designed guardrails to prevent harmful behavior (alignment). For example, Anthropic's Claude was designed with ideal characteristics of human communication in mind: it was built to express curiosity and open-mindedness, and to respond to queries with thoughtfulness and humility. These traits reflect peak human communication: make the speaker feel heard, provide helpful feedback, and be supportive.
As AI becomes a companion in our day-to-day work and life, our communication patterns will be influenced by AI systems, whether we realize it or not. I can imagine several scenarios:
- Through regular interaction with AI models trained for optimal communication, we'll unconsciously absorb patterns of active listening, structured responses, and thoughtful acknowledgment.
- Our expectations for human interactions will be reshaped. After experiencing AI's consistent thoughtfulness and support, we'll become less tolerant of poor communication in real life. This "raised bar" will make strong communication skills not just nice-to-have, but essential.
- The "communication elevation" will be particularly visible in professional settings, and workplace will adapt to new, higher expectations. Companies will invest more in communication training, and hiring practices will place greater emphasis on these skills. What was once considered "excellent" communication may become the new baseline.
How are AI tools changing your daily interactions? What do you think will be the next big thing in 2025? I'd love to hear your thoughts!