
With OpenClaw generating hype worldwide, I've found myself wondering once again: is the current agent frenzy really a sign that AGI has arrived? Naturally, I don't think so.

1. Language generation models are just language generation models; they have no inherent logic

The term “intelligence” is actually a somewhat misleading piece of marketing. What large language models do is essentially probabilistic completion—predicting the next most likely token based on statistical patterns in the training data. They don’t know why 1+1 equals 2, nor do they understand why code should be written a certain way, but they can “imitate” textual content they’ve seen.
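To make "probabilistic completion" concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from frequency counts in a toy corpus. It is nothing like a real transformer, but it illustrates the same principle — the most statistically likely continuation, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy "training data": the only knowledge the model will ever have.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each token, how often each next token follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(token):
    """Greedy decoding: return the statistically most frequent follower."""
    return followers[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

The model "knows" that "cat" tends to follow "the" in its data, but it has no concept of what a cat is. Scaling the statistics up by many orders of magnitude changes the fluency, not the nature of the operation.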

A model can produce beautiful Python code, but it doesn’t “understand” what a loop is or what a function is. It has simply seen enough code samples to know which token sequences are most probable in certain contexts. If I specify a language with a narrow application scope and relatively little public information, the probability of writing incorrect code remains high (I’m looking at you, R…).

However, because programming languages share underlying logical similarities, their surface forms inevitably have much in common. A well-designed agent, fed well-written documentation, can still guide the model to produce code of higher quality than hand-written code. But even this process involves no genuine reasoning.

2. The so‑called capabilities come from the source code behind existing formats and tools

Think carefully: what are the “miraculous” abilities that current agents display?

  • Generate an icon or flowchart? — Behind it are SVG, XML schemas, and rendering engines.
  • Write a webpage? — Behind it are HTML/CSS/JavaScript syntax rules and browser APIs.
  • Execute shell commands? — Behind it are decades of well‑documented CLI toolchains.

These capabilities already existed as programs in one form or another; the language model merely connects natural language to existing code. In other words, today's agents/language models act as extremely efficient "interfaces", not as genuine "thinking entities".
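The "interface, not thinker" point can be sketched in a few lines. The function and intent table below are hypothetical stand-ins: the actual capability (drawing a circle) lives in ordinary SVG-emitting code, and the "agent" layer does nothing but route a natural-language request to it.

```python
def make_svg_circle(r):
    # The real work is done by the SVG spec and the renderer, not the model.
    return f'<svg><circle r="{r}" fill="red"/></svg>'

# A trivial stand-in for the language layer: phrases mapped to existing code.
intents = {"draw a circle": lambda: make_svg_circle(40)}

def agent(request):
    """Route a request to a pre-existing capability, or admit there is none."""
    for phrase, action in intents.items():
        if phrase in request.lower():
            return action()
    return "no matching capability"

print(agent("Please draw a circle for me"))
```

A real LLM replaces the crude substring match with a vastly better mapping from phrasing to capability, but the division of labor is the same: the capability is code that already exists; the model is the dispatcher.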

3. Technology has indeed advanced, but perhaps not as rapidly as it seems

We have to admit that technologies like the Transformer architecture, RLHF, and MoE have brought about a qualitative leap. We can converse with machines more naturally, and the quality of machine‑generated text has indeed improved.

But this is still a different matter from “AGI”. We have indeed evolved from “manual transmission” to “automatic transmission”, but that doesn’t mean the car has become the “driver”. We do have more convenient and faster ways to realize creativity—this progress is significant, yet it has boundaries.

4. Global turbulence has amplified everyone’s restlessness

The economic scars of the pandemic, geopolitical conflicts, and the ensuing series of economic anxieties… All these factors combined make people desperately need a "hope" they can grasp. And AI happens to have become that "lifeline":

  • Capital needs a story.
  • Media needs a hot topic.
  • Practitioners need a sense of purpose.
  • The public needs a placebo.

Unfortunately, everything intertwines and ultimately becomes a greater source of anxiety: “AI and robots will replace humans within N months…”

Afterword

Writing this much isn’t to deny technological progress—after all, I use these tools every day.

However, no matter how powerful a tool is, its original design intent is to extend human capabilities, not to remove humans from the system.

What I want to convey is that people should not surrender their agency. The subject of all human activity is, and always will be, human.
