We Delivered a Lecture on the Theme "How Do We Reproduce Human Thought and Sensibility with AI?"

On February 12, 2026, at a Technical Course hosted by the Consortium for Applied Neuroscience (CAN), our Head of Technology Junki Komura and Research and Development Lead Nguyen Toan Duc took the stage.
The theme was "How Do We Reproduce Human Thought and Sensibility with AI? — A New Picture of Intelligence Built on Personalized AI, Multimodal AI, and Physical AI." With about 100 participants from industry, government, and academia, we delivered talks and live demonstrations. This article shares a digest of the day's content.

The "Human Brain" That We Aim to Reproduce
The opening of the lecture revisited the capabilities of the human brain as the subject we aim to reproduce with AI: roughly 10 million km of neural wiring, some 300 TB of memory capacity, and about 200 quintillion operations per second, all delivered on as little as 20 watts of power. While noting the brain's astonishing specifications, we focused in particular on the structure of its information processing: "5% conscious, 95% unconscious."
The conscious mind can process just 40 bits of information per second. Much as one cannot add and multiply at the same time, processing falls behind unless attention is concentrated on a single subject. The subconscious, by contrast, processes an enormous 11 million bits per second (a gap of roughly 275,000-fold) without our awareness. Much of the input from senses such as sight and hearing acts on emotion and judgment without us ever becoming conscious of it.
In other words, much of our behavior and decision-making is determined in unconscious territory we do not notice. How do we reproduce that structure with AI? That was the through-line of the entire lecture.
We also mapped functions by brain region, distinguishing areas where current AI technology achieves high reproducibility (the prefrontal cortex's thinking and judgment, the hippocampus's memory, the occipital lobe's visual recognition) from areas still developing, such as the amygdala's emotion recognition and vicarious experience via mirror neurons. By laying out the full picture, we conveyed where our technology sits and what it is aiming for.
Part 1: Reproducing "Human Sensibility" — Multimodal AI × Physical AI
This part presented the technologies and applied examples for reproducing sensibility.
Multimodal AI — Integrating Sight, Sound, and Language
There are three primary inputs for reproducing human sensibility: vision (facial expressions), hearing (voice), and language (text). We introduced multimodal AI technology that processes these in an integrated way, along with several applied examples.
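Before the examples, here is a concrete, if simplified, illustration of what "integrated processing" can mean in practice: late fusion, where each modality's model emits an emotion-probability vector and the vectors are combined by a weighted average. This is a generic textbook pattern, not our production pipeline; the label set, weights, and scores below are hypothetical.

```python
import numpy as np

# Hypothetical per-modality emotion probabilities over a shared label set,
# e.g. produced by separate facial-expression, voice, and text models.
LABELS = ["joy", "anger", "sadness", "neutral"]

def late_fusion(p_face, p_voice, p_text, weights=(0.4, 0.3, 0.3)):
    """Weighted late fusion: a convex combination of per-modality scores."""
    stacked = np.stack([p_face, p_voice, p_text])          # shape (3, n_labels)
    fused = np.average(stacked, axis=0, weights=weights)
    return fused / fused.sum()                             # renormalize

p_face  = np.array([0.70, 0.05, 0.05, 0.20])  # vision: facial expression
p_voice = np.array([0.50, 0.10, 0.10, 0.30])  # hearing: vocal prosody
p_text  = np.array([0.40, 0.05, 0.15, 0.40])  # language: text sentiment

fused = late_fusion(p_face, p_voice, p_text)
print(LABELS[int(np.argmax(fused))], fused.round(3))
```

Late fusion keeps each modality's model independent; systems can also fuse earlier, at the feature level, but the requirement is the same either way: the modalities must agree on one output space.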
In general affairs, we introduced a patented multimodal authentication system that combines facial recognition with voiceprint recognition. Face registration is completed simply by speaking to the system, and the AI then handles office entry authentication and customer and mail reception with far less staff involvement. It is in live operation as a virtual receptionist.
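To make the two-factor idea concrete, here is a minimal sketch assuming each modality yields an embedding and enrollment stores one template per modality. The thresholds, embedding dimensions, and AND-rule are illustrative assumptions, not the patented design.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def verify(face_emb, voice_emb, enrolled, t_face=0.8, t_voice=0.8):
    """Accept only if BOTH the face and the voiceprint match enrollment."""
    return (cosine(face_emb, enrolled["face"]) >= t_face
            and cosine(voice_emb, enrolled["voice"]) >= t_voice)

# Enrollment: one spoken interaction can yield both templates at once.
rng = np.random.default_rng(0)
enrolled = {"face": rng.standard_normal(128), "voice": rng.standard_normal(64)}

print(verify(enrolled["face"], enrolled["voice"], enrolled))          # True
print(verify(rng.standard_normal(128), enrolled["voice"], enrolled))  # False: face mismatch
```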
In healthcare, we shared an AI model that predicts at-risk areas in real time from lung CT and gastric endoscopy imagery. By detecting subtle changes in video that are difficult even for experienced doctors to discern, the technology shows strong potential for supporting medical diagnoses that otherwise depend on an individual physician's experience.
We also presented a roadmap that extends current multimodal emotion recognition across vision, hearing, and language to integrate neural information such as EEG data. By bringing in brain signals, both non-invasive and invasive, in a multimodal way, we aim for a future in which intuition-based human sensibility can be reproduced with greater fidelity.
Physical AI — From the Digital to the Physical
Next, we introduced cases in which AI plays an active role in the physical world.
Fully automated docking of boarding bridges at airports is an advanced Physical AI application in which video sensing captures the surrounding environment, assesses risks to people nearby, and precisely adjusts the docking unit to connect to the aircraft. We presented it as an example of an operation once dependent on operators' craft skill that is now fully automated without compromising safety.
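The deployed controller was not described in detail, but the sense-assess-act loop it implies can be sketched as follows. The `Perception` fields, proportional gain, and tolerance are hypothetical, chosen only to show the safety-gated control pattern.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    door_offset_m: float  # lateral offset between docking unit and aircraft door
    person_in_zone: bool  # video sensing flagged a person in the danger zone

def control_step(p: Perception, gain: float = 0.5, tol: float = 0.02) -> float:
    """One loop iteration: return a lateral velocity command in m/s."""
    if p.person_in_zone:
        return 0.0                    # human risk detected: halt immediately
    if abs(p.door_offset_m) < tol:
        return 0.0                    # within tolerance: docking complete
    return -gain * p.door_offset_m    # proportional correction toward the door

print(control_step(Perception(0.30, person_in_zone=False)))  # -0.15: steer toward door
print(control_step(Perception(0.30, person_in_zone=True)))   # 0.0: safety stop
```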
We also introduced our joint research with Vietnam National University, Hanoi, on world models. Unlike text-based LLMs, this approach learns human "intent" and "action" in the three-dimensional real world so that robots can reproduce them in a physical environment. We walked through two approaches, reinforcement learning and inverse reinforcement learning (IRL), and emphasized that IRL in particular can recover from expert behavioral data the underlying logic of what experts prioritize when they act. This part drew many questions from the audience, a clear sign of strong interest.
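For readers curious how IRL can "recover what experts prioritize," here is a self-contained toy in the max-entropy IRL style (Ziebart et al.), not the method of the joint research itself: in a 5-state chain whose expert demonstrations walk right and then dwell at the last state, gradient ascent on linear reward weights pushes the learner's expected state-visit counts to match the expert's, and the recovered reward concentrates on the state the expert prefers.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..4, actions 0 = left, 1 = right.
N_S, N_A = 5, 2
def step(s, a):
    return max(s - 1, 0) if a == 0 else min(s + 1, N_S - 1)

# Expert demonstration: walk right, then stay at state 4 (8 steps total).
demo = [(s, 1) for s in (0, 1, 2, 3)] + [(4, 1)] * 4
HORIZON = len(demo)

def empirical_counts(traj):
    """One-hot state-visit counts: the expert's feature expectations."""
    f = np.zeros(N_S)
    for s, _ in traj:
        f[s] += 1
    return f

def soft_policy(theta):
    """Finite-horizon soft (max-ent) value iteration under reward theta."""
    V = np.zeros(N_S)
    for _ in range(HORIZON):
        Q = np.array([[theta[step(s, a)] + V[step(s, a)]
                       for a in range(N_A)] for s in range(N_S)])
        V = np.log(np.exp(Q).sum(axis=1))   # soft maximum over actions
    return np.exp(Q - V[:, None])           # stochastic policy pi(a|s)

def expected_counts(pi, start=0):
    """Expected state-visit counts when rolling pi forward from `start`."""
    d = np.zeros(N_S); d[start] = 1.0
    f = np.zeros(N_S)
    for _ in range(HORIZON):
        f += d
        d_next = np.zeros(N_S)
        for s in range(N_S):
            for a in range(N_A):
                d_next[step(s, a)] += d[s] * pi[s, a]
        d = d_next
    return f

theta = np.zeros(N_S)                  # linear reward: one weight per state
f_expert = empirical_counts(demo)
for _ in range(300):                   # gradient of the max-ent log-likelihood
    theta += 0.05 * (f_expert - expected_counts(soft_policy(theta)))

print(theta.round(2))  # the largest weight lands on state 4, the expert's goal
```

The gradient is simply "expert visit counts minus learner visit counts," which is why IRL can be read as extracting what the expert's behavior implicitly rewards.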
Part 2: Reproducing "Human Thought" — Personalized AI Service "N1 Agent"
This part introduced our efforts to reproduce thought through personalized AI, centered on the "N1 Agent" service we develop and operate in-house.
enableX's core technology mission is to "practice freedom from physical constraints and the expansion of human capability through the use of personalized AI." We advance the social implementation of personalized AI along two complementary tracks: a "substitution" track that frees humans from physical constraints (time, fatigue, location, and headcount) and a "collaboration" track that uses AI to raise human judgment and creativity.
N1 Agent is a service that brings an individual's decision-making tendencies, expertise, and values into an LLM, building a "digital twin of knowledge" in digital space. Technically, it is designed in three layers, and it is the combination of these layers that gives rise to an AI carrying "the essence of that individual," clearly distinct from a mere chatbot.
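The article does not enumerate the three layers, so the following is purely one plausible reading of such a layered persona design: a persona layer (tone and values), a knowledge layer (retrieval over the person's materials), and a base LLM underneath. Every name and string here is hypothetical, not the documented N1 Agent internals.

```python
# Hypothetical three-layer composition; NOT the actual N1 Agent design.
PERSONA = ("You answer as executive X: decisive, risk-aware, "
           "and focused on long-term trust.")                    # layer 1: persona

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Layer 2 (toy): rank the person's notes by naive keyword overlap."""
    words = set(query.lower().split())
    return sorted(notes, key=lambda n: -len(words & set(n.lower().split())))[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    """Compose persona + retrieved knowledge into a prompt for a base LLM (layer 3)."""
    context = "\n".join(retrieve(query, notes))
    return f"{PERSONA}\n\nRelevant notes:\n{context}\n\nQuestion: {query}"

notes = ["Prefers phased rollouts over big-bang launches.",
         "Weekly update: prioritize the healthcare imaging pipeline."]
print(build_prompt("Should we launch the new feature to all users at once?", notes))
```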
We have, in fact, mass-produced and now operate personalized AIs of our CEO, business heads, and function leads. For the person-specific need of "I want to ask Mr./Ms. XX about this," the individual's personalized AI responds on their behalf, covering a wide range of tasks: consultation on specialized work, drafting reports, and hands-on support for decision-making.
An important point is an operating design that does not "build and forget." We have established a "knowledge-circulation mechanism" that automatically ingests sources such as meeting minutes, deliverables, input materials, web information, and weekly updates, and continuously runs differential checks and additional training. Through this, each personalized AI stays current, and stakeholders across the organization can work with an AI equipped with the latest knowledge.
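A differential check of this kind is easy to picture: fingerprint each source document and re-queue only what is new or has changed. Below is a minimal sketch with hypothetical document IDs; the actual mechanism's sources and training steps are as listed above.

```python
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def differential_ingest(sources: dict[str, str], seen: dict[str, str]) -> list[str]:
    """Return IDs of new or changed documents to queue for additional training."""
    changed = []
    for doc_id, text in sources.items():
        h = fingerprint(text)
        if seen.get(doc_id) != h:     # unseen document, or its content drifted
            changed.append(doc_id)
            seen[doc_id] = h          # remember the latest version
    return changed

seen: dict[str, str] = {}
print(differential_ingest({"minutes/02-10": "decided X", "weekly/07": "status Y"}, seen))
# -> both IDs: everything is new on the first pass
print(differential_ingest({"minutes/02-10": "decided X (rev.)", "weekly/07": "status Y"}, seen))
# -> only the revised minutes are re-queued
```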
Komura conveyed the message: "AI is not a lever for quick wins; it is something to grow familiar with. What matters is a design in which humans engage with AI, come to understand each other, and raise value through practice."
Part 3: Neuro × AI — Optimizing Advertising with Intuition Metrics
In the final part, we introduced the neuromarketing services of our "Neuron∞AI Lab."
Most human purchase decisions are made instantaneously and unconsciously, in what we call the "Pre-decision" stage. An estimated 50% to 95% of the brain's processing is devoted to sense- and sensibility-based processing tied to the five senses, and only when a decision is especially important do we attach a rational explanation after the fact. What conventional surveys and interviews capture is only this "Post-decision" layer; they do not reach the intuition that actually drives purchase behavior. This has been a long-standing structural challenge in marketing.
To address this, we have built a measurement environment combining EEG and eye-tracking, capturing six intuition metrics in real time: attention, emotion, memory, novelty, cognitive understanding, and intent to act. By integrating these metrics into a Neuro AI model, we have put into practice peak-end analysis of advertising videos (identifying the moments at which emotion or attention peaks), and an evaluation and generation model for advertising images (suggesting improvements based on the intuition metrics).
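Peak-end analysis itself follows Kahneman's peak-end rule: the remembered impression of an experience is approximated by the mean of its most intense moment and its final moment. Below is a minimal sketch on a mock per-frame emotion signal; the signal shape and frame rate are illustrative, not measured data.

```python
import numpy as np

def peak_end_score(signal: np.ndarray, fps: float = 25.0):
    """Peak-end summary of a per-frame intuition metric (e.g. emotion).

    Returns (score, peak_time_s): the remembered impression is approximated
    by the mean of the peak value and the final value of the signal.
    """
    peak_idx = int(np.argmax(signal))
    score = (signal[peak_idx] + signal[-1]) / 2.0
    return score, peak_idx / fps

# Mock emotion intensity for a 6-second ad sampled at 25 fps.
t = np.linspace(0.0, 6.0, 150)
emotion = np.exp(-(t - 2.0) ** 2) + 0.4 * t / 6.0   # peak near 2 s, ending on a rise
score, peak_at = peak_end_score(emotion)
print(f"peak at {peak_at:.1f}s, peak-end score {score:.2f}")
```

The peak timestamp is exactly the "moment at which emotion or attention peaks" that the video analysis identifies; editing decisions can then target that scene and the ending.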
On the day, we also ran live demonstrations with sample ad videos and images. Attendees saw EEG data visualized on a dashboard in real time, and the response was strong.
Komura closed the lecture with the vision: "Use Neuro AI to draw out the human sense of pleasure and prompt behavioral change. Convert that into business value — branding, revenue uplift, cost reduction, and quality improvement — and accelerate social implementation."
Closing
In this lecture, we conveyed how our technology domains — personalized AI, multimodal AI, Physical AI, and neuroscience — are connected under the single, consistent theme of "reproducing human thought and sensibility."
On February 4, 2026, we also released the "AI Social Implementation White Paper, Vol. 1." It compiles our findings centered on personalization and multimodal AI — please refer to it together with this article.
enableX will continue to take on the challenge of advancing research and social implementation through the collaboration of applied neuroscience, information science, and engineering.