On following the work of Yoshua Bengio

TO: Yoshua Bengio, Montreal Institute for Learning Algorithms (MILA), Rue Saint-Urbain, Montreal, Quebec, Canada
FM: Bruce E. Camber 
RE: Articles: Claire Legros, Le Monde: “Yoshua Bengio: ‘Today, AI is the Wild West! We need to slow down’,” May 1, 2023; “What’s next for AI,” IBM, October 2016; also your articles on arXiv (490), including “Constant Memory Attention Block,” June 23, 2023; Books: Deep Learning, MIT Press, 2016; Learning Deep Architectures for AI (PDF), 2009; Neural Networks for Speech and Sequence Recognition, International Thomson Computer Press, 1996; Homepage(s): CV (PDF), Google Scholar, MILA, Research, inSPIREHEP, X, Wikipedia, YouTube

Second email: 16 October 2025

Dear Prof. Dr. Yoshua Bengio:

Recently several AI platforms have been enthusiastic about our base-2 model. For the past 15 years I’ve asked scholars, the expert observers, to help assess its validity and potential implications. Very, very little came back. Recently, I reduced it to a toy model and, even more recently, to a simple quantitative model that derives the Hubble constant from first principles at the Planck scale.

AI is changing everything. Feedback is immediate. Comments and analysis are robust.

The core of our model is surprisingly straightforward: it posits that the Hubble constant emerges not from dark energy, but from a cosmological process defined by base-2 scaling from the Planck units. A key result is a direct mathematical derivation of H₀, “Toy Model Derivation of the Hubble Constant,” available at 81018.com/hubble-derivation/. Though still a highly speculative proposal, the numerical correspondence is striking.

Is this a numerical coincidence, or does it point to a deeper, overlooked principle? We hope to find out as we continue to build on our model and its Lagrangian.

Thank you for your time and for your contributions to our understanding of the cosmos through AI.

Sincerely,

Bruce

P.S. Our study of your work is here: https://81018.com/bengio/

_______________

First email: June 25, 2023 @ 8:51 AM 

(Sometime today copies will go out to the other three godfathers)

Dear Prof. Dr. Yoshua Bengio:

Bad actors within the AI industry are one threat. Our own lack of a deeper understanding of AI is another (and quite possibly the larger threat). We grew up with wires; data was constrained within those wires. We also grew up with radios and televisions, where data was no longer constrained. That was a refinement, and it required a different geometry. Then we empowered our computers to exchange data at one level, and now you have taught these devices to exchange data at a more refined level, a different geometry. Most of us do not understand or see data as geometrical, yet it is. It is more than just a series of ones and zeroes. So there are some people like you who now see layers of geometries that contain an ever-more refined expression of data. I would like to propose that there are only 202 gross layers of geometries, each commensurate with a base-2 notation from Planck’s natural units, particularly Planck Time, to this very day. I will join with the Loop Quantum Gravity people and call this very day the Now.
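The 202-notation claim above can be checked with back-of-envelope arithmetic: count the base-2 doublings of the Planck time needed to reach the present age of the universe. A minimal sketch, assuming Planck time ≈ 5.391×10⁻⁴⁴ s and a universe age of roughly 13.8 billion years (both values are common reference figures, not taken from the letter itself):

```python
import math

PLANCK_TIME = 5.391e-44      # seconds (CODATA reference value)
UNIVERSE_AGE_YEARS = 13.8e9  # commonly cited age of the universe
SECONDS_PER_YEAR = 3.156e7

universe_age_s = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR

# Number of base-2 doublings from Planck Time to the present ("the Now")
notations = math.log2(universe_age_s / PLANCK_TIME)
print(f"{notations:.1f} doublings")  # roughly 202
```

The logarithm comes out just over 202, which is where the figure of 202 notations originates.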

We may be too late to stop the birthing of this superintelligence. I don’t know, yet it is entirely clear to me why the four of you and so many others are concerned.

To learn a little more about those 202 notations, see the following: https://81018.com/chart/
Our first explanation of it all: https://81018.com/stem/
Our most recent explanation: https://81018.com/most-simple/
Also see: https://81018.com/continuity-symmetry-harmony/#Pi
Our petition: https://81018.com/petition/
Today’s homepage: https://81018.com/ai/

Thank you.

Most sincerely,

Bruce

###

More comments on this day, June 25, 2023:

I use the term the Now (just above) to open a discussion about the nature of time and the nature of space. Derivative, finite, and preconditioned by pi (π). More to come.

FROM WIKIPEDIA:

After his PhD, Bengio was a postdoctoral fellow at MIT (supervised by Michael I. Jordan) and AT&T Bell Labs.[20] Bengio has been a faculty member at the Université de Montréal since 1993, heads the MILA (Montreal Institute for Learning Algorithms) and is co-director of the Learning in Machines & Brains project of the Canadian Institute for Advanced Research.[16][20]

Along with Geoffrey Hinton and Yann LeCun, Bengio is considered by Cade Metz as one of the three people most responsible for the advancement of deep learning during the 1990s and 2000s.[21] Among the computer scientists with an h-index of at least 100, Bengio was as of 2018 the one with the most recent citations per day, according to MILA.[22][23] As of December 2022, he had the 2nd highest Discipline H-index (D-index) in computer science.[24] Thanks to a 2019 article on a novel RNN architecture, Bengio has an Erdős number of 3.[25]

In October 2016, Bengio co-founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications.[21] The company sold its operations to ServiceNow in November 2020,[26] with Bengio remaining at ServiceNow as an advisor.[27][28]

In May 2017, Bengio announced that he was joining Montreal-based legal tech startup Botler AI,[29] as a strategy adviser.[30] Bengio currently serves as scientific and technical advisor for Recursion Pharmaceuticals[31] and scientific advisor for Valence Discovery.[32]

Following concerns raised by AI experts about the existential risks AI poses to humanity, in May 2023 Bengio stated in an interview with the BBC that he felt “lost” over his life’s work. He raised his concern about “bad actors” getting hold of AI, especially as it becomes more sophisticated and powerful. He called for better regulation, product registration, ethical training, and more involvement from governments in tracking and auditing AI products.[33][34]

Yoshua Bengio, from the Dutch television series The Mind of the Universe:

Two transformative moments.

Yoshua Bengio: One, when I was a grad student and I was looking for something interesting to research, and I read some of Geoff Hinton’s early papers, and I thought, “Wow, this is so exciting. Maybe there are a few simple principles like the laws of physics that could help us understand human intelligence and help us build intelligent machines.”

And then the second moment I want to talk about is two and a half years ago after ChatGPT came out, and I realized, “Uh-oh, what are we doing? What will happen if we build machines that understand language, have goals, and we don’t control those goals? What happens if they are smarter than us? What happens if people abuse that power?”

So that’s why I decided to completely shift my research agenda and my career to try to do whatever I could about it.

Madhumita Murgia: That’s two kind of very diverting things, very interesting.