Yann LeCun is a visionary researcher who pioneered many of the innovations in neural network research that gave rise to the modern era of artificial intelligence.
Yann LeCoin is a meme token on the Solana blockchain inspired by the early days of Yann LeCun's work, when all the ingredients necessary for artificial intelligence were missing from the world and he was inventing them from scratch with new math, custom programming languages, and lo-fi hardware.
Yann LeCoin is an homage both to Yann's contributions and to the era of retro computing in which he started his career.
Error back-propagation for training neural networks was probably discovered simultaneously by several different researchers around 1985/1986. At the very least we know of a few versions: Rumelhart, Hinton, and Williams wrote a tech report called Learning Internal Representations by Error Propagation; David Parker wrote a tech report called Learning-Logic: Casting the Cortex of the Human Brain in Silicon; and Yann wrote an obscure paper called A learning scheme for asymmetric threshold networks (Cognitiva, 1985). Like much of scientific and mathematical progress, all of these simultaneous formulations drew (whether consciously or not) on a common set of ambient influences, including work by Paul Werbos. However, these three papers were the first you can look at and unambiguously say: "yep, that's a neural network in the modern sense, and that's more or less how you would train it".
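For readers who want to see what "a neural network in the modern sense" boils down to, here is a minimal sketch of that training procedure in present-day Python/NumPy. It illustrates the general technique only, not the algorithm as written in any of these three papers; the architecture, the XOR task, the learning rate, and the iteration count are arbitrary choices for the demo.

```python
# Minimal backpropagation sketch: one hidden layer of sigmoid units trained
# by gradient descent on XOR. Illustrative only; all hyperparameters are
# arbitrary choices, not taken from the 1985/1986 papers.
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output

    # Backward pass: propagate the error gradient through each layer.
    err = out - y                       # gradient of squared error w.r.t. out
    d_out = err * out * (1 - out)       # chain through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)  # chain through the hidden sigmoid

    # Gradient-descent updates: w <- w - lr * dE/dw.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should be close to [[0], [1], [1], [0]]
```

Everything the paragraph above calls "a neural network in the modern sense" is already present in this loop: a differentiable stack of layers, an error signal at the output, and the chain rule pushing gradients backwards to update every weight.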
Unfortunately, Yann's first paper happens to be in French, which has limited its mind-share in the English-dominated world of neural network research.
To make this early anchor in the history of neural networks more accessible, we have translated LeCun 1985 into English and produced a bilingual edition with modernized mathematical notation. Read it yourself and see both how primitive this moment was in the trajectory towards modern neural networks and how Yann could already see the contours of the future: a gradient-based learning rule, scaled-up network sizes, and numerically intensive training on parallel hardware.