Flying AI
It has already crossed the pond
Yann LeCun’s tweet comparing the incremental development of aviation to the evolution of AI is misleading. AI has not just begun; it has been developing incrementally for more than 80 years, and he knows that. OpenAI’s release of ChatGPT in 2022 inarguably landed AI in the public discourse. Just as important, the concerns around current AI are not entirely new: there is existing concern over digital rights, privacy, security, and societal harm. So not only has AI ‘crossed the Atlantic non-stop’, it has landed in territory already rife with concern. Now is decidedly the time to debate and craft the controls computational intelligence will operate under in society. Perhaps, to borrow from his analogy and a previous post, it is time for FAiR – a Federal Ai Regulator – rather than the FAIR he leaves behind at Meta.
Surrendering responsibility for the ethical development of technologies to the corporations that develop them puts society at grave risk. Entrusting it to companies with the concentration of power and influence, the resources, the motives, and the histories of Apple, Meta, Google, Microsoft, OpenAI, X, Anthropic, and others is categorically unwise. The time is now to discuss and craft the controls that will regulate systems which accumulate, in an inscrutable way, more knowledge than any single person could ever hope to comprehend.
One need only look at current employment trends, capital markets, and the pathologies of social media to see there /are/ effects to these technologies. Their development has already demonstrated systematic disregard for the law, morality, and ethics of individuals, populations, societies, and institutions. I would suggest, as would many others, that we are well past the point of needing the broad understanding and ethical discourse necessary to shape the development of reinforcement-based LLMs. We should also consider whether those closest to the power of these systems should be the ones to arbitrate the environment and power they operate within.
Dr. LeCun cannot possibly be unbiased in his suggestions about what the rest of society /should/ do regarding AI, especially as he strikes out to create a new start-up. A clear conflict of interest exists. Notice, with his already significant wealth, that he chooses not to take his work back into academia, where ethical scrutiny, bureaucracy, and transparency would certainly constrain it.
Society is already behind the curve, having failed at the emergence of ChatGPT to grasp how impactful and prolific its adoption would be. As the EU demonstrated with GDPR, perfect or not, there is a need for regulation of the systematic accumulation of personal and copyrighted data. There are risks to privacy, and to emotional and psychological well-being, from these technologies’ impacts on our society. His is but one voice, albeit a very influential and intelligent one in the field, but that’s the thing about society: the rest of us matter, and dammit, so does our future.