Over the past few weeks there have been articles about regulating AI (well, Large Language Models, but honestly only the tech people care about the distinction). There have also been articles about people asking to have their information removed, and lawsuits from authors claiming their work was used in training and asking for their data to be removed. I am not going to get into what counts as permitted use, but rather into how AI is human-like in one respect.
Humans have something that causes lots of issues, not just now but, it appears, for the longest time: unlearning is a struggle. When people learn something and that learning is reinforced for years, it gets embedded in their brains. When that knowledge is changed, or even challenged, there is an instinct to push back. The moment someone is told they are wrong, it can feel like being punched.
Now, Large Language Models cannot feel pain, but they do have a related issue. When ingesting information (a web page, a document, etc.) they convert the text into 'tokens' and give those tokens numeric representations. Those numbers get linked together. Reversing that process to find the exact point where some text landed, and removing it, is, well, impossible. So LLMs struggle with the same thing humans do: unlearning.
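To make the tokens point concrete, here is a minimal sketch of turning text into numeric IDs. This is a toy word-level tokenizer I made up for illustration; real LLMs use subword schemes like BPE, and once the IDs feed into training, the text's influence is smeared across billions of weights, which is why surgically removing one document afterward is not practical.

```python
# Toy sketch (NOT a real LLM tokenizer): text becomes numeric token IDs.
# Real models use learned subword vocabularies, not whole words.

def build_vocab(corpus):
    """Assign each unique word a numeric ID, in order of first appearance."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    """Convert text into its numeric token representation."""
    return [vocab[word] for word in text.split()]

corpus = "the model reads the page and the model stores numbers"
vocab = build_vocab(corpus)
print(tokenize("the model stores numbers", vocab))  # → [0, 1, 5, 6]
```

The mapping from text to numbers is easy and reversible at this stage; the hard part comes later, when training blends those numbers into the model's weights and the original text no longer exists anywhere as a single removable piece.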
So, one option when it comes to LLMs is to train them from scratch all over again. This could be costly. For humans there is an art to unlearning, and it is a skill we should work on. Just like learning, it is something we need to practice to get good at. For LLMs, though, unlearning is a bit more difficult. There are a few studies, here, here and here. I am not going to deep dive into them, as I will leave that to the reader.
It really is amazing that LLMs, and the GPTs built on top of them, can produce sentences, paragraphs, and code. What is surprising is that both humans and LLMs struggle to unlearn. And though there are now studies and techniques to help humans unlearn, the only apparent way for an LLM to unlearn is to start from nothing. To reach AGI, do machines need to be able to unlearn? Have we stumbled on an update to the Turing test? Or is it possible that our brains are not powerful enough to understand our own brains?
This opinion is mine, and mine only; my current or former employers have nothing to do with it. I do not write for any financial gain, I do not take advertising, and no product or company listed was included for payment. But if you do like what I write, you can donate to the charity I support (with my wife, who passed away in 2017), Morgan Stanley's Children's Hospital, or donate to your favorite charity. I pay to host my site out of my own pocket, and my intention is to keep it free. I do read all feedback, though I mostly won't post it.
This blog is a labor of love and was originally going to be a book. With the advent of being able to publish yourself on the web, I chose this path. I will write many of these and not worry too much about grammar or spelling (I will try to come back later and fix it) but focus on content. I apologize in advance for my ADD, as topics may often flip. I hope one day to turn this into a book and/or a podcast, but for now it will remain a blog. AI is not used in this writing other than using the web to find information. Images without notes are created using an AI tool that allows me to reuse them.