Machine unlearning – a new frontier


With so much talk about machine learning and its possible uses, it is time to mention its exact opposite. We have said many times that an ML model is only as good as the data it was trained on. But what happens when that data no longer serves us? The problem arises when the model has already been trained on it and you now have to exclude it or revert the process. From machine learning, we turn to machine unlearning.

Still a bit of a black box, machine unlearning is gaining traction. There are some scientific articles on it, but practical applications are still largely under wraps. Especially in large language models (LLMs), it can be tricky to make a model forget parts of the data set it was already trained on.

A theoretical approach to machine unlearning

Machine unlearning is an emerging subfield of machine learning whose goal is to make a model forget specific subsets of the data it was trained on or influenced by. It is the process of removing the effect of certain data points so that the model behaves as if it had been trained without that specific data or information.

The idea derives from data privacy laws and “the right to be forgotten”. Private data, and data whose security could be in jeopardy, is something we definitely don’t want inside ML or AI models. However, once a model has been trained on a data set (the training data), it is fairly difficult to revert the process without scrapping it and starting over, which is extremely costly and time-consuming.

Machine unlearning could be defined as the process of not only removing data and knowledge but also updating or adapting the trained model to ensure compliance with privacy and security requirements while preserving accuracy. Keep in mind that the outcome is never fully guaranteed: removing data can sometimes corrupt or break the model. If you remove certain data points, how can you be sure they weren’t integral to what made the model accurate in the first place?

Known approaches to machine unlearning

Despite being such an interesting concept, machine unlearning still needs further advancements. On a smaller scale, ML models can simply be retrained. Yes, it prolongs the time it takes to get the expected results, but you can be sure you’ve removed the unwanted data. With larger models, on the other hand, retraining is far more expensive and demands much more time and effort; for deep learning models and LLMs in particular, the cost and time quickly become prohibitive.

Machine unlearning methods can be grouped into two main categories: exact unlearning and approximate unlearning.

Exact unlearning removes certain data points (those we want the model to forget) by retraining the model from scratch without them. As mentioned above, this is expensive. It is also not sustainable, since you will most likely have to remove data points more than once: in large datasets, forgetting requests recur, so retraining the model every time is far from optimal.
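
To make the idea concrete, here is a minimal sketch of exact unlearning, assuming a small scikit-learn classifier; the dataset, model choice, and forget indices are all illustrative.

```python
# A minimal sketch of exact unlearning: drop the forget set and retrain
# from scratch. The dataset, model, and forget indices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
original = LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical indices of the points we are asked to forget.
forget_idx = np.array([3, 42, 317])

keep = np.ones(len(X), dtype=bool)
keep[forget_idx] = False
unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```

The retrained model has provably never seen the forgotten points, which is the appeal of exact unlearning; the full training run per forgetting request is the cost.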

Approximate unlearning methods instead try to approximate the model that would have resulted from training without the data it has to unlearn, without actually retraining from scratch. This is often done by introducing new data points to replace or overwrite the ones we want forgotten, or by identifying and excluding outliers from the data set.
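
One family of approximate methods nudges the trained model away from the forget set with a few gradient-ascent steps instead of a full retrain. The sketch below, in PyTorch, is a simplified illustration of that idea; the model, data, and hyperparameters are hypothetical.

```python
# A simplified approximate-unlearning idea: gradient *ascent* on the
# forget set, pushing the model away from what it learned there.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Hypothetical forget set: inputs and labels the model should unlearn.
forget_x = torch.randn(32, 20)
forget_y = torch.randint(0, 2, (32,))

for _ in range(10):  # a handful of unlearning steps, not a full retrain
    optimizer.zero_grad()
    loss = -loss_fn(model(forget_x), forget_y)  # negated loss: ascend
    loss.backward()
    optimizer.step()
```

In practice such steps are usually paired with fine-tuning on retained data to limit collateral damage, which ties into the amnesia problem discussed further below.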

Various methods are being tested, but it remains to be seen which of them will prove fully efficient and reliable. Machine unlearning is still under examination and experimentation, and there is no definitive set of best practices yet.

Why and where will it appear?

Privacy laws, copyright issues, and data poisoning are only some of the factors that influence the development of machine unlearning and the ever-growing need for it.

The need to remove unwanted or outdated information is driving advancements in unlearning. Data can also be labeled incorrectly, contain errors, be targeted by adversarial attacks, or be manipulated in some other way. In all of these cases you would want to remove such data from the model at any cost.

But you have to understand the data before trying your hand at any machine unlearning method. You need to know how a given data point influences the model. Will removing it affect the model’s accuracy and results? What is the reasoning behind forgetting that data, and will it set your model back?
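
A brute-force way to measure a single point’s influence, assuming a model small enough to retrain, is a leave-one-out comparison; the sketch below uses scikit-learn with purely illustrative data. Influence functions approximate the same quantity without retraining.

```python
# Leave-one-out influence check: retrain without one point and compare
# held-out accuracy. Feasible only for small models; all names are
# illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

point = 7  # hypothetical candidate for removal
mask = np.arange(len(X_tr)) != point
loo = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])
print(f"accuracy change from forgetting point {point}: "
      f"{loo.score(X_te, y_te) - base:+.4f}")
```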

Time moves on and data changes

Data and its value change over time. Something relevant yesterday might not be relevant today. For an ML or AI model to stay accurate, its data needs to stay fresh and up to date. Like everything else, data evolves.

Better control over our private data

“The right to be forgotten”, privacy laws, and other legislation all give private individuals the right to take control over their data. This in turn influences which parts of our data can be used in ML and AI models. And it’s not only about private information, but also about content, art, and intellectual property that the original creators never allowed those models to use.

Removing bias

Unfortunately, ML and AI models carry bias. Depending on the data ingested, they can produce unfair and skewed results. Models can learn that some attributes are better than others and thus produce inaccurate outputs that lead to discrimination, manipulation, or prejudice. By unlearning and removing the data that produced the bias, models can deliver more balanced results.

Data resource optimization

Ingesting vast amounts of data makes models more inflexible. Too much low-value data can strain memory resources; not all data points contribute to a model’s efficiency and accuracy, and models should prioritize essential information. With machine unlearning, you can make a model more agile and decrease latency. Unlearning also lets you, in a sense, air out the data and focus on what actually makes the model work.

Improving the model’s efficiency

Unlearning helps models focus on the core data that actually creates value and doesn’t strain performance. This speeds up the learning process and improves accuracy, and it consequently decreases complexity and model-management issues.

Unlearning is not simple

Machine unlearning is a difficult task. Removing data without consequences seems almost impossible: if a model has relied on certain data for some time, suddenly removing it proves tricky. There is still much to discover about unlearning as a method. As a testament to that, Google organized the first-ever Kaggle competition on machine unlearning to explore new ways of achieving data forgetting.

And it’s not only about the uncertain efficiency of such methods. Since LLMs use so much data, it can be challenging to determine how particular data points influence the model. With data poisoning especially, where you don’t immediately know whether some data was targeted, it can be hard to react in time. Even just pinpointing what needs to be excluded from the training data or the model is not easy.

A Jenga tower

Removing one piece of data could make the whole model unstable or even cause it to collapse; it depends on the piece you remove. If the model and its results depend heavily on the data set you need to exclude, you may lose integrity, accuracy, and efficiency. In the end, some data points are so interconnected and interdependent that touching one leaves consequences for the rest.

Retraining issues

Removing data influences how ML and AI models are built. The training process will differ from the original one because the data differs this time, and the original training methods may no longer apply. Removing some data also affects the remaining sets, which can change how AI and ML models learn.

Amnesia effect

Removing or forgetting some information while retaining useful knowledge is a balancing act. Depending on how interconnected the data is, the unlearning process can wipe out knowledge we never wanted to lose. The challenge is to avoid giving the model amnesia about important information by excluding parts that were integral to it.

Overfitting and generalization

Could removing data leave the model unable to handle inputs it hasn’t seen? The answer is yes. By removing certain data points, we risk creating a model that overfits the remaining training data and fails to generalize. If the reduced training data is no longer broad enough to cover all scenarios, the model will rely only on the limited data it was trained on and won’t recognize new inputs as it should, leading to wrong predictions and poor performance on new data. It’s important not to exclude vital information, because doing so can push the model toward overfitting.

The need for a tailored approach

Each ML or AI model is unique since it is created for a specific purpose and is based on different methods. 

There is a paper titled “Who’s Harry Potter? Approximate Unlearning in LLMs” in which the authors set out to make a model forget the Harry Potter books. One of their techniques was to eliminate terms and words specific to the books, so that the model would have no recollection of them. They declared the attempt successful, since their results were only marginally lower than those of the original model trained with that data. Critics, however, argue it wasn’t truly successful: because the method replaced Harry Potter-specific words with others, only the surface context changed while parts of the story effectively remained. The main takeaway is that the research had to combine multiple methods tailored to its specific goal to reach its results. We recommend reading the paper and weighing the pros and cons for yourself.
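
To illustrate just the term-replacement component mentioned above, here is a deliberately simplified sketch; the name mapping and sample text are invented, and the actual paper combines this with further steps such as generic prediction targets and fine-tuning.

```python
# Toy version of the term-replacement idea: map book-specific names to
# generic counterparts before using the text for unlearning fine-tuning.
import re

generic_map = {  # hypothetical mapping, not the paper's actual one
    "Harry": "Jon",
    "Hermione": "Clara",
    "Hogwarts": "the academy",
}

pattern = re.compile("|".join(map(re.escape, generic_map)))

def anonymize(text: str) -> str:
    return pattern.sub(lambda m: generic_map[m.group(0)], text)

print(anonymize("Harry and Hermione returned to Hogwarts."))
# -> Jon and Clara returned to the academy.
```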

What’s in store for machine unlearning

As mentioned many times already, machine unlearning is just taking flight. It is not yet a fully recognized subfield of machine learning, approximate unlearning in particular. You can find scientific research experimenting with it, and each paper proposes a different approach; some were successful, some were not. The point of all of them is to discover reliable methods of excluding data from models without fully retraining them, since retraining is expensive and time-consuming, especially for LLMs.

Data poisoning is already being used as a copyright-protection tactic to corrupt certain data points, and creators are demanding that their work be excluded from models. Presumably, more and more people will want their content and their data removed. If that persists, and it will, there needs to be a functional and efficient way of doing so while preserving the model’s value and performance. In a world that is rapidly implementing and accepting ML and AI, there should be a way of forgetting data while keeping the benefits these models provide.
