Understanding Foundation Models and Their Impact on Domain-Specific Tasks

January 7, 2024

Scientist

 


For domain-specific tasks, should we fine-tune a foundation model or build a traditional model from scratch?

Traditional modelling.

Before foundation models, machine learning was predominantly about creating task-oriented models. These models were designed for a specific purpose and share a few key characteristics: each one is built around a single, narrowly defined task; it typically relies on labelled, task-specific training data; and a new model has to be developed from scratch for every new task.
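To make the contrast concrete, a traditional task-specific model might look like the following minimal sketch, assuming scikit-learn and a small hand-labelled sentiment dataset (the data here is purely illustrative).

```python
# A minimal sketch of a traditional, task-specific model: hand-labelled data,
# task-specific features, and a model built from scratch for this one purpose.
# Assumes scikit-learn is installed; the data is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A real project would need far more labelled examples than this.
texts = ["great product, works perfectly", "terrible, broke after a day"]
labels = [1, 0]  # 1 = positive, 0 = negative

# Features and classifier are chosen specifically for this single task.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["works great, very pleased"]))  # e.g. [1]
```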

Foundation models.

Foundation models are a class of AI models characterised by their extensive training on large, diverse datasets, enabling them to develop a broad understanding of many subjects and contexts. This comprehensive training makes them versatile and adaptable to applications well beyond the scope of their original training.

One class of foundation model is the large language model (LLM). Trained on a large corpus of text, LLMs can be fine-tuned to become task-oriented. For example, an LLM could be fine-tuned for sentiment analysis, where a sentence is submitted and the sentiment of that sentence is returned. While LLMs deal specifically with language data, foundation models can encompass other domains as well, such as vision or audio.
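As a concrete illustration, a pretrained transformer that has already been fine-tuned for sentiment analysis can be used in a few lines. This sketch assumes the Hugging Face transformers library and its default sentiment-analysis checkpoint; the input sentence is just an example.

```python
# A minimal sketch using a pretrained model fine-tuned for sentiment analysis.
# Assumes the Hugging Face transformers library is installed.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default fine-tuned checkpoint
result = classifier("The new update made everything so much easier.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```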

Foundation models aim to replace this traditional way of modelling with a single model capable of handling many different tasks. Trained on vast amounts of unlabelled, unstructured data, they can then be adapted to a wide variety of tasks.

So while the end result of having a model perform a specific task is similar to the traditional way of creating task-specific models, the methodology and efficiency are different. Foundation models offer a more scalable and flexible approach, as they can be adapted to multiple tasks without the need to start the development process from the beginning for each new task.

 


 

So how do we make foundation models task-oriented?

Making foundation models task-oriented.

The most common approach is fine-tuning: the pretrained foundation model is further trained on a smaller, labelled, task-specific dataset, so its broad general understanding is adapted to the task at hand, such as the sentiment analysis example above.
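Below is a minimal fine-tuning sketch, assuming the Hugging Face transformers and datasets libraries; the base model, dataset and training settings are illustrative choices, not a prescription.

```python
# A minimal fine-tuning sketch: adapt a pretrained foundation model to a single task.
# Assumes the Hugging Face transformers and datasets libraries are installed;
# the model and dataset used here are illustrative choices.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small pretrained foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small labelled dataset for the target task (sentiment, in this example).
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # after training, the general-purpose model is task-oriented
```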

So what are the pros and cons of fine-tuning a foundation model over building a traditional model?

 


Pros.

Fine-tuning a foundation model lets a single pretrained model be adapted to many different tasks, often achieving strong performance from relatively little task-specific data, so development is faster, more scalable and more flexible than building each model from scratch.

Cons.

Foundation models are large, which makes them computationally expensive to train, fine-tune and serve, and they can be slower and costlier at inference time than a small, purpose-built model.

Overall, while foundation models offer significant advantages in terms of performance and versatility, they also bring many challenges.

It is also important to recognise that foundation models extend beyond the realm of language processing. Some are trained with a greater emphasis on particular data types, such as images or audio, and therefore develop a more nuanced understanding of those domains.

 


 

Some other applications of foundation models.

Vision foundation models, for example, can be adapted to tasks such as image classification or object detection, while audio foundation models can be adapted to tasks such as speech recognition and transcription.

 

The shift towards foundation models represents a significant change from the traditional approach. Foundation models are designed to be more general-purpose, trained on vast and varied datasets. They can be adapted with fine-tuning to perform a wide range of tasks, reducing the need for a specialised model for each task and offering more flexibility and scalability.

However, there are trade-offs, and with the ever-changing AI landscape we can expect smaller, faster and cheaper foundation models to be the focus going forward.