
The transformative power of generative AI

Generative AI (GenAI) promises companies a 30 to 50% increase in efficiency and a 10 to 20% increase in productivity. But how does it actually produce results? Where is its use worthwhile? What do companies need to consider, and does it always have to be a big foundation model? We took a closer look at all of this.


The launch of ChatGPT in 2022 marks a turning point in technology history, akin to the internet or mobile phones. Leading this wave of innovation, characterized by increasing quality and diversity of applications, are companies such as Google, Amazon, and Microsoft. They are pushing the boundaries of what’s possible through significant investments in Foundation Models, transforming how individuals and increasingly businesses work, communicate, and interact. But how does generative AI work, and what makes it so special?

How generative AI models work

Generative AI models are a class of algorithms in artificial intelligence that are trained to generate content independently on the basis of corresponding training data. This content can be, for example, text, images, music or video, and resembles the material the model was trained on, down to individual characteristics such as writing style or voice modulation. Common examples of GenAI include the GPT series (Generative Pre-trained Transformer, OpenAI) and BERT (Bidirectional Encoder Representations from Transformers, Google).

The development of generative AI models begins with the selection of a suitable machine learning approach. These include:

  • GANs (Generative Adversarial Networks): They generate realistic data through the interaction of a generator and a discriminator that evaluates authenticity.
  • VAEs (Variational Autoencoders): These networks encode data and generate new, varying data.
  • Transformer models: They recognize patterns in large amounts of data and are particularly effective in generating new texts.
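
To make the transformer approach a little more tangible, here is a minimal sketch that generates text with a small pre-trained model. The Hugging Face transformers library and the GPT-2 model are our choices for illustration, not something prescribed by the article.

```python
# Minimal text generation with a pre-trained transformer model.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, publicly available model (GPT-2), chosen purely for demonstration.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text that resembles its training data.
result = generator(
    "Generative AI helps companies to",
    max_new_tokens=40,        # limit the length of the generated continuation
    num_return_sequences=1,   # generate a single completion
    do_sample=True,           # sample instead of greedy decoding for variety
)
print(result[0]["generated_text"])
```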

Due to the large amounts of data, training requires considerable computing resources. Models such as GPT-3 have been trained with hundreds of gigabytes of text data to develop a deep understanding of human language.

In order to generate appropriate responses, the AI represents the training data in high-dimensional data structures (vectors) that encode the features or properties of the data. Each feature of a data element is mapped to a vector, which acts as a kind of numerical fingerprint of that feature. When generating content, the model then uses these learned vector representations to create new data points that resemble the trained ones.
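
As an illustration of this vector representation, the following sketch encodes a few sentences as embeddings and compares them by cosine similarity. The sentence-transformers library and the model name are assumptions chosen for the example and are not mentioned in the article.

```python
# Sketch: representing text as high-dimensional vectors (embeddings)
# and measuring how similar they are.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# A small, publicly available embedding model (chosen for illustration).
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The invoice was paid last week.",
    "Payment for the invoice was completed recently.",
    "The new machine increases production throughput.",
]

# Each sentence becomes a numerical "fingerprint" (a 384-dimensional vector here).
embeddings = model.encode(sentences, convert_to_tensor=True)

# Semantically similar sentences end up close together in vector space.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)  # the first two sentences score higher with each other than with the third
```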

Generative AI in companies

A recent study by Deloitte² shows that of 2,800 executives surveyed worldwide, 79% expect GenAI to drive significant change in their organizations within less than three years. 44% of respondents also believe that their company currently has a high (35%) or very high (9%) level of expertise in the field of generative AI.

According to the Applied AI 2023 study³ by IDG, companies in the DACH region are also well prepared for the use of generative AI. A majority of the companies surveyed have already set specific budgets for AI and made corresponding investments. The companies surveyed are hoping for automated processes, enhanced customer communication and the promotion of creativity.

In the future, the potential will primarily affect complex and highly paid professions

According to a McKinsey study from 2023⁴, GenAI will accelerate changes in the world of work. What sets it apart is that, unlike previous technological leaps, the focus is not on physical processes. Instead, the potential lies in the combination of humans and AI, because the technology compensates for existing human deficits in knowledge and experience and maximizes quality and output. The greatest potential for automation lies in areas of work that require a bachelor's or master's degree or a doctorate: instead of the previously assumed 28%, GenAI could automate 57% of these jobs by 2030. Teachers (38%), IT professions (31%) and creative professions (24%) have particularly high automation potential. Physical occupations, at 5%, are barely affected.

Examples of the use of GenAI in companies

The use of AI in companies is primarily concerned with the challenge of penetrating large volumes of data: extracting or classifying information or entities from structured, semi-structured or unstructured data in order to control business processes on that basis. None of this is generative yet, however. The situation is different when you ask the AI to summarize meeting minutes, for example. Wherever something new needs to be created, GenAI comes into play. Its capabilities are already being used in the following areas:

Quality assurance in production:
58% of the 300 companies surveyed in the IDG study³ use generative AI in this area. One example is the development of efficient processes by analyzing historical data.

Customer service:
The use of chatbots and virtual assistants to answer customer queries and personalize interactions is already being used by 54% of companies.

Automation of processes:
50% of companies use such services, which range from automatically generated emails to AI systems that sequence workpieces on production equipment so that optimum throughput is achieved.

Code creation and maintenance:
Generative AI can be used to automate software development processes, including writing, reviewing and optimizing code. Tools like GitHub Copilot are already using AI to help developers write high-quality code faster and more efficiently. This technology can not only increase productivity, but also help reduce errors and improve overall code quality.

Creating training data:
An innovative use of generative AI is the creation of synthetic training data, which improves the performance of other AI systems built on top of it.

Other application areas:
Content production and knowledge management (automatic creation of texts and documentation); design and creative processes; personalization of products and services

First steps for use in the company

In order to use GenAI successfully, companies primarily need a clear business objective and an understanding of which type of AI is required for which use case. After all, an expensive generative model is not always necessary.

In addition, companies need to consider how they will make the data available, especially if queries and tasks relate to specific company information (context) that is not publicly accessible to the AI.

Way 1: Use foundation models

Large and very flexible models are useful when companies first want to fundamentally validate a use case or support decision-making. As soon as the models need to be trained specifically on the company context, smaller and therefore more cost-effective models often make more sense. In this case, the costs and effort lie primarily in training the model and adapting it to the specific use case.

Way 2: Retrieval Augmented Generation (RAG)

One option is to provide the context as part of the query (prompt). The principle behind this is called Retrieval Augmented Generation (RAG): relevant documents are retrieved from a database that is accessible to the AI and included in the prompt, so that the model generates its answers based on this context.
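
A minimal sketch of the retrieval and prompt-assembly step, reusing the sentence-transformers embedding model from the earlier example; the documents and the question are invented, and the resulting prompt would then be sent to whichever generative model the company actually uses.

```python
# Sketch of Retrieval Augmented Generation (RAG):
# 1. embed the company documents, 2. retrieve the most relevant ones for a query,
# 3. pass them to the language model as context inside the prompt.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # embedding model, chosen for illustration

documents = [
    "Travel expenses must be submitted within 30 days.",
    "The support hotline is available weekdays from 8 am to 6 pm.",
    "Production line 3 is maintained every second Friday.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the top_k most similar documents and embed them in the prompt."""
    query_embedding = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    best = scores.topk(top_k).indices.tolist()
    context = "\n".join(documents[i] for i in best)
    return f"Answer using only the following context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How quickly do I have to hand in travel expenses?")
print(prompt)  # this prompt is then sent to the generative model of choice
```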

Way 3: Provide data as vectors

Another way to save costs is to divide the data and information into small elements (tokens) and vectorize them, i.e. make them available as so-called embeddings. The data is thus structured in the same way the AI itself would prepare it during analysis. Frameworks for this increasingly popular technique already exist and can be used by companies.
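
FAISS is one example of such a framework; the choice is ours and is not named in the article. A minimal sketch of storing vectorized data in an index and searching it might look like this (the vectors are random placeholders, where in practice they would come from an embedding model):

```python
# Sketch: storing document embeddings in a vector index and querying it.
# Requires: pip install faiss-cpu numpy
import faiss
import numpy as np

dim = 384  # dimensionality of the embeddings (depends on the embedding model used)

# In practice these vectors would come from an embedding model; random data here.
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)   # exact nearest-neighbour search by L2 distance
index.add(doc_vectors)           # store the vectorized company data

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)  # retrieve the 5 closest documents
print(ids[0])  # indices of the documents to feed into the prompt as context
```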

Way 4: Develop your own foundation models

In contrast, the next stage of development does not rely on existing models but involves building customized foundation models of one's own. The costs for this are enormous, and the challenge lies in providing a sufficient amount of high-quality training data. The advantage is that the specific company context is already known to the model and does not have to be supplied with each new prompt. This enables companies to use their specific data sets even more effectively and to develop AI solutions that are deeply integrated into their own business model. Another advantage is that data does not have to be disclosed. However, due to the effort and high costs involved, this approach is only worthwhile for highly scalable business models.
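
To give an impression of what this path involves technically, here is a deliberately toy-sized sketch of pre-training a small GPT-style model from scratch with the Hugging Face transformers library. All model sizes, names and the mini corpus are our assumptions; real foundation models use vastly more data, parameters and compute.

```python
# Highly simplified sketch: pre-training a small causal language model from scratch.
# Requires: pip install transformers datasets torch
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

# Reuse an existing tokenizer; real company texts would replace this toy corpus.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
texts = ["Example company document ...", "Another internal text ..."] * 100

dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# A deliberately tiny configuration so the sketch runs on ordinary hardware.
config = GPT2Config(vocab_size=tokenizer.vocab_size, n_layer=2, n_head=2, n_embd=128)
model = GPT2LMHeadModel(config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-model", num_train_epochs=1,
                           per_device_train_batch_size=8, report_to=[]),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the expensive step: cost scales with data volume and model size
```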

Corresponding AI models for all of the aforementioned approaches can be found on the platforms of the major hyperscalers, at Hugging Face (open source platform), AI model libraries such as IBM’s and numerous other providers.

Challenges when using generative AI in the company

The industrial use of generative AI brings with it both enormous opportunities and significant challenges. And it differs fundamentally from use in the private sphere: while you can have an AI-generated painting produced for your own living room without hesitation, customers demand transparency, legal certainty and the protection of personal rights. The Deloitte survey mentioned above² paints a worrying picture here. Only 22% of the 2,800 managers surveyed worldwide, including 150 AI experts, believe their company is well prepared for the use of AI. There are particular deficits in the availability of qualified specialists and in governance and risk issues. Trust in the output of AI (23%), the protection of intellectual property (35%) and compliance with regulations (33%) are considered particularly critical.

Governance

Aside from various existing regulations, the EU AI Act will certainly have the greatest impact on how companies deal with GenAI, at the latest once it is incorporated into national law. Companies would therefore do well to subject their products, services, etc. to a risk assessment right from the start. In addition, companies must make their use cases traceable, i.e. disclose which data was used, where it is located and which model was used for training and evaluation.

Legal hurdles and ethical concerns

1. Data protection:

The use of personal data to train generative AI raises significant data protection issues, particularly with regard to the GDPR in Europe and other international data protection laws.

2. Copyright and intellectual property:

When an AI generates content based on existing works, copyright issues may arise. Who owns the copyright to an AI-generated work? This question is still unresolved in many jurisdictions.

3. Liability:

If an AI produces erroneous or harmful results, it is often difficult to determine who is held liable – developers, users, the company or the AI itself?

4. Transparency and explainability:

AI systems, especially those based on deep learning, are often “black boxes” whose decision-making is not transparent. This can lead to problems with acceptance and trust in AI applications.

Data and result quality

Aside from this, it is just as important to guarantee the quality of results. Companies must therefore constantly ask themselves how good their models and data are over time. One of the main challenges is data management: generative AI models are heavily dependent on the quality of the training data. Inaccurate or outdated data can significantly impair the results. There are also security risks, as AI systems that learn online are susceptible to manipulation such as data poisoning. The scalability of such systems also poses a technical and financial challenge, while the increasing reliance on AI technologies can also lead to a loss of human expertise, which becomes particularly problematic when systems fail or deliver unexpected results.

Strategies for companies to overcome these challenges

Companies that take early action here have a clear advantage. They can ensure the accuracy and integrity of their AI systems by establishing strict data governance and security strategies. In addition, ensuring legal compliance is crucial in order to meet legal requirements. The development of ethical guidelines and the promotion of transparency and explainability of AI decisions are also essential to strengthen trust and acceptance on the customer side as well as within the company.

Conclusion

Generative AI is on the cusp of widespread acceptance in the business world. While the technology is still in its infancy, the potential for companies that invest in it early on is enormous. Or to put it another way: companies, and software companies in particular, that do not adopt AI early will not be able to survive in the market. The challenges are not insignificant, but the prospects of efficiency gains, cost reductions, new business opportunities and a way to counteract the shortage of skilled workers are simply too tempting to ignore.

 

_ _ _ _ _ _

Definition of terms

The terms machine learning, deep learning, foundation models, generative AI and transformer models are closely linked and form a hierarchy in the development and application of artificial intelligence. Here is an explanation that clarifies both the distinctions and the connections between these concepts:

Machine learning

Machine learning is an area of artificial intelligence that involves algorithms that can learn and make predictions based on data. These algorithms improve their performance as the amount of data increases, without the need to explicitly program how a task should be performed. Machine learning includes different approaches such as supervised learning, unsupervised learning and reinforcement learning.
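
As a concrete illustration of the supervised approach, a few lines with scikit-learn (our choice of library for the example, not named in the article) that learn to classify flowers from labelled data:

```python
# Minimal supervised learning example: the algorithm learns from labelled data
# and improves its predictions without being explicitly programmed for the task.
# Requires: pip install scikit-learn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                     # features and known labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=500).fit(X_train, y_train)  # learn from the data
print("Accuracy on unseen data:", model.score(X_test, y_test))
```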

Deep learning

Deep learning is a specialized branch of machine learning and uses multilayer neural networks to recognize complex patterns in large amounts of data. These deep neural networks can consist of hundreds or thousands of layers, with each layer learning specific features of the input data and passing them on to the next layer. Deep learning is particularly effective in areas such as image and speech recognition.

Foundation models

Foundation models are a type of large neural network that have been pre-trained on a wide variety of data (text, images, etc.) and have the ability to learn a variety of tasks on this basis. These models are often very large and require considerable computing resources for training. After pre-training, they can be fine-tuned for specific applications through further training. They are a further development of deep learning in that they work on very large amounts of data and with very deep network structures.

Generative AI (GenAI)

Generative AI refers to AI systems that are able to generate content similar to that created by humans. These systems often use foundation models and specialize in generating new data that is similar to the trained examples. Examples of generative AI applications include the generation of text, music, images or videos. These technologies rely heavily on deep learning techniques to learn how to reproduce data in a realistic way.

Transformer models (self-supervised training)

Transformer models are a special class of deep learning architectures that are particularly effective for processing sequential data such as speech and text. They are based on a mechanism known as “self-attention”, which allows the model to learn weightings indicating which parts of an input are important relative to the others. Many transformer models are trained in a self-supervised way, which means that they learn from the input data itself without the need for external annotations or labels. This makes them a flexible and effective basis for training foundation models.
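
A minimal numerical sketch of the scaled dot-product self-attention mechanism described above, written in NumPy. The projection matrices are random here purely for illustration; in a trained model they are learned parameters.

```python
# Sketch of scaled dot-product self-attention: every position in the sequence
# computes a weighting over all positions and mixes their values accordingly.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model = 4, 8                     # 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))     # token embeddings (random for the sketch)

# In a trained transformer these projection matrices are learned parameters.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
weights = softmax(Q @ K.T / np.sqrt(d_model))  # how important each token is to every other token
output = weights @ V                           # context-aware representation of the sequence
print(weights.round(2))
```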

_ _ _ _ _ _

More on the topic:

👉 Podcast on the EU AI Act (German)

_ _ _ _ _ _

Sources:

1 Turning GenAI Magic into Business Impact | BCG

2 Generative AI can become a productivity booster | McKinsey & Company

3 IDG Study Applied AI 2023

4 KI Study 2024: Accelerating AI Transformation

_ _ _ _

This article was written exclusively for our magazine NEXT “In the spotlight: Software”.

👉 To the full issue of the magazine
