
Generative AI in business: use cases and best practices

Generative AI is a rapidly emerging technology that companies can leverage to increase their efficiency and develop new applications. But how can organisations make good use of this technology? And what are the challenges and limitations? This article helps provide context through practical experience and examples.

The big question currently preoccupying every company is: how can we make good use of generative artificial intelligence? What are the benefits of these language models? It’s not that the business world has no experience with artificial intelligence; rather, that experience has so far been limited to ‘traditional’ AI with self-trained models based on internal data.

Originally developed by Google, the transformer architecture behind the large language models (LLMs) is now taking AI to the next level, into the age of generative AI (also known as GenAI). These models focus on understanding and generating text, language, images and, increasingly, multimedia content. The big difference between LLMs like OpenAI’s GPT, Google Gemini or Claude from Anthropic and traditional AI is that the LLMs come trained and can be easily used as an ‘off the shelf’ service.

Simple? Unfortunately, it’s not that simple. Proper preparation is required and, above all, an idea of what generative AI should be used for. Broadly speaking, we can distinguish between three areas of application that equate to increasing levels of difficulty or complexity:

  1. Integrating AI services into the workplace: This involves integrating and deploying ready-to-use services such as ChatGPT, Copilot for Microsoft 365 and the Midjourney image generator.
  2. Integrating existing LLMs into internal applications: This is where a company adds internal data to an existing cloud model in order to build its own applications, such as ‘intelligent’ customer service chatbots.
  3. Training and operating models with internal data: Here, a company uses the basic capability of LLMs – text and language processing – to develop applications entirely of its own and, if necessary, operate them itself. In addition, the models are trained with internal, topic-specific data.

This article looks at the opportunities, challenges and limitations of generative AI and shows how it can be used.

Level 1: Using AI services in a business context

The main reason for using AI services in a company is to give employees access to generative AI so that they can work more efficiently. This field of AI as a Service is dominated by big tech: OpenAI with GPT, Microsoft with Copilot for M365 and Google with Gemini.

Copilot for M365 in PowerPoint
Typical business use case for generative AI: Copilot helps employees to create PowerPoint presentations. (Image: Microsoft)

‘The barriers to entry are relatively low. Some investment is needed for licences, and compliance requirements must be met. But I can use the services without in-depth IT knowledge,’ explains Tim Giger, Consulting Leader Data & AI at Swisscom. The offerings integrate into and expand the existing landscape of office applications.

Use case: everyday business use of Copilot

A survey of the Copilot for M365 pilot group at Swisscom shows how this works in practice. The participants use Copilot primarily as a work aid in Office applications. The following scenarios were mentioned frequently:

  • Writing and translating e-mails and internal documentation
  • Summarising texts and e-mails
  • Facilitating the use of Office applications by having Copilot create Excel formulas or turn lists into PowerPoint presentations
  • Quickly preparing and summarising technical, operational and strategic topics

The participants appreciated that Copilot makes them quicker, more efficient and more productive. This is especially true when creating texts and scripts. However, the pilot trial also revealed limitations. For example, Copilot sometimes delivers unsatisfactory results when using company data. This is due in some cases to the quality of the data and in others to the prompts. A certain understanding of prompting (prompt engineering) or employee training is required to achieve usable results with Copilot. Dialects are also an issue at this time: the AI assistant could not get to grips with Swiss German when summarising online meetings in Teams.

Discover the world of Microsoft 365 Copilot with Swisscom

We offer companies comprehensive support for all aspects of Microsoft 365 Copilot. We accompany you during the introduction and work with you to develop deployment scenarios.

Level 2: Integrating existing LLMs into internal applications

This level pivots away from AI assistants in the workplace and towards integrating AI functionality into internal applications. Existing, pre-trained models offered as a cloud service are used for this. A typical example is text and language processing using an LLM that is controlled via an API (application programming interface). ‘This is where companies typically purchase services from a hyperscaler such as Azure, AWS or Google Cloud,’ says Tim Giger. ‘These foundation models can, for example, be supplemented with internal data using retrieval-augmented generation (RAG).’
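To make this more concrete, the following minimal Python sketch shows the basic RAG flow: retrieve relevant internal passages, then pass them to a hosted model as context. It assumes the OpenAI Python SDK against an Azure OpenAI deployment; the endpoint, model name and the `search_documents()` helper are illustrative placeholders, not part of any specific setup.

```python
# Minimal sketch of retrieval-augmented generation (RAG) against a hosted LLM.
# Assumptions: OpenAI Python SDK (v1) with an Azure OpenAI deployment, plus a
# hypothetical search_documents() helper that queries an internal document index.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # placeholder endpoint
    api_key="...",                                      # placeholder credential
    api_version="2024-02-01",
)

def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical helper: return the most relevant internal text passages.
    In practice this would query a vector or enterprise search index."""
    raise NotImplementedError

def answer_with_rag(question: str) -> str:
    passages = search_documents(question)
    context = "\n\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4o",  # name of the deployed model (placeholder)
        messages=[
            {"role": "system",
             "content": "Answer only from the provided company documents. "
                        "If the answer is not in them, say so."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

In a real application, the retrieval step is usually backed by a vector index or an enterprise search over the company’s own documents.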

The AI component is not directly visible to users of such applications; they interact with it through a company platform, such as the website or intranet. These AI-supported applications can be aimed at both employees and customers.

More time and effort are needed than for AI assistants integrated into the workplace. ‘A certain amount of expertise is required to connect these models to my application,’ says Tim Giger. ‘But I don’t need in-depth knowledge about how to train and operate the models themselves.’

Use case 1: chatbot with generative AI

The most obvious application for generative AI at this level is an ‘intelligent’ chatbot, for example in customer service or sales. ‘We’re talking here about traditional chatbots or voice bots that don’t, however, just direct the person asking the query, but understand and can process the request themselves,’ says Kai Duttle, Consultant at Swisscom’s Interaction & Process Management. Such applications are also referred to as conversational AI.

Generative AI can help enhance the capabilities of existing chatbots and relieve employees of routine tasks. It is crucial to keep the benefits in view, says Kai Duttle: ‘We need to use the technology in a way that improves customer value rather than just for the sake of it.’
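As a rough illustration of what such a bot looks like in code, the sketch below keeps a running dialogue history and sends it to a hosted model on every turn. It again assumes the OpenAI Python SDK; the system prompt and model name are purely illustrative.

```python
# Minimal sketch of a multi-turn customer-service chatbot loop.
# Assumptions: OpenAI Python SDK; system prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "system",
             "content": "You are a customer service assistant. Be concise and "
                        "hand over to a human agent when you are unsure."}]

while True:
    user_input = input("Customer: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the dialogue history
    print("Bot:", answer)
```

The key difference from a one-off query is the growing message history, which lets the bot understand follow-up questions in context.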

Use case 2: processing recorded conversations

Calls recorded in a contact centre (with consent) can be transcribed and summarised using generative AI (a minimal pipeline sketch follows the list below). The results can be used in many ways:

  • As information in the support process for other employees if someone calls again
  • For sentiment analysis of the calls to get an idea of the general mood
  • To gain insights about the topics that come up frequently on the hotline
    This can be used in contact centre operation and development, whether for training employees or automating processes for frequent support topics.
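As a rough sketch of such a pipeline, the example below transcribes a recorded call and then summarises it with an LLM. It assumes the OpenAI Python SDK with its hosted speech-to-text and chat models; the file name and model names are placeholders.

```python
# Minimal sketch: transcribe a recorded call and summarise it with an LLM.
# Assumptions: OpenAI Python SDK; file name and model names are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Speech-to-text: transcribe the (consented) call recording
with open("call_recording.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Summarise the transcript for the support process
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarise the customer call: reason for the call, "
                    "resolution, open follow-ups and overall sentiment."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)
```

Note that hosted speech-to-text models may struggle with dialects such as Swiss German, the same limitation observed with Copilot in Teams.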

Use case 3: knowledge database and enterprise search

Employees in every large company probably spend a lot of time looking for internal information. An AI-supported knowledge database fed with all relevant information can significantly reduce this effort and possibly also replace an internal search function. Technically, this knowledge database is the same as the chatbot from the first use case. The implementation process is also similar.

ReThink Conversational AI Workshop

Discover the future of customer service. Find out how artificial intelligence can make your customer service more efficient and improve the customer experience. In this workshop, you will work with Swisscom experts to develop a strategy for implementing conversational AI as a basis for further development.

Level 3: Training models with internal data and operating them in-house

The biggest difference between the second and third levels is that at Level 3 a language model is trained for a particular task using internal data that is specific to a company or industry. The company often operates the model itself as well. The aim is to develop an application for a specific purpose. This differs from Level 2 applications, which rely on a ‘generalist’ language model.

The approach and effort are the same as for machine learning and deep learning models. ‘I require specific expertise for this, such as in Python programming. And I need to understand how the models work and how I can make sure I have the right inputs and outputs,’ says Tim Giger.

At this level, a company takes care of the entire life cycle itself, from training to operating the models and applications. This usually involves specialist cloud services from hyperscalers, such as Amazon SageMaker or Azure Machine Learning, combined with open-source models such as Meta Llama and Mistral AI’s Mixtral. ‘This approach requires a lot of development work and is highly specialised in its application,’ says Tim Giger. ‘We’re only seeing a few use cases for this here in Switzerland at the moment.’
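To give a feel for the kind of work this level entails, the sketch below fine-tunes an open-source causal language model on internal text using the Hugging Face libraries. The base model, data path and hyperparameters are placeholders; a production setup would typically run on managed services such as Amazon SageMaker or Azure Machine Learning and often uses parameter-efficient fine-tuning to keep costs down.

```python
# Minimal sketch: domain-specific fine-tuning of an open-source causal LM
# with the Hugging Face libraries. Model name, data path and hyperparameters
# are placeholders chosen for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"  # example open-source base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Internal, topic-specific text data (placeholder path)
dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # mlm=False -> standard next-token (causal) language modelling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model()
```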

Challenges of using generative AI

Even though pre-trained LLMs are simpler to use than traditional AI, there is still work involved in putting them to good use. Certain challenges must be overcome, particularly at the data level. These are the most important hurdles and stumbling blocks that companies need to consider when using generative AI:

  • Poor data quality: Whether a chatbot for customers, a knowledge database for employees or the Copilot button in Microsoft applications: all of these (and their users) benefit from company data. The data does not have to be structured, but it should be correct: outdated and invalid data distorts the results of AI models. ‘It makes sense to tidy up data and folder structures beforehand,’ recommends Tim Giger. This also prevents the AI from accidentally accessing confidential information.
  • Results are accepted unchecked: Chatbot hallucinations should preferably not reach users. In the case of AI-supported applications, the output (and, where appropriate, the input) should be checked and filtered before it is shown, to minimise unwanted results (see the sketch after this list). Employees should also critically scrutinise or review the output from Copilot or ChatGPT; we recommend suitable awareness-raising measures. Otherwise, employees will tend to accept what they get from the AI without further checking.
  • AI governance: How high the error rate may be or how much hallucination is still beneficial is also a question of governance. This regulates the level of tolerance of errors in order to clarify liability issues and the possible legal consequences of false statements. Or, as Tim Giger sums it up: ‘If the error rate has to be kept to a minimum, it might not be the right use case for generative AI.’
  • Lack of benchmarks and traceability: With generative AI, it is not possible to determine or achieve a statistical precision in the results, such as ‘80% correct’. And output traceability is difficult, because the output can vary even when the same inputs are used. Accordingly, there are no standardised benchmarks for commercial use that allow for measurement of the quality of results and selection of a suitable LLM. This uncertainty needs to be taken into account for AI applications.
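To make the idea of checking output more tangible, the sketch below shows a very simple, hand-rolled filter applied to an LLM answer before it reaches the user. The blocked terms and the grounding heuristic are purely illustrative assumptions; real deployments would typically rely on dedicated moderation or guardrail services.

```python
# Minimal, purely illustrative guardrail: check an LLM answer before showing it.
# The blocked terms and the grounding heuristic are placeholder assumptions.
BLOCKED_TERMS = {"internal use only", "confidential"}

def is_grounded(answer: str, source_passages: list[str]) -> bool:
    """Crude heuristic: require some word overlap between the answer and the
    retrieved passages; real systems would use a proper grounding check."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(source_passages).lower().split())
    return len(answer_words & source_words) / max(len(answer_words), 1) > 0.3

def filter_output(answer: str, source_passages: list[str]) -> str:
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        return "I can't share that information. Please contact support."
    if not is_grounded(answer, source_passages):
        return "I'm not sure about this one. A colleague will follow up with you."
    return answer
```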

Where do we currently stand with the use of generative AI?

Despite the rapid technological development of the large language models, we can say with certainty where companies stand today. ‘The biggest benefits right now are in terms of efficiency. Organisations are using generative AI to work more efficiently,’ says Tim Giger. Many companies are at Level 1 – they are using AI services or are planning to introduce them.

According to Kai Duttle, in addition to its benefits when moving towards a digital workplace, generative AI is especially useful for improving customer service at this time: ‘Chatbots and language – this is the area in which generative AI is most mature and brings the greatest benefits.’

MSM Research: artificial intelligence

Artificial intelligence on the way to becoming a game changer: Find out in the latest study by MSM Research how Swiss companies use AI and what you should consider when using it.

Read now