Artificial intelligence brings great opportunities – and risks – for organisations. In this interview, Anne-Sophie Morand, lawyer and Data Governance Counsel at Swisscom, explains how AI governance can help manage these risks and build greater trust among customers.
Do you use GenAI services in your private life? And if you do, for what purpose?
Yes, I use GenAI services such as ChatGPT, DeepL and DALL-E in my private life. With ChatGPT, I like being able to brainstorm and generate ideas. I regularly use DeepL for quick translations into different languages or to check my own communications in foreign languages. And with DALL-E, I recently generated a suitable image for my website www.ki-recht.ch. All in all, these GenAI services help to make my private life more efficient and to support creative processes.
What do we need to be aware of when using such GenAI services privately?
When using GenAI services privately, it’s important to know the capabilities and limitations of these tools. While they can deliver impressive results, we should not blindly trust automatically generated content. It’s always worth checking the results, as they are not always accurate and may include incorrect or inappropriate information.
When I’m using these services privately, I take care to protect my own privacy and that of others, and I avoid revealing more than necessary. For example, I deliberately refrain from entering sensitive personal information such as names, addresses and other private details into these systems.
Is it acceptable to use the GenAI services we’re familiar with from our private lives in our work as well?
It depends on the individual case. At Swisscom, business use of private AI-based tools is governed by an internal instruction on the use of ICT resources. Before using such a service, employees must therefore always make sure that the company holds the necessary licence and the right to use the output. This applies both to software and services that employees pay for themselves and to their free counterparts. So a privately used tool cannot simply be used for work without first checking whether business use is permitted and what kinds of data may be fed into it. In any case, preference should be given to the internal tools already available, such as SwisscomGPT.
Beyond the private use of GenAI services, we particularly need to discuss the risks that arise when employees and companies use AI and feed it sensitive data. What are the dangers of AI systems?
Firstly, using AI systems can create a compliance risk. Data protection is a serious challenge, as AI systems often process large amounts of data that may contain personal information. Data fed into AI systems is also regularly used for further development, so it flows into the system and becomes part of it. To what extent can affected customers, for example, still exercise their right to have their data deleted?
Other areas of law to be observed include confidentiality, intellectual property rights (e.g. copyright), protection of privacy, anti-discrimination law, consumer protection law and competition law. Depending on the specific case, the use of AI systems may violate various legal standards and lead to lawsuits and fines. Furthermore, a company may commit a classic breach of contract vis-à-vis business customers if, for example, it is contractually prohibited from placing customer data in a cloud but nevertheless uses cloud-based AI systems.
Secondly, using AI systems can create security risks. For example, attacks may be aimed at generating unintended outputs, disclosing confidential information or compromising the availability of the AI system.
Thirdly, we have to pay attention to ethical risks. AI systems often make decisions that affect customers – for example, which ads to display or whether credit should be extended. As we know, AI systems are also prone to what is known as model drift, which leads to fluctuations in the quality and reliability of their results. This carries the further risk of losing the trust of customers, employees and partners, and ultimately of damaging the company’s reputation.
Are certain sectors more exposed to these risks?
Companies that use a variety of AI systems, processing large amounts of information and using particularly sensitive data, are automatically exposed to higher compliance, reputation and security risks – for example, those in the health sector. However, high risk exposure is not necessarily sector-specific, as any company may use what are known as high-risk AI systems, which pose a particular risk to the health, safety and/or fundamental rights of individuals. For example, a company may use an AI system in its HR department to publish targeted job advertisements or to analyse and filter applications. Under the EU’s AI Act, such an instrument would fall into the ‘high-risk AI system’ category and be subject to strict requirements.
Data governance can help to minimise these risks. Is effective data governance sufficient when using AI systems?
When an organisation has good data governance, this already covers a large part – but usually not all – of the risks. The risk landscape associated with AI systems is diverse and requires not only data governance but also comprehensive, AI-specific measures as part of AI governance.
You’ve already mentioned that, in addition to data protection, AI systems raise questions pertaining to ethics, copyright, fundamental rights and cybersecurity. What does Swisscom’s AI governance framework look like? Which companies need comprehensive AI governance?
AI governance is a comprehensive system of rules, organisational measures, processes, controls and tools that helps an organisation ensure the trustworthy, responsible, ethical and efficient development and use of AI systems and general-purpose AI (GPAI) models. At Swisscom, we have been working on an AI governance framework since the end of last year under the leadership of the Data Governance organisational unit, in a mixed team with members from different organisational units. We have had an interim AI governance solution in place since the beginning of April and will be working until September on implementing a comprehensive, risk-based solution.
In my view, any company that uses or develops AI systems or develops GPAI models should establish an enterprise-wide AI governance framework in the future – not only to prevent reputational risks, but also to comply with existing and future regulations. AI technology is evolving rapidly, so it is all the more important to keep pace and ensure responsible use and development.
The EU has been working on its AI Act since 2021, and it will soon enter into force. How relevant is the planned AI Act for Swiss companies?
The forthcoming AI Act will apply to all companies based in the EU, but it will also have an impact on companies in Switzerland that offer or operate AI systems. The AI Act applies to Swiss companies firstly if, as providers, they place AI systems on the EU market, and secondly, as providers or operators of AI systems, if the output produced by the system is used within the EU. All in all, this means that companies based in Switzerland may be subject to this regulation even if they have neither their registered office nor a subsidiary on EU territory.
That’s why I believe that the AI Act is also likely to have what is termed the Brussels effect in Switzerland. This is because many Swiss AI providers will not develop their products specifically for Switzerland, but for both markets, and will therefore have to be guided by the strict European rules. So we can assume that the AI Act will have a major impact in Switzerland.
What is the role of the EU’s AI Act when a company wants to build a good AI governance framework?
Companies operating internationally are likely to have to follow international standards and EU regulatory requirements, in particular when exporting AI systems. The Brussels effect I have just mentioned reinforces this. Thus, the AI Act is an important factor in shaping comprehensive and internationally oriented AI governance.
Doesn’t AI governance slow down GenAI innovation and efficiency?
No, quite the opposite. Effective AI governance with clear and understandable rules helps organisations develop and deploy trustworthy AI systems. AI governance is therefore not only about complying with AI-specific legislation, but also about meeting ethical standards and societal expectations. This protects an organisation’s long-term stability and strengthens its reputation. Through effective AI governance, companies can also actively demonstrate their commitment to responsible AI application and development, thereby boosting consumer and partner confidence. Closely connected with this trust is a competitive advantage: well-thought-out AI governance enables companies to position themselves better in the market and benefit in the long run from the advantages of using and developing AI systems and GPAI models. By creating clear rules, they can differentiate themselves in a dynamic market environment and secure a sustainable competitive advantage.
Discover the world of Copilot for M365 with Swisscom
We offer companies comprehensive support for all aspects of Copilot for Microsoft 365. We accompany you during the introduction and work with you to develop deployment scenarios.
About Anne-Sophie Morand
Dr iur., lawyer, LL.M., Data Governance Counsel, Swisscom
Anne-Sophie Morand is a lawyer and works as Data Governance Counsel at Swisscom. She is regarded as a legal expert on digital issues such as data protection and artificial intelligence. She lectures on these topics at the University of Lucerne and the Lucerne University of Applied Sciences and Arts, writes specialist publications and moderates events. Anne-Sophie Morand studied law at the Universities of Lucerne and Neuchâtel. After completing her studies, she conducted research at the University of Lucerne and wrote a dissertation on the protection of privacy in sports sponsorship, for which she received the Swiss Sports Law Award. She then completed an LL.M. in IT Law at the University of Edinburgh. She has worked for the Swiss Parliament, the Federal Data Protection and Information Commissioner (FDPIC) and the corporate law firm Walder Wyss, among others.