In March 2023, the Korean blog “theSCIENCEplus” by Moon Kwang-ju published the article “ChatGPT – Breakthrough or Hype”. The article builds on the argumentation of the scinexx article “ChatGPT and Co – Opportunity or Risk?” by Nadja Podbregar and draws on insights from leading German experts such as Johannes Hoffart, Thilo Hagendorff, Ute Schmid and Jochen Werne. Most of these experts are also members of Germany’s leading AI platform “Learning Systems”.
Please find the ORIGINAL ARTICLE HERE and, below, a translation from Korean to English created with the German AI platform DeepL.com.
Reading time: 3’40”
ChatGPT – Opportunity or Risk?
Features and consequences of a new AI system
ChatGPT can write poems, essays, professional articles, or even computer code. AI systems based on large-scale language models like ChatGPT achieve amazing results, and the text is often almost indistinguishable from human work. But what’s behind GPT and its ilk? And how intelligent are such systems really?
Artificial intelligence has made rapid progress in recent years, but it was only through ChatGPT – a system based on a combination of artificial neural networks that has been accessible via the Internet since November 2022 – that many people realised what AI systems can already do. Its impressive achievements sparked a new debate about the opportunities and risks of artificial intelligence. Reason enough to present some facts and background information about ChatGPT and its “relatives”.
Artificial Intelligence and ChatGPT: “Breakthrough or Hype?”
“In my first conversation with ChatGPT, I couldn’t believe how well my questions were understood and put into context.” These are the words of Johannes Hoffart, head of SAP’s AI unit. OpenAI’s AI system has been causing a sensation and amazement around the world since it first became accessible to the general public via a user interface in November 2022.
A flood of new AI systems
In fact, thanks to neural networks and self-learning systems, artificial intelligence has made huge strides in recent years. AI systems have also made tremendous progress in core human domains, whether mastering strategy games, deciphering protein structures, or writing program code. Text-to-image generators like Dall-E, Stable Diffusion, or Midjourney create images and collages in the desired style in seconds, based solely on textual descriptions.
Perhaps the biggest leap in development has come in language processing. So-called Large Language Models (LLMs) now allow AI systems to carry out dialogues, translate texts, or write texts in an almost human-like form. These self-learning programmes are trained on millions of texts of all kinds and learn which content and which words occur most often in which context, and are therefore most relevant.
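To make this training principle concrete, here is a deliberately tiny Python sketch of the underlying statistical idea – predicting the next word from observed co-occurrences. The three-sentence corpus and the greedy continuation are illustrative assumptions; real large language models learn billions of neural-network parameters over vast corpora.

```python
# A toy illustration of the statistical idea behind language models:
# count which word follows which, then continue a text with the most
# frequent successor. Real LLMs use deep neural networks over vast
# corpora; the three-sentence "corpus" here is purely illustrative.
from collections import Counter, defaultdict

corpus = (
    "the model writes text . the model answers questions . "
    "the user asks questions ."
).split()

# Count successors for each word (a simple bigram table).
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def continue_text(word: str, steps: int = 4) -> str:
    """Greedily extend a prompt with the most frequent next word."""
    output = [word]
    for _ in range(steps):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(continue_text("the"))  # -> "the model writes text ."
```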
What does ChatGPT do?
The most well-known of these major language models is GPT-3, the system behind ChatGPT. At first glance, this AI seems to be able to do almost anything. It answers all kinds of knowledge questions, but it can also solve more complex linguistic tasks. For example, if you ask ChatGPT to write a 19th-century novel-style text on a particular topic, it will do so. ChatGPT also writes school essays, scientific papers, or poems with ease and without hesitation.
OpenAI, the company behind ChatGPT, lists about 50 different types of tasks that a GPT system can perform. These include writing texts in different styles, from film dialogues to tweets, interviews or essays, a “micro-horror story creator” or the “critiquing chatbot Marv”. The AI system can also write recipes, find colours to match a mood, or serve as an idea generator for VR games and fitness training. GPT-3 can also be used programmatically and can convert plain-text descriptions into program code in a variety of programming languages.
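For illustration, a minimal sketch of what such programmatic use could look like with the GPT-3-era completions endpoint of OpenAI’s Python package, assuming an API key is set; the model name and prompt are illustrative choices, not taken from the article.

```python
# A minimal sketch of calling GPT-3 programmatically to turn a plain-text
# description into code, using the GPT-3-era completions endpoint of the
# openai Python package. An API key in the environment is assumed; the
# model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family model
    prompt="# Python function that returns the n-th Fibonacci number\n",
    max_tokens=150,
    temperature=0,  # deterministic output is preferable for code
)
print(response.choices[0].text)
```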
Just the tip of the iceberg
It’s no surprise that ChatGPT and its “colleagues” are hailed by many as a milestone in AI development. But do GPT-3 and its successor GPT-3.5 really represent such a quantum leap? “In a way, it’s not a big change,” says Thilo Hagendorff, an AI researcher at the University of Tübingen. Similarly powerful language models have been around for a long time. “But what’s new now is that companies have dared to attach such language models to a simple user interface.”
Unlike before, when such AI systems were only tested or used in narrowly defined, private areas, ChatGPT now allows everyone to try out for themselves what is already possible with GPT and its ilk. “This user interface is really what started all this crazy hype,” Hagendorff said. In his assessment, ChatGPT is definitely a game changer in this regard. Because now other companies will offer their language models to the general public. “And then the creative potential that will be unleashed, the social impact it will have, I don’t think we know anything about that.”
Consequences for education and society
The introduction of ChatGPT is already causing considerable upheaval and change, especially in education. For pupils and students, AI systems open up the possibility of simply having homework, school essays, or seminar papers prepared by artificial intelligence. The quality of many ChatGPT texts is such that they cannot easily be exposed as AI-generated.
As a result, many classical forms of assessing learning success may become obsolete in the near future, says Ute Schmid, head of the Cognitive Systems working group at the University of Bamberg. Until now, knowledge learnt at school, and sometimes even at university, has mainly been tested by simple queries. However, competence also includes the ability to derive, verify, and practically apply what has been learnt. In the future, for example, it may make more sense to conduct test interviews or to set tasks that involve AI systems.
“Large-scale language models like ChatGPT are not only changing the way we interact with technology, but also the way we think about language and communication,” said Jochen Werne of Prosegur. “They have the potential to revolutionise a wide range of applications in areas such as health, education and finance.”
The German AI platform “Learning Systems” has created an excellent forum for exchange among innovation experts and thought leaders, and it is my honour and pleasure to be part of this initiative. The exchange of ideas, the ongoing discussions and the sharing of views on technological breakthroughs that affect us all should inspire the readers of this blog post to become pioneers for the benefit of our society as well.
You will find comments from the following members of the platform: Prof. Dr. Volker Tresp, Prof. Dr. Anne Lauber-Rönsberg, Prof. Dr. Christoph Neuberger, Prof. Dr. Peter Dabrock, Prof. Dr.-Ing. Alexander Löser, Dr. Johannes Hoffart, Prof. Dr. Kristian Kersting, Prof. Dr. Prof. h.c. Andreas Dengel, Prof. Dr. Wolfgang Nejdl, Dr.-Ing. Matthias Peissner, Prof. Dr. Klemens Budde, Jochen Werne
SOURCE: Designing self-learning systems for the benefit of society is the goal pursued by the Plattform Lernende Systeme which was launched by the Federal Ministry of Education and Research (BMBF) in 2017 at the suggestion of acatech. The members of the platform are organized into working groups and a steering committee which consolidate the current state of knowledge about self-learning systems and Artificial Intelligence.
EXPERT COMMENTS: How disruptive are ChatGPT & Co.?
10 February 2023 – Source: Plattform Lernende Systeme. Translated with the German AI platform DeepL; the original source in German can be found HERE.
The ChatGPT language model has catapulted artificial intelligence into the middle of society. The new generation of AI language assistants answers complex questions in detail, writes essays and even poems, or programs code. It is being hailed as a breakthrough in AI development. Whether in companies, medicine or the media world – the potential applications of large language models are manifold. What is there to the hype? How will large language models like ChatGPT change our lives? And what ethical, legal and social challenges are associated with their use? Experts from the Plattform Lernende Systeme put it in perspective.
Digital assistants are becoming a reality.
Exploiting potential responsibly.
Keeping people in mind.
New freedom for patient treatment.
Possibilities become visible.
Outlook for efficient multimodal models.
Helping to shape development in Europe.
German language model necessary.
IN-DEPTH VIEWS
Large language models like ChatGPT can now write texts that are indistinguishable from human texts. ChatGPT is even cited as a co-author in some scientific papers. Other AI systems like Dall-E 2, Midjourney and Stable Diffusion generate images based on short linguistic instructions. Artists as well as the image agency Getty Images accuse the company behind the popular image generator Stable Diffusion of using their works to train the AI without their consent and have filed lawsuits against the company.
Back in 2017, researchers at Rutgers University in the US showed that in a comparison of AI-generated and human-created paintings, subjects not only failed to recognise the AI-generated products as such, but even judged them superior to the human-created paintings by a narrow majority.
These examples show that the Turing Test, formulated by AI pioneer Alan Turing in 1950, no longer does justice to the disruptive power of generative AI systems. Turing posited that an AI can be assumed to have a reasoning capacity comparable to a human’s if, after chatting with a human and an AI, a person cannot correctly judge which of the two is the machine. Instead, the question of the relationship between AI-generated contributions and human creativity has come to the fore. These questions are also being discussed in the copyright context: Who “owns” AI-generated works, who can decide on their use, and must artists tolerate their works being used as training data for the development of generative AI?
Copyright: Man vs. Machine?
So far, AI has mostly been used as a tool in artistic contexts. As long as the essential design decisions are still made by the artist, a copyright in the works created this way arises in the artist’s favour. The situation is different under continental European copyright law, however, if products are essentially created by an AI and the human contribution remains very small or vague: asking an AI image generator to produce an image of a cat windsurfing in front of the Eiffel Tower in the style of Andy Warhol is unlikely to be sufficient to establish copyright in the image. Products created by an AI without substantial human intervention are copyright-free and can thus be used by anyone, provided no other ancillary rights apply. British copyright law, by contrast, also grants copyright protection to purely computer-generated output.

These different approaches have triggered a debate about the meaning and purpose of copyright. Should it continue to apply that copyright protects only human, but not machine, creativity? Or should the focus be on the economically motivated incentive idea, granting exclusivity rights also for purely AI-generated products in the interest of promoting innovation? The fundamental differences between human and machine creativity argue for the former view. The human ability to experience and feel, an essential basis for creative activity, justifies the privileged position humans enjoy under an anthropocentric copyright law. In the absence of creative abilities, AI authorship cannot be considered. Insofar as there is a need for it, economic incentives for innovation can be created in a targeted manner through limited ancillary rights.
An appropriate balance between the interests of artists and the promotion of innovation must also be ensured on the question of the extent to which works available online may be used as training data for AI. Under European copyright law, such use – so-called text and data mining – is generally permitted unless the authors have expressly excluded it.
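As an illustration of what respecting such an opt-out could look like in practice, here is a minimal Python sketch that checks a site’s robots.txt before using a page for training. The crawler name “TDMBot” and the URLs are hypothetical, and real machine-readable opt-out mechanisms vary.

```python
# Minimal sketch: honouring a machine-readable text-and-data-mining opt-out
# before using a web page as AI training data. Assumes (hypothetically) that
# the rights holder expresses the reservation via robots.txt rules for a
# crawler user agent named "TDMBot".
from urllib import robotparser

def may_use_for_training(page_url: str, robots_url: str) -> bool:
    """Return True if the site's robots.txt does not exclude our TDM crawler."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses robots.txt
    return parser.can_fetch("TDMBot", page_url)

if __name__ == "__main__":
    url = "https://example.com/artwork-page.html"
    if may_use_for_training(url, "https://example.com/robots.txt"):
        print("No machine-readable opt-out found; page may be mined.")
    else:
        print("Rights holder has opted out; skip this page.")
```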
Increasing demands on human originality
However, these developments are likely to have an indirect impact on human creators as well. If AI products become the standard and equivalent human achievements are perceived as commonplace, courts are likely to raise the originality requirements that must be met for copyright protection. As a practical matter, it is also foreseeable that human work such as translations, commercial graphics or the composition of musical jingles will increasingly be replaced by AI.
Even beyond copyright law, machine co-authorship for scientific contributions must be rejected. Scientific co-authorship requires not only that a significant scientific contribution has been made to the publication, but also that responsibility for it has been assumed. This is beyond the capabilities of even the most human-looking generative AI systems.
ChatGPT is currently stirring the public. The text bot is one of the so-called large language models that are celebrated as a breakthrough in AI research. Do large language models promise real progress, or are they just hype? How can these language assistants be used – and what preconditions must we create in Europe so that the economy and society benefit from them? Volker Tresp answers these questions in an interview. He is a professor at Ludwig-Maximilians-Universität in Munich with a research focus on machine learning in information networks and co-leader of the working group “Technological Enablers and Data Science” of the Plattform Lernende Systeme.
What are large language models and what is special about them?
Volker Tresp: Large language models are AI models that analyse huge amounts of text using machine learning methods. They draw on more or less the entire knowledge of the worldwide web – its websites, social media, books and articles. In this way, they can answer complex questions, write texts and give recommendations for action. Dialogue and translation systems are examples of large language models, most recently of course ChatGPT. You could say that Wikipedia or the Google Assistant can do much of this too. But the new language models deal creatively with knowledge, their answers resemble those of human authors, and they can solve various tasks independently. They can be scaled to arbitrarily large data sets and are much more flexible than previous language models.

Large language models have moved from research into practice within a few years, and of course there are still shortcomings that the best minds in the world are working on. But even if the systems still occasionally give incorrect answers or misunderstand questions – the technical successes achieved here are phenomenal. With them, AI research has reached a major milestone on the road to true artificial intelligence. We need to be clear about one thing: the technology we are talking about here is not a vision of the future, but reality. Anyone can use the language assistants and chatbots via a web browser. The current language models are true game changers. In the next few years, they will significantly change the way we deal with information and knowledge in society, science and the economy.
What applications do the language models enable – and what prerequisites must be created for them?
Volker Tresp: The language models can be used in various areas of application. They can improve information systems and search engines. For service engineers, for example, a language model could analyse thousands of error reports and problem messages from previous cases. For doctors, it can support diagnosis and treatment. Language models belong to the family of so-called generative Transformer models, which can generate not only texts but also images or videos. Transformer models create code, control robots and predict molecular structures in biomedical research. In sensitive areas, of course, it will always be necessary for humans to check the results of the language model and ultimately make the decision.

The answers of the language models are still not always correct, and they sometimes digress from the topic. How can this be improved? How can we integrate further information sources? How can we prevent the language models from carrying biases in their underlying texts into their answers? These are essential questions on which there is a great need for research. So there is still a lot of work to be done. We need to nurture talent in the AI field and establish professorships and research positions to address these challenges.
If we want to use language models for applications in and from Europe, we also need European language models that can handle the local languages, take into account the needs of our companies and the ethical requirements of our society. Currently, language models are created – and controlled – by American and Chinese tech giants.
Who can benefit from large language models? Only large companies, or also small and medium-sized enterprises?
Volker Tresp: Small and medium-sized companies can also use language models in their applications, because the models can be adapted very well to a company’s individual problems. Certainly, medium-sized companies will need technical support; service providers, in turn, can make the adaptation of language models to companies’ needs into a business model of their own. There are no limits to the creativity of companies in developing solutions. As with search engines, the use cases will multiply like an avalanche. However, to avoid financial hurdles for small and medium-sized enterprises, we need large foundational language models under European auspices that enable free or low-cost access to the technology.
Large language models like ChatGPT are celebrated as a technical breakthrough of AI – their effects on our society are sometimes discussed with concern, sometimes demonised. Life is rarely black and white, but mostly shades of grey. The corridor for responsible use of the new technology needs to be explored in a criteria-based and participatory way.
A multitude of ethical questions is connected with the use of language models: Do the systems cause unacceptable harm to (all or certain groups of) people? Do we mean permanent, irreversible, very deep or light harms? Intangible or material? Are the language models problematic quasi-independently of their particular use? Or are dangerous consequences only to be expected in certain contexts of application, e.g. when a medical diagnosis is made automatically? The ethical assessment of the new language models, especially ChatGPT, depends on how one assesses the further technical development of the language models as well as the depth of intervention of different applications. In addition, the possibilities of technology for dealing with social problems and its influence on the human self-image always play a role: can or should technical possibilities solve social problems, or do they reinforce them, and if so, to what extent?
Non-discriminatory language models?
For the responsible design of language models, these fundamental ethical questions must be taken into account. In the case of ChatGPT and related solutions, as with AI systems in general, the expectation of technical robustness must be met and, above all, so-called biases must be critically considered: when a language model is programmed, trained or used, biased attitudes contained in the underlying data can be adopted and even reinforced. These must be minimised as far as possible.
Make no mistake: prejudices cannot be completely eliminated, because they are also an expression of attitudes to life, and one should not erase them entirely. But they must always be critically re-examined to see whether and how they are compatible with fundamental ethical and legal norms such as human dignity and human rights – but also with diversity, something desired at least in broad sections of many cultures – and that they do not legitimise or promote stigmatisation and discrimination. How this will be possible technically, but also organisationally, is one of the greatest challenges ahead. Language models will also hold up a mirror to society and – as with social media – can expose, but also distort and reinforce, social fractures and divisions.
If one wants to speak of disruption, such potential is emerging in the increased use of language models, which can be fed with data far more intensively than current models in order to bring together well-founded knowledge. Even if they are self-learning and merely unfold a neural network, the effect is likely to be so substantial that the generated texts simulate real human activity. They are thus likely to pass the usual forms of the Turing test. Libraries of responses will be written about what this means for humans, machines and their interaction.
Final whistle for creative writing?
One effect to be watched carefully could be that the basic cultural technique of individual writing comes under massive pressure. Why should this be anthropologically and ethically worrying? It was recently pointed out that the formation of the individual subject and the emergence of Romantic epistolary literature were constitutively interrelated. This does not mean that the demise of the survey essay or the proseminar paper – which merely documents basic knowledge in undergraduate studies and is easy to produce with ChatGPT – must conjure up the end of the modern subject. But it is clear that independent creative writing will have to be practised and internalised differently – and this is of considerable ethical relevance if the formation of a self-confident personality is crucial for our complex society.
Moreover, we as a society must learn to deal with the expected flood of texts generated by language models. This is not only a question of personal time hygiene. Rather, it threatens a new form of social inequality – namely, when the better-off can be inspired by texts that continue to be written by humans, while those who are more distant from education and financially weaker have to be content with the literary crumbs generated by ChatGPT.
Technically disruptive or socially divisive?
The technical disruption brought about by ChatGPT does not automatically threaten social fissures. But such fissures will only be avoided if we quickly put the familiar – especially in education – to the test and adapt to the new possibilities. We bear responsibility not only for what we do, but also for what we fail to do. That is why the new language models should not be demonised or banned across the board. Rather, it is important to observe their further development soberly, yet to shape it courageously as individuals and as a society, with support and with demands – and to take everyone along as far as possible in order to prevent unjustified inequality. In this way, the use of ChatGPT can be responsibly justified.
Artificial intelligence (AI) has long remained a promise, an unfulfilled promise. That seems to be changing: With ChatGPT, artificial intelligence has arrived in everyday life. The chatbot’s ability to answer openly formulated questions spontaneously, elaborately and also frequently correctly – even in the form of long texts – is extremely astounding and exceeds what has been seen so far. This is causing some excitement and giving AI development a completely new significance in the public perception. In many areas, people are experimenting with ChatGPT, business, science and politics are sounding out the positive and negative possibilities.
It is easy to forget that there is no mind in the machine. The computer pioneer Joseph Weizenbaum, born in Berlin a hundred years ago, already pointed this out. He programmed one of the first chatbots in the early 1960s. ELIZA, as it was called, could conduct a therapy conversation. From today’s perspective, its answers were rather plain. Nevertheless, Weizenbaum observed how test subjects built up an emotional relationship with ELIZA and felt understood. From this, and from other examples, he concluded that the real danger does not lie in the capabilities of computers, which according to Weizenbaum are quite limited. Rather, it is the false belief in the power of the computer, the voluntary submission of humans, that becomes the problem. Associated with this is the image of the predictable human being – an image that is not true: respect, understanding, love, the unconscious and autonomy cannot be replaced by machines. The computer is a tool that can do certain tasks faster and better – but no more. Therefore, not all tasks should be handed over to the computer.
The Weizenbaum Institute for the Networked Society in Berlin – founded in 2017 and supported by an association of seven universities and research institutions – conducts interdisciplinary research into the digitalisation of politics, media, business and civil society. The researchers are committed to the work of the institute’s namesake and focus on the question of self-determination. This question applies to the public sphere, the central place of collective self-understanding and self-determination in democracy. Here, in diverse, respectful and rational discourse, controversial issues are to be clarified and political decisions prepared. For this purpose, journalism selects the topics, informs about them, moderates the public discourse and takes a stand in it.
Using AI responsibly in journalism
When dealing with large language models such as ChatGPT, the question therefore arises: to what extent can and should AI applications determine news and opinion? Algorithms are already used in many ways in editorial work: they help to track down new topics and uncover fake news, they independently write weather or stock market reports and generate subtitles for video reports, they personalise the news menu and filter readers’ comments.
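To make the automated-writing part of that list concrete, here is a minimal sketch of how structured data can be turned into a short weather item via a template – the technique long used for automated weather and stock market reports. The template and the forecast record are illustrative assumptions, not a real newsroom system.

```python
# A minimal sketch of "robot journalism": fill a fixed template from one
# structured forecast record to produce a short weather item. The data and
# template are illustrative assumptions.
WEATHER_TEMPLATE = (
    "{city} weather: {condition} expected on {day}, with highs of "
    "{high}°C and lows of {low}°C. Chance of rain: {rain_pct}%."
)

def render_weather_item(record: dict) -> str:
    """Fill the template from one structured forecast record."""
    return WEATHER_TEMPLATE.format(**record)

if __name__ == "__main__":
    forecast = {
        "city": "Berlin", "day": "Tuesday", "condition": "light rain",
        "high": 14, "low": 7, "rain_pct": 60,
    }
    print(render_weather_item(forecast))
```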
These are all useful applications that can relieve editorial staff of work and also improve the quality of media offerings. But how much control do editorial offices actually have over the result? Are professional standards adhered to? Or is a distorted view of the world created and are conflicts fuelled? And how much does the audience learn about the role of AI? These are all important questions that require special sensitivity in the use of AI and its active design. Transparent labelling of AI applications, the examination of safety and quality standards, the promotion of further development and education, a critical approach to AI, and the reduction of fears through better education are key factors for the responsible use of AI in journalism.
Here, too, Joseph Weizenbaum’s question arises: which tasks should not be entrusted to the computer? There are as yet no chatbots out in public that debate with one another – that could soon change. ChatGPT stimulates the imagination here as well. A democracy simulation that relieves us as citizens of informing ourselves, reflecting, discussing, mobilising and co-determining would be the end of self-determination and maturity in democracy. Moderation in the use of large language models is therefore the imperative that should be observed here and in other fields of application.
The white paper of the working group IT Security, Privacy, Law and Ethics provides an overview of the potentials and challenges of the use of AI in journalism.
As a member of Germany’s AI platform Plattform Lernende Systeme, I find it very inspiring to read this progress report and learn what has been achieved by Germany’s best experts in this field.
Self-learning systems are increasingly becoming a driving force behind digitalisation in business and society. They are based on Artificial Intelligence technologies and methods that are currently developing at a rapid pace in terms of performance. Self-learning systems are machines, robots and software systems that learn from data and use it to autonomously complete tasks that have been described in an abstract fashion – all without specific programming for each step.
Self-learning systems are becoming increasingly commonplace supporting people in their work and everyday lives. For example, they can be used to develop autonomous traffic systems, improve medical diagnostics and assist emergency services in disaster zones. They can help improve quality of life in many different respects, but are also fundamentally changing how humans and machines interact.
Self-learning systems have immense economic potential. As digitalisation takes hold, they are already helping companies in certain sectors to create entirely new business models based on data usage and are radically changing conventional value creation chains. This is opening up opportunities for new businesses, but can also represent a threat to established market leaders should they fail to react quickly enough.
Developing and introducing self-learning systems calls for special core skills, which need to be carefully nurtured to secure Germany’s pioneering role in this field. Using self-learning systems also raises numerous social, legal, ethical and security questions – with regard to data protection and liability, but also responsibility and transparency. To tackle these issues, we need to engage in broad-based dialogues as early as possible.
Plattform Lernende Systeme brings together leading experts in self-learning systems and Artificial Intelligence from science, industry, politics and civic organisations. In specialised focus groups, they discuss the opportunities, challenges and parameters for developing self-learning systems and using them responsibly. They derive scenarios, recommendations, design options and road maps from the results.
The Platform aims to:
shape self-learning systems to ensure positive, fair and responsible social coexistence,
strengthen skills for developing and using self-learning systems,
act as an independent intermediary to combine different perspectives,
promote dialogue within society on Artificial Intelligence,
develop objectives and scenarios for the application of self-learning systems,
encourage collaboration in research and development,
position Germany as the leading supplier of technology for self-learning systems.
It was inspiring to hold in my hands the first edition of the JOURNAL OF AI, ROBOTICS & WORKPLACE AUTOMATION, published by Henry Stewart Publications.
We are pleased to give everyone the opportunity to download the entire article POINT OF NO RETURN by Jochen Werne & Johannes Winter here: https://lnkd.in/dmi9i9aB
The inspiring articles and case studies published in Volume 1 Number 1 are:
Editorial by Tom Davenport, Distinguished Professor, Babson College, Research Fellow, MIT Center for Digital Business and Senior Advisor, Deloitte Institute for Research and Practice in Analytics
Practice papers:
The path to AI in procurement by Phil Morgan, Senior Director, Electronic Arts (EA)
How to kickstart an AI venture without proprietary data: AI start-ups have a chicken and egg problem — here is how to solve it by Kartik Hosanagar, Professor, The Wharton School of University of Pennsylvania and Monisha Gulabani, Research Assistant, Wharton UK AI Studio
Towards a capability assessment model for the comprehension and adoption of AI in organisations by Tom Butler PhD MSc, Professor, Angelina Espinoza-Limón, Research Fellow and Selja Seppälä, Research Fellow, University College Cork, Ireland
The path to autonomous driving by Sudha Jamthe, Technology Futurist and Ananya Sen, Product Manager and Software Engineer
Point of no return: Turning data into value by Jochen Werne, Chief Visionary Officer, Prosegur Germany and Johannes Winter, Managing Director, Plattform Lernende Systeme – Germany’s AI Platform
Robotic process automation and the power of automation in the workplace by Raj Samra, Senior Manager, PwC
Difficult decisions in uncertain times: AI and automation in commercial lending by Sean Hunter, Chief Information Officer and Onur Güzey, Head of Artificial Intelligence, OakNorth
The intelligent, experiential and competitive workplace: Part 1 by Peter Miscovich, Managing Director, Strategy + Innovation, JLL Technologies
Responding to ethics being a data protection building block for AI by Henry Chang, Adjunct Associate Professor, The University of Hong Kong
Legal issues arising from the use of artificial intelligence in government tax administration and decision making by Liz Bishop, Barrister, Ground Floor Wentworth Chambers
It was indeed a great pleasure for Dr. Johannes Winter and Jochen Werne to contribute as co-authors to Henry Stewart Publications, and we are pleased to present the article:
POINT OF NO RETURN: TURNING DATA INTO VALUE
The Cambridge Dictionary defines the point of no return as the stage at which it is no longer possible to stop what you are doing, and when its effects cannot now be avoided or prevented. Exponential advances in technology have led to a global race for dominance in politically, militarily and economically strategic technologies such as 5G, artificial intelligence (AI) and digital platforms. A reversal of this status quo is hardly conceivable. Based on this assumption, this paper looks to the future, adding the lessons of recent years — the years when the point of no return was passed. In addition, the paper uses practical examples from different industries to show how digital transformation can be successfully undergone and provides six key questions that every company should ask itself in the digital age.
The article includes key learnings and best-practice examples from, among others: acatech – Deutsche Akademie der Technikwissenschaften, Plattform Lernende Systeme (Germany’s AI platform), Prosegur, Tesla, Waymo, Google, Amazon, relayr, Ada Health, Fiege Logistik, Westphalia DataLab, Satya Nadella (Microsoft), TikTok and Facebook.
HOT OFF THE TAPE: Business Transformation in the Digital Age – Insight into Practice from an Expert’s Perspective
It was a great pleasure to be invited as a guest on the brand-new podcast format POLEDIFY. With Poledify, Felix Gehm offers insights into the routines, mindsets and habits of experts and thought leaders from a wide range of disciplines.
Jochen Werne is Chief Development Officer and Chief Visionary Officer of Prosegur Germany. Prosegur Group is one of the leading security service providers worldwide with over 175,000 employees on five continents. Jochen Werne is, among other things, a member of the Learning Systems Platform, which advises the German government on artificial intelligence, and of the Royal Institute of International Affairs Chatham House, one of the most important think tanks in the world. Jochen was listed as one of the AI experts in Germany by Focus magazine. He is also an author, keynote speaker, internationally awarded NGO founder and specialist in business development and transformation, and international diplomacy. In 2020, the Tyto Tech Power List named him one of the 50 most influential people in the tech scene in Germany.
Topics of this episode:
What does digital transformation mean for “traditional” business sectors?
How Prosegur plans to master digital transformation
How not to be deterred by big challenges
The most important characteristics of a leader in the face of such challenges
Links and other things from the episode:
The interview between Bill Gates and Warren Buffett: shorturl.at/mGPYZ
Books: Utopias for Realists by Rutger Bregman; Modern Monopolies by Alex Moazed and Nicholas L. Johnson
Here you can find Jochen Werne and everything about Prosegur:
Jochen Werne LinkedIn: https://www.linkedin.com/in/jochenwerne/
Jochen Werne Website: http://jochenwerne.com/
Prosegur LinkedIn: https://www.linkedin.com/company/prosegur/
Prosegur website: https://www.prosegur.com/en/jobs
Plattform Lernende Systeme: https://www.plattform-lernende-systeme.de/home-en.html
Questions, criticism, suggestions or anything else? Write to me!
Instagram: https://www.instagram.com/poledify/
Twitter: https://twitter.com/ThisIsFelixGehm
Or simply send an email to poledify@gmail.com
Where does the fine music (intro & outro) come from? The fine music in the intro and outro is produced by pads. Behind the artist name is Patrick, who has finally decided to record all his little songs. You can find it all here:
YouTube: bit.ly/33TOFcN
Instagram: https://bit.ly/2XWFDIm
Soundcloud: https://bit.ly/3oYQA8k
An event organised by acatech – the National Academy of Science and Engineering which is the voice of the technological sciences at home and abroad. acatech provides advice on strategic engineering and technology policy issues to policymakers and the public. The National Academy of Science and Engineering fulfils the mandate to provide independent, evidence-based advice that is in the public interest under the patronage of the Federal President.
Start: 05 March 2021, 10:00 a.m. – End: 05 March 2021, 11:30 a.m. – Location: Virtual event – Language: German
Especially during the coronavirus pandemic, digital technologies proved their usefulness: they made companies more adaptable in the crisis. What role do digital technologies now play on the way out of the crisis – especially for medium-sized companies? How do these companies manage the digital transformation and develop new value-creation models?
A debate organized by acatech
The host is discussing these and other questions with guests from business and research on 5 March.
PROGRAM
Welcome:
Dr. Johannes Winter, acatech Secretariat
Moderation:
Prof. Dr. Michael Dowling, University of Regensburg/acatech
Impulse/Podium: DATA, VALUES, VALUE CREATION – WHERE IS THE JOURNEY GOING?
Dr. Wolfgang Faisst, CEO ValueWorks.ai / Plattform Lernende Systeme
BEST PRACTICE INDUSTRY 4.0
LESER GmbH & Co. KG: Digital transformation in medium-sized companies – Kai-Uwe Weiß, Head of Global Industrial Engineering
FORCAM GmbH: Value creation through an integrative IIoT platform solution – Franz Gruber, Founder and Advisory Board
EXPERT DISCUSSION: DIGITAL SOLUTIONS FOR A RESILIENT COMPANY
Olga Mordvinova, CEO incontext.technology GmbH / Plattform Lernende Systeme
Jochen Werne, Prosegur Cash Services Germany GmbH / Plattform Lernende Systeme
Franz Gruber, FORCAM GmbH
Kai-Uwe Weiß, LESER GmbH & Co. KG
Registration: Admission free; registration required. Please register via the following link; all registered participants will receive the access link before the event.
About this whitepaper
This paper was prepared by the Work/Qualification, Human-Machine Interaction working group of the Plattform Lernende Systeme. As one of a total of seven working groups, it examines the potentials and challenges arising from the use of artificial intelligence in the world of work and life. The focus is on questions of transformation and the development of humane working conditions. In addition, it focuses on the requirements and options for qualification and lifelong learning, as well as starting points for designing human-machine interaction and the division of labour between people and technology.
Originally published in German. Translation by DeepL.com.
Authors:
Prof. Dr.-Ing. Sascha Stowasser, Institut für angewandte Arbeitswissenschaft (ifaa) (project lead)
Oliver Suchy, Deutscher Gewerkschaftsbund (DGB) (project lead)
Dr. Norbert Huchler, Institut für Sozialwissenschaftliche Forschung e. V. (ISF-München)
Dr. Nadine Müller, Vereinte Dienstleistungsgewerkschaft (ver.di)
Dr.-Ing. Matthias Peissner, Fraunhofer-Institut für Arbeitswirtschaft und Organisation (IAO)
Andrea Stich, Infineon Technologies AG
Dr. Hans-Jörg Vögel, BMW Group
Jochen Werne, Prosegur Cash Services Germany GmbH
Authors with guest status:
Timo Henkelmann, Elabo GmbH
Dr.-Ing. habil. Dipl.-Tech. Math. Thorsten Schindler, ABB AG Corporate Research Center Germany
Maike Scholz, Deutsche Telekom AG
Coordination:
Sebastian Terstegen, Institut für angewandte Arbeitswissenschaft (ifaa)
Dr. Andreas Heindl, Geschäftsstelle der Plattform Lernende Systeme
Alexander Mihatsch, Geschäftsstelle der Plattform Lernende Systeme
The introduction of artificial intelligence (AI) in companies offers opportunities and potential both for employees, for example in the form of relief through AI systems, and for companies, for example in the form of improved work processes or the implementation of new business models. At the same time, the challenges of using AI systems must – and can – be addressed, and possible negative side-effects dealt with. This change in companies can only be mastered together. All in all, it is a matter of shaping a new relationship between people and technology, in which people and AI systems work together productively and their respective strengths are emphasised. Change management is a decisive factor for the successful introduction of AI systems and for the human-centred design of AI deployment in companies. Good change management promotes the acceptance of AI systems among employees, so that the potential of new technologies can be used jointly by all those involved, further innovation steps can be facilitated, and both employees and their representatives can become shapers of technological change.
The participation of employees and their representatives makes a significant contribution to the best possible design of AI systems and the interface between man and machine – especially in terms of efficient, productive work organisation that promotes health and learning. Early and process-oriented participation of employees and co-determination representatives is therefore an important component for the human-centred design and acceptance of AI systems in companies.
The introduction of artificial intelligence has some special features that also affect change management and the participation of employees, including company co-determination processes. With this white paper, the authors of the working group Work/Qualification, Human-Machine Interaction aim to raise awareness of the requirements of change management for artificial intelligence and to provide orientation for the practical introduction of AI systems in the different phases of the change process:
Phase 1 – Objectives and impact assessment: In the change processes for the introduction of AI systems, the objective and purpose of the applications should be defined from the outset with the employees and their representatives and information on the functioning of the AI system should be provided. On this basis, the potential of the AI systems and the possible consequences for the company, the organisation and the employees can then be assessed. A decisive factor for the success of a change process is the involvement of the employees and the mobilisation for the use of new technologies (chapter 2.1).
Phase 2 – Planning and design: In a second step, the design of the AI systems themselves is the main focus. This is primarily concerned with the design of the interface between man and AI system along criteria for the humane and productive implementation of man-machine interaction in the working environment. Of particular importance here are questions of transparency and explainability, of the processing and use of data and of analysis possibilities by AI systems (including employee analysis) as well as the creation of stress profiles and the consideration of employment development (Chapter 2.2).
Phase 3 – Preparation and implementation: The AI systems must also be integrated in a suitable way into existing or new work processes and, where necessary, into changed organisational structures. This means preparing employees for new tasks at an early stage and initiating the necessary qualification measures. It is also important to design new task and activity profiles for employees and to adapt the work organisation to a changed relationship between human and machine. Pilot projects and trial phases are a helpful instrument when introducing AI systems: experience can be gathered before a comprehensive rollout, and possible needs for adaptation with regard to the AI systems, qualification requirements or work organisation can be identified (Chapter 2.3).
Phase 4 – Evaluation and adaptation: After the introduction of the AI systems, a continuous review and evaluation of the AI deployment should take place in order to ensure possible adaptations with regard to the design of the applications, the organisation of work or the further qualification of the employees. In addition, the regular evaluation of AI deployment can make use of the experience of the employees and initiate further innovation processes – both with regard to the further improvement of (work) processes and with regard to new products and business models – together with the employees as designers of change (Chapter 2.4).
These practice-oriented requirements are aimed at all stakeholders involved in change processes and are intended to provide orientation for the successful introduction of AI systems in companies. In addition, these requirements should also inspire the further development of existing regulations – for example in legislation, social partnership or standardisation – and thus enable an employment-oriented, flexible, self-determined and autonomous work with AI systems and promote the acceptance of AI systems.
Originally published in German online in Cicero – Magazine for Political Culture. Please click here.
Translation made by DeepL.com
In the first half of digitisation, the USA and China have mercilessly left Europe behind. But nothing is lost yet: a plea for sovereign data infrastructures and a transformation to service-oriented value creation.
Europe is at a crossroads – once again. This time it is about nothing less than the preservation of the continent’s sovereignty, at least in technological and economic terms. It is therefore not surprising that “Digital Sovereignty” is a focus topic of the German EU Council Presidency. Europe’s largest economy exemplifies the current challenges in the midst of a global trade conflict and quasi-monopolies of American and Asian platform companies: because Germany’s strength as the world’s equipment supplier is under scrutiny.
The first wave of digitalization has been underway since the 1970s, characterized by the use of electronics and IT and by the automation and standardization of business processes. It has been driven by the exponential growth in performance parameters such as communication networks, memory and processors that is typical of the IT industry. As a manufacturer of machines, plants, vehicles and process technology, Germany has benefited considerably from this. “Made in Germany” is a worldwide promise of quality. But for how much longer? Or to put it another way: how can we carry this promise into the digital age?
The real and virtual worlds are merging, an Internet of things, data and services is emerging in all areas of work and life. Automated systems driven by artificial intelligence learn during operation and increasingly act autonomously, as collaborative manufacturing robots, robo-advisors or intelligent harvesters.
Europe is falling behind
Consumer platforms such as Amazon, Alibaba and Facebook dominated the first half of digitisation. With the exception of the streaming service Spotify, Europe is hardly present in the B2C platform markets. The second half will be played out in the industrial sector: in the digitisation and networking of production (Industry 4.0), and in the expansion of products and services to include personalised digital services (digital business models).
So much for the stocktaking – what is still outstanding, however, is comprehensive implementation, without which Europe will fall further behind in the global race. What levers does Europe have to score points in the second half and thus maintain competitiveness and self-determination? Two aspects seem particularly important:
No sovereignty without a sovereign data infrastructure
Once developed, software platforms have marginal process costs approaching zero. This makes it easy to aggregate huge amounts of data, learn from that data with artificial intelligence, and use it to develop digital business models that can be scaled exponentially across countries and industries. Google’s search engine, with a 95 per cent EU market share, is an example of both innovation leadership and quasi-monopoly. To gain sovereignty over data and data infrastructures, Europe needs digital self-determination across the entire stack: from hardware and software components to communication networks, cloud infrastructures, data spaces and platforms.
European efforts such as the policy- and business-driven project “GAIA-X” deserve broad support, even if success is by no means certain. Self-determination does not mean self-sufficiency or the exclusion of dominant competitors. On the contrary: Europe’s path must be determined by diversity, openness and decentralisation, not by isolation. A glance at the regional distribution of medium-sized world market leaders is enough to understand that Europe’s technological and entrepreneurial assets are not concentrated in the hands of a few large companies.
Germany has domain expertise
Accordingly, Europe should focus on building open digital ecosystems that are based on a common reference architecture and defined standards, enable technological interoperability, provide distributed cloud and edge services, and rest on European values such as trustworthiness, security, privacy and fairness. In the industrial sector, the race is still open, since countries with strong, product-centric manufacturing like Germany have domain expertise and industrial data – machine, process, user and product data – to which hyperscalers like Google and Amazon have so far had only limited access.
But to achieve sovereignty, Europe needs access to cloud and data infrastructures, whether in mechanical engineering or in the mobility sector. And it needs European regulation, as well as the state and companies acting as active customers of European technology and business offerings. For that, these offerings must be secure, high-performance, cost-effective and competitive. A tall order! If Europe chooses the passive path, however, it endangers economic competitiveness, entrepreneurial freedom and, in the medium term, our prosperity.
Value creation shifts in favour of the platform operators
We know from the consumer world that investor-financed technology start-ups attack established business models in all domains, position themselves as platform operators between supplier and customer, define rules, standards and interfaces, and benefit from network effects and economies of scale. As a result, value creation shifts in favour of the platform operators, and traditional providers of products and services are relegated to the role of suppliers. Operating and controlling platforms, and marketing digital products and services on them, is therefore a core prerequisite for Europe’s survival in a digital economy.
Since no single company in the industrial environment has all the know-how and data needed to succeed in the digital age, digital value-added networks are the solution. The platform “Lernende Systeme”, led by the Federal Ministry of Education and Research and acatech, recently highlighted a dozen success stories of digital ecosystems in Germany. One example shows how resilient manufacturing emerges when machine builders can minimise production stops by means of data-based prediction, with the help of IoT and AI service providers.
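As a rough illustration of the kind of data-based prediction such IoT and AI services rely on, here is a minimal Python sketch that flags a machine for maintenance when a sensor reading drifts far above its recent rolling average. Window size, threshold and the vibration data are illustrative assumptions, not a description of any vendor’s system.

```python
# Minimal sketch of data-based prediction for predictive maintenance:
# flag readings that exceed the rolling mean of the preceding window
# by more than k standard deviations, so maintenance can be scheduled
# before the machine stops. Parameters and data are illustrative.
from collections import deque
from statistics import mean, stdev

def maintenance_alerts(readings, window=20, k=3.0):
    """Yield indices of readings more than k standard deviations
    above the rolling mean of the preceding `window` values."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and value > mu + k * sigma:
                yield i  # schedule maintenance before a standstill
        history.append(value)

if __name__ == "__main__":
    normal = [1.0 + 0.01 * (i % 5) for i in range(50)]
    readings = normal + [2.5]  # a sudden vibration spike
    print(list(maintenance_alerts(readings)))  # -> [50]
```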
Process optimization is scalable
If the machine nevertheless comes to a standstill, business-interruption insurance kicks in. In an Industry 4.0 logic, this creates a flexible production line that almost never stops and is therefore even more profitable. And this process optimisation, including the digital business model, is scalable and does not remain an isolated solution. Another example shows how agricultural apps and IoT platforms enable cross-manufacturer data exchange with agricultural machinery, even if farmers and contractors use machines from different manufacturers.
The entire vehicle fleet can thus be optimised via one platform. This reduces complexity and enables medium-sized technology leaders in sensor systems, seeds or harvesting machines to scale in a trustworthy platform environment without having to take on the greater entrepreneurial risk of building their own platforms.
Europe must speak with one voice
Many more such examples are needed – and they are emerging in federal Europe. The insight is there, after all. But in order to play an important role in the world, Europe should not only become faster; it should also speak with one voice, whether in enforcing a level playing field or in international standardisation.
Completing the digital single market is also important to enable what China and the US have ahead of us: huge consumer markets in which domestic providers can scale. Europe is at a digital crossroads. Let us take the fork in the road to self-determination!