Interview with Gerhard Backfried

Date: 29-30 November 2018
Venue: weXelerate, Praterstrasse 1, 1020 Vienna, Austria

The News Media are confronted with major challenges and opportunities arising from game-changing technological developments. This event will showcase how AI-powered technologies can help news organisations safeguard the truthfulness and trustworthiness of their sources, stay in control of the news gathering and delivery process and empower their newsrooms, their journalists and, ultimately, their readership.

Computational Propaganda: Technical and Social Aspects

Teaser Interview with Gerhard Backfried

Gerhard Backfried, Head of Research at SAIL LABS Technology, will give the talk “Technical and Social Elements of Computational Propaganda” at the conference “Fake News and other AI Challenges for the News Media in the 21st Century” (29-30 November 2018, Vienna).

Gerhard Backfried answered questions regarding the origins of Fake News, the interplay of Fake News and social media, and the ways in which Open Source Intelligence (OSINT), machine learning (ML) and artificial intelligence (AI) can help combat Fake News. The interview also tackles the future relationship between AI and the news industry, and touches on a new book chapter on Fake News which Gerhard co-authored.

Q: According to your perception, what does Fake News actually mean?

GB: There are so many different definitions of Fake News. But they all circle around the concepts of manipulation and distortion of information with the aim of confusing people, creating some sort of illusion of consensus, casting doubt or drowning out other voices. From a computational point of view, we tend to look at it as computational propaganda – a subset of what I mentioned before that can be created and distributed by algorithms.

Q: Do you believe that Fake News is a concept that has always been part of disinformation communication, or is it a new and more powerful tool that arose with the emergence of new media?

GB: Fake News has always been with us. And propaganda has always been around, for as long as we have been communicating. Regarding the technicalities, what has happened today is that social media enable it on a much broader scale. Messages can be pushed out massively and, at the same time, aimed at a much more targeted population. Through profiling and the use of technologies associated with big data, you can create specific profiles, and different people can get very different messages. The core idea, though, is always the same: manipulation, sending slanted, wrong or outright false messages.

Q: Since your presentation will be about the social and technical aspects of computational propaganda, what do you understand these social and technical aspects to be?

GB: Computational Propaganda is the use of algorithms and automation, combined with human curation, to create, manage and distribute misleading information. This is done on all kinds of media and combines social and technical aspects. The technical aspects would be algorithms, agents, platforms, big-data methods, statistics or ML. The social aspects relate to the human actors and their motivations, their agendas and social interactions. Both aspects need to be addressed to handle and counter Fake News!

We have also witnessed a shift from an information economy to an attention economy. People want to be recognized and receive feedback, especially on social media. Therefore, they are willing to communicate and pass on all sorts of information, sometimes without even having read it. And, on the other hand, what they read and what they want to read often just reinforces what they believed in the first place – a typical filter bubble. These bubbles can also exist without social media, but again I believe that social media provide a good base for making them stronger.

Q: According to your experience, do you think that the social and the technical aspects then could be addressed together?

GB: It is not only that they can be addressed together but that they have to be addressed together. Fake News is a very interdisciplinary phenomenon and it is best tackled by an interdisciplinary approach. For example, viewed only from a computer scientist’s perspective, we can calculate bot factors, which are useful and important, but this alone is not going to make Fake News go away. Political scientists, on the other hand, can investigate possible strategies behind such Fake News, but this by itself will not be sufficient either. The topic is actually so broad and interdisciplinary that it requires a corresponding approach to counter it. I am deeply convinced of that.

Q: Since AI can be used to create Fake News, how can it also be used to detect it?

GB: Like many technologies, it has two sides. Take, for example, feedback and comments on posts. If I want to appear as thirty different people commenting on a post about a new policy, I can use technology to create thirty comments that may look different on the surface. Meanwhile, the same technology can be used to find out that what we see is not from thirty different authors, and that there is really just one author behind them. So very often the same technology can be used for both processes.
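[Editor’s note: a minimal sketch of the detection side of this example, not SAIL LABS’ actual system. It compares the character n-gram style profiles of a set of comments and flags sets whose style is suspiciously uniform; the features and the threshold are placeholder assumptions.]

```python
# Illustrative sketch only: measure how uniform the writing style of a
# set of comments is. Character n-gram profiles of genuinely different
# authors tend to differ; near-identical profiles hint that one author
# may be posing as many.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def style_uniformity(comments):
    """Mean pairwise cosine similarity of character n-gram profiles."""
    vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(comments)
    sims = cosine_similarity(vectors)
    n = len(comments)
    return (sims.sum() - n) / (n * (n - 1))  # average of off-diagonal entries

comments = [
    "Great policy, exactly what this country needs right now!",
    "This policy is great, exactly what the country needs now.",
    "Exactly the policy we need right now, great for the country!",
]
score = style_uniformity(comments)
print(f"style uniformity: {score:.2f}")
if score > 0.5:  # illustrative threshold, would need tuning on real data
    print("Comments are suspiciously similar: possibly a single author.")
```

Real stylometric systems draw on many more signals (function words, punctuation habits, posting times), but the principle is the same technology cutting both ways.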

Q: Which real-time OSINT tools are applicable for fighting Fake News?

GB: Real-time behaviour is very important for early detection; otherwise the spread is immediate, something goes viral and it is too late! Through OSINT you can benefit from a cross-media, cross-platform and multilingual approach, finding out how a piece of news spreads and how information circulates across different accounts and platforms. Finding patterns is also tremendously important, and this is what we can actually help with through the SAIL LABS Media Mining System. It is not that humans could not do that, but they could not do it in such a short time and with that amount of data. So it is actually a case where we use computers for tasks they are very good at, and then humans get into the loop with tasks they are good at, like putting matters into a larger cultural or social context.
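[Editor’s note: one primitive behind such cross-platform pattern finding is grouping near-identical messages seen on different platforms and accounts. The sketch below uses a crude normalise-and-hash fingerprint; the platforms, accounts and messages are invented, and production systems rely on far more robust matching.]

```python
# Illustrative sketch: expose the same message being pushed across
# platforms by fingerprinting a normalised form of each post.
import hashlib
import re
from collections import defaultdict

def fingerprint(text):
    """Strip URLs, case and punctuation, then hash the remainder."""
    text = re.sub(r"https?://\S+", "", text.lower())
    text = re.sub(r"[^a-z0-9 ]+", "", text)
    return hashlib.sha1(" ".join(text.split()).encode()).hexdigest()

stream = [  # (platform, account, message): invented examples
    ("twitter", "@acct1", "BREAKING: city water supply poisoned!!! http://t.co/x"),
    ("facebook", "pageA", "Breaking: City water supply poisoned!"),
    ("telegram", "chanB", "breaking city water supply poisoned"),
]

groups = defaultdict(list)
for platform, account, message in stream:
    groups[fingerprint(message)].append((platform, account))

for sources in groups.values():
    if len(sources) > 1:
        print("Same message pushed by:", sources)
```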

Q: Do you think multilinguality is important?

GB: Absolutely. In Austria and Germany, for example, we have a large number of second- and third-generation migrants from countries like Turkey. These people live over here, but they also watch Turkish TV, listen to Turkish radio and so on. Some of them are still allowed to vote in Turkey. The government there will use that fact and try to influence them. Without multilingual technologies, we will not know what is communicated and how things are presented to them. This is also the case for Germans who emigrated from Russia to Germany. They consume Russian propaganda which is tailored specifically for them. But, of course, this is not limited to the context of migration. It is equally important to be able to scan what your EU neighbours are saying about certain topics which may influence your own people.

Q: What are the biggest technical challenges regarding the detection of Fake News?

GB: I believe this is not black and white. Maybe some superficial Fake News can be detected quickly. But very often, even when you investigate, there are cases where it is impossible to decide in the end whether something is fake or true. So it is not only about classifying “fake” and “not fake”. It is also about how to support people and how to point out things that do not add up. Algorithms can help with that concerning the content, the sources and the patterns of how news spreads. We can rank accounts and documents, create indicators and help visualize things. But the people producing Fake News and the algorithms they use also get better – so it is a kind of arms race between the groups producing Fake News and the groups trying to combat it.
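[Editor’s note: a toy version of the indicator-and-ranking idea. Instead of a binary fake/not-fake verdict, weak signals about an account are combined into a score that tells an analyst where to look first. The features, weights and accounts below are assumptions made up for the example.]

```python
# Illustrative sketch: rank accounts by a combined suspicion indicator
# so a human analyst can prioritise inspection. Not a classifier.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts_per_day: float   # very high posting rates suggest automation
    account_age_days: int  # very young accounts carry less history
    duplicate_ratio: float # share of near-duplicate posts, 0..1

def suspicion(a: Account) -> float:
    score = 0.4 * min(a.posts_per_day / 100.0, 1.0)
    score += 0.2 * (1.0 if a.account_age_days < 30 else 0.0)
    score += 0.4 * a.duplicate_ratio
    return score  # 0 = unremarkable, 1 = inspect first

accounts = [
    Account("veteran_blogger", 3, 2900, 0.05),
    Account("hot_takes_9000", 240, 12, 0.85),
]
for a in sorted(accounts, key=suspicion, reverse=True):
    print(f"{a.name}: {suspicion(a):.2f}")
```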

Q: Do you believe, then, that we will always rely on human perception in the end, and that there will not be an automatic solution?

GB: I do not foresee a fully automatic solution in the near future. Fakes are getting better as well, and we are already talking about deep fakes. Especially when it comes to visual material, we humans are quite bad at detecting fakes. Technologies will certainly get better and help us, but AI is no substitute for critical thinking. It will support us, but it will not spare us from thinking about what we read or pass on.

Q: It could be stated that technology worldwide is advancing fast. Do you think ethical discussions and legal issues are advancing at the same pace?

GB: Definitely not. Ethical discussions are catching up, but again it depends on where you are, for starters. I believe that interdisciplinarity helps, because politicians or social scientists can bring in a different mind-set, complementing the view of others like computer scientists. This is a great combination. Technology is very fast. Ethics lags behind. And the law lags even further behind, and it is probably going to stay like that.

Q: Do you see this as a threat to democracy or the rule of law?

GB: Yes. If you take away the basis for open and free discussion, certain voices will fall silent. I do not believe that Europe is very polarized, because we are also very diverse. But the US, for instance, is actually a pretty bad example of where this might lead. People just want their own opinions to be heard and reinforced. And for some, freedom of expression translates not only into the right to be heard but almost also into the right to silence others.

Q: Do you think that people should be more educated about the topic?

GB: AI is not going to take away the need for critical thinking, so that is still something that needs to be developed and extended. Media literacy is also mentioned very often in this context: being able to reflect a bit more on what we are reading and, most importantly, on what we are passing on and under which circumstances, combined with education that could start at school. In the computer science classes that our kids take at school, where they learn about Word, Excel or programming, they should also be encouraged to develop critical thinking about the use of social media: how are we responsible for its use, what are the dangers out there, what problems could be created. This would certainly make for a good strategy.

Q: Could you tell us a bit about your upcoming book chapter regarding Fake News?

GB: It is a chapter in a book on information quality to be released in March 2019, dedicated particularly to the Fake News phenomenon. We put our heads together – my SAIL LABS colleague Dorothea Thomas-Aniola, a friend from Romania who runs his own company dedicated to Fake News detection, and myself. He has some good ideas about automatic measures that can be used to detect Fake News, so together we wrote a section about the history of Fake News and automatic ways to detect it. The book’s title is “Information Quality in Information Fusion and Decision Making”. It is published by Springer and due in March of next year.

Q: In your opinion, what could the future bring for the interaction of AI and news?

GB: There are a few things already in progress, such as articles being written by AI methods, AI to counter Fake News, and AI to make certain distribution mechanisms more transparent. If we can have more transparency about where things come from and how they are communicated, that would be great. This could be supported by better visualizations for finding hidden connections between media. I also believe that a lot more NLP (natural language processing, a sub-field of AI, editor’s note) will be employed in a variety of fields: the processing of multimedia news, as well as chat-bots or personal assistants for improved interaction. The processing of natural language – after all, the most natural way for us humans to interact – will impact many industries, especially the news and media domains.

Bio:

Gerhard Backfried is one of the founders of SAIL LABS Technology and currently holds the position of Head of Research. Prior to joining SAIL LABS, Gerhard worked in the fields of expert systems for the finance sector and personal dictation systems (IBM’s ViaVoice). His technical expertise includes acoustic and language modelling as well as speech recognition algorithms. More recently he has been focussing on the combination of traditional and social media, particularly in the context of multilingual and multimedia disaster communication. He holds a degree in computer science (M.Sc.) from the Technical University of Vienna with a specialty in Artificial Intelligence and Linguistics and is a Ph.D. candidate at the University of Vienna. He holds a number of patents, has authored several papers and book chapters, regularly participates in conference program committees and has contributed to national and international research projects such as KIRAS/MDL, KIRAS/QuOIMA, FP7/M-ECO, FP6/VIRTUOSO, FP7/iTalk2Learn and FP7/SIIP.

If you are interested in learning more about computational propaganda and its technical and social aspects, do not miss Gerhard Backfried’s talk on 30 November 2018! Save your spot at https://www.liatwork.com.
