The implementation of bot interfaces varies tremendously in current industry practice, ranging from the human-like to those that merely present a brand logo or a digital avatar. Some applications provide a maximum of information with limited turn-taking between the user and the interface; others offer only short pieces of information and require more turn-taking. Instead of simply implementing the default option provided by chatbot providers and platforms, companies should consider very carefully how the specifics of the chatbot interface might affect the user experience. Simple mechanics such as increasing the frequency of interactions lead to greater trust and a more enjoyable user experience. Likewise, personalizing chatbots with basic consumer characteristics such as gender increases trust and improves the perceived closeness between the customer and the chatbot – and ultimately the brand. Brand managers should therefore not regard chatbots as merely another digital marketing fad or a way to save costs through service automation. When implemented wisely, they can even increase a company’s upselling potential.
Thanks to rapid progress in the field of artificial intelligence, algorithms are able to accomplish an increasingly comprehensive list of tasks, often achieving better results than human experts. Nevertheless, many consumers have ambivalent feelings towards algorithms and tend to trust humans more than machines. Especially when tasks are perceived as subjective, consumers often assume that algorithms will be less effective, even though this belief is becoming more and more inaccurate.
To encourage algorithm adoption, managers should provide empirical evidence of the algorithm’s superior performance relative to humans. Given that consumers trust the cognitive capabilities of algorithms, another way to increase trust is to demonstrate that these capabilities are relevant for the task in question. Further, explaining that algorithms can detect and understand human emotions can enhance the adoption of algorithms for subjective tasks.
Whatever your perception of AI may be, the machine age of marketing has arrived. To fully grasp how AI is changing the very fabric of both our professional and private lives, we need to look beyond autonomous cars, digital voice assistants, or machine translation. AI is creating new forms of competition, new value chains, and novel ways of orchestrating economies around the world. AI is more than just technology: it is creating a new economy. The fuel that runs this economy is the combination of computational processing power, data, and the algorithms that process this data.
AI has the potential to make our lives easier, but this convenience may come at a price: biases built directly into the algorithms we use, data privacy issues, or failed AI projects in business practice. But without testing, failing, and learning from our failures, there will be no progress.
Interview with Jan Neumann, Senior Director, Applied AI, Comcast Cable, Philadelphia, USA
While many customers are still reluctant to entrust themselves to Alexa, Cortana or Siri in their homes, they seem to be less worried about controlling their TV sets via voice control. Comcast started offering a voice-based remote control in 2015 and has extended this service continuously. In the vast world of home entertainment, it seems that voice has come just in time to help consumers navigate and control their ever-increasing home entertainment options. Jan Neumann explains how Comcast enables its customers to comfortably boil down a huge entertainment portfolio to personally relevant content on the TV screen, and how the company remains successful in the highly competitive home entertainment market.
Edmond Awad, Jean-François Bonnefon, Azim Shariff and Iyad Rahwan
The algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm. Manufacturers and regulators are confronted with three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. The Moral Machine study presented here is a step towards solving this problem, as it seeks to learn how people all over the world feel about the alternative decisions the AI of self-driving vehicles might have to make. The global study revealed broad agreement across regions regarding how to handle unavoidable accidents. To master the moral challenges, all stakeholders should embrace the topic of machine ethics: this is a unique opportunity to decide as a community what we believe to be right or wrong, and to make sure that machines, unlike humans, unerringly follow the agreed-upon moral preferences. The integration of autonomous cars will require a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.
Consumers produce enormous amounts of textual data in online product reviews. Artificial intelligence (AI) can help analyze this data and generate insights about consumer preferences and decision-making. A GfK research project tested how AI can be used to learn consumer preferences and predict choices from publicly available social media and review data. The common AI technique of word embeddings proved to be a powerful way to analyze the words people use: it helped reveal consumers’ preferred brands, favorite features and main benefits. Language biases uncovered by the analysis can indicate preferences. Compared to actual sales data from GfK panels, the predictions fit reasonably well across various categories; especially when data volumes were large, the method produced very accurate results. Because it uses free, widespread online data, the approach is completely passive: it neither affects respondents nor leads them into ranking or answering questions they would otherwise not even have thought of. The analysis is fast to run, and no fancy processing power is needed.
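To make the idea of embedding-based preference analysis concrete, here is a minimal, self-contained sketch. It is not the GfK pipeline: instead of Word2Vec trained on real review data, it builds small count-based embeddings (PPMI weighting followed by SVD) from a handful of invented review snippets, and then checks which hypothetical brand sits closer to a benefit word in the embedding space – the same kind of language bias the article describes.

```python
import numpy as np
from itertools import combinations

# Hypothetical mini-corpus of review snippets (invented for illustration).
reviews = [
    "brandA battery reliable excellent battery",
    "brandA reliable excellent camera",
    "brandA battery reliable",
    "brandB cheap flimsy screen",
    "brandB cheap flimsy",
    "brandB screen cheap",
]

# 1. Build the vocabulary and co-occurrence counts (whole snippet = window).
docs = [r.split() for r in reviews]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for w1, w2 in combinations(d, 2):
        C[idx[w1], idx[w2]] += 1
        C[idx[w2], idx[w1]] += 1

# 2. Positive pointwise mutual information (PPMI) weighting:
#    upweights word pairs that co-occur more than chance predicts.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C * total) / (row @ row.T))
ppmi = np.maximum(pmi, 0)
ppmi[~np.isfinite(ppmi)] = 0

# 3. Truncated SVD turns the sparse PPMI rows into dense word embeddings.
U, S, _ = np.linalg.svd(ppmi)
emb = U[:, :4] * S[:4]

def sim(a, b):
    """Cosine similarity between the embeddings of two words."""
    va, vb = emb[idx[a]], emb[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

# Language bias as a preference signal: the brand whose reviews mention
# "reliable" embeds closer to that benefit word.
print(sim("brandA", "reliable") > sim("brandB", "reliable"))
```

In a real project one would swap the toy corpus for scraped review text and the PPMI/SVD step for a trained Word2Vec or GloVe model, but the readout is the same: relative similarities between brand tokens and attribute tokens serve as the preference signal.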
Powered by better hardware and software, and fueled by the emergence of computational social science, digital traces of human activity can be used to make highly personal inferences about their owners’ preferences, habits and psychological characteristics. The insights gained enable psychological targeting, making it possible to influence the behavior of large groups of people by tailoring persuasive appeals to the psychological needs of the target audiences. On the one hand, this method holds potential benefits for helping individuals make better decisions and lead healthier and happier lives. On the other hand, there are also several potential pitfalls related to manipulation, data protection and privacy violations. Even the most progressive data protection regulations of today might not adequately address the potential abuse of online information in the context of psychological targeting, highlighting the need for further policy interventions and regulations.
More and more companies are using chatbots in customer service, so that customers interact with a machine instead of a human employee. Many companies give these chatbots human traits through names, human-like appearances, a human voice or even character descriptions. Intuitively, such a humanization strategy seems to be a good idea.
Studies show, however, that the humanization of chatbots is perceived in a nuanced way and can also backfire. Especially in the context of customer complaints, human-like chatbots can intensify the negative reactions of angry customers, because their performance is judged more critically than that of non-humanized chatbot variants. Service managers should therefore consider very carefully whether, and in which situations, they should use humanized service chatbots.