Behavior-Aware System Design
How can we design systems that are aware of human behavior and are guided by socially desirable principles and efficient resource allocation goals?
The recent decades have been marked by extensive technological advancements in domains such as the Internet, computing, communication, and artificial intelligence. These have led to rapidly evolving technologies such as cloud computing, automated services, communication platforms, big-data software companies, and online marketplaces, which have become an integral part of our daily lives.
Our focus in the development of these technologies has essentially been on making them work efficiently. For example, enormous engineering efforts have been directed towards the development of high-speed communication tools, compact processing devices, large-scale database deployments, and power-efficient systems.
Only recently have we started paying closer attention to the way these systems interact with their users. Although we employ techniques such as geotargeting and personalized recommendations, a great deal remains to be understood, especially about the correct guiding principles that should govern these human-computer interactions. For example, I believe social welfare and efficient resource allocation should play a key role in the next stage of development of these technologies, and these principles should certainly take precedence over other metrics such as revenue generation or customer engagement.
Several people have raised alarms about the perils of social media, with a huge ongoing debate regarding the correct usage of these technologies and the ethical responsibilities of the giant tech companies. I am not going to discuss these issues in this blog post (although they are very closely related to the point I wish to make). Instead, my goal is to take a modest scientific outlook towards the role of behavioral sciences in the design of such systems.
Let’s say you need to travel to keep an appointment or reach the airport to catch a flight. You open Google Maps or some other navigation app and check the possible routes and estimated times of arrival. If you are short on time, your topmost concern is to reach your destination as soon as possible, and you’d like a pretty good estimate of your arrival time. Compare this with someone who might be using the same app but is looking for a scenic route and is not so particular about his arrival time. At any given time, hundreds of thousands of users are using such apps to find what suits them best. All these different people have varied requirements based on their purposes and preferences while sharing the same infrastructure and resources. The app’s recommendations affect their choices, and their choices have externalities that affect the conditions for others. Ideally, we would like to make suggestions and assign routes to the different users based on everybody’s requirements and preferences in an “optimal” manner (with an appropriate definition of “optimality”).
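To make “optimality” concrete, here is a minimal sketch of the textbook Pigou routing example (a standard two-road construction, not data from any real app): when everyone selfishly takes the road that never looks slower, the average travel time is worse than under a coordinated split.

```python
# Pigou's example: two parallel roads from A to B, one unit of
# traffic in total. Road 1 has fixed travel time 1; Road 2's time
# equals the fraction x of traffic using it.

def avg_time(x):
    """Average travel time when fraction x uses Road 2."""
    return (1 - x) * 1 + x * x  # (1-x) on Road 1, x on Road 2

# Selfish (user) equilibrium: Road 2 is never worse than Road 1,
# so everyone takes it (x = 1) and everyone spends 1 unit of time.
selfish = avg_time(1.0)

# Social optimum: minimize average time over x in [0, 1].
# d/dx [(1 - x) + x^2] = -1 + 2x = 0  =>  x = 0.5
optimal = avg_time(0.5)

print(f"selfish equilibrium: {selfish:.2f}")  # 1.00
print(f"social optimum:      {optimal:.2f}")  # 0.75
```

The gap between those two numbers is precisely the cost of ignoring the externalities that each user’s route choice imposes on everyone else.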
For another example, consider a doctor who is performing a remote surgery and a social media influencer who is posting a video. I would say that the doctor requires much more certainty in his bandwidth allocation than the influencer. (I hope we agree on this!) Can we design our systems to account for these differences in user preferences and provide optimal services through appropriate incentives and nudges to the users?
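As a toy illustration of what preference-aware allocation could mean, here is a sketch that splits a link’s capacity between the two users above. The capacity, the surgeon’s guarantee, and both utility functions are invented purely for illustration.

```python
# A toy allocation of link capacity between two users with very
# different needs: a surgeon who requires a guaranteed minimum
# rate, and an influencer whose utility has diminishing returns.
# All numbers below are illustrative assumptions.

import math

CAPACITY = 100.0    # total bandwidth (Mbps), assumed
SURGEON_MIN = 40.0  # hard guarantee for the remote surgery

def total_utility(surgeon_rate):
    influencer_rate = CAPACITY - surgeon_rate
    # Surgeon: unacceptable below the guarantee, flat above it
    # (extra bandwidth doesn't help the surgery much).
    if surgeon_rate < SURGEON_MIN:
        return float("-inf")
    surgeon_u = 10.0
    # Influencer: diminishing returns, modeled as log utility.
    influencer_u = math.log(1.0 + influencer_rate)
    return surgeon_u + influencer_u

# Search over integer allocations: the best one gives the surgeon
# exactly the guarantee and the influencer everything left over.
best_utility, best_rate = max(
    (total_utility(r), r) for r in range(0, 101)
)
print(f"surgeon gets {best_rate} Mbps, total utility {best_utility:.2f}")
```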
This is actually not a new idea; we have been doing it for several years. Think about clearing the way for emergency vehicles or using emergency diversion routes. We have long used such techniques to reallocate resources based on need. In the past, however, taking such policy steps required considerable effort, and it is no wonder they were restricted to emergency situations. In today’s world, where almost everybody uses some sort of navigation app to decide their routes, we have many more ways to influence the routing of traffic, and even more so in the future with the advent of smart cars.
Sure, this requires knowing the preferences of the users, but economists have already given us an answer to this. It is not a far-fetched thought to imagine a world where you pay some points to get a route you prefer or earn some points for taking a less preferred way. We already have this in the form of tolls for bridges and expressways. But I want you to imagine a mechanism that works on a much larger scale in an integrated manner, and one that accounts for the factors at the core of the problem: the users and their preferences.
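Continuing the Pigou sketch from above, one classic answer from economics is a marginal-cost toll: charge each user of the congestible road for the extra delay they impose on everyone else. Under such a toll, selfish route choices land exactly on the social optimum. This is, again, a textbook sketch rather than a deployed mechanism.

```python
# Marginal-cost pricing on the Pigou network sketched earlier.
# Road 2's delay is x (the fraction of traffic using it); each
# extra user raises everyone else's delay, so the classic
# externality-correcting toll equals x * d(delay)/dx = x,
# payable in money or "points".

def road2_perceived_cost(x, toll_on):
    delay = x
    toll = x if toll_on else 0.0  # marginal-cost toll
    return delay + toll

def equilibrium(toll_on):
    """Bisect for the flow x at which Road 2's perceived cost
    equals Road 1's fixed cost of 1, so no user wants to switch."""
    if road2_perceived_cost(1.0, toll_on) <= 1.0:
        return 1.0  # Road 2 never looks worse: everyone takes it
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if road2_perceived_cost(mid, toll_on) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"without toll: x = {equilibrium(False):.2f}")  # 1.00 (congested)
print(f"with toll:    x = {equilibrium(True):.2f}")   # 0.50 (socially optimal)
```

Note the design choice: users still act entirely selfishly; the toll simply realigns their private costs with the social cost.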
This leads to two directions to focus on: designing an efficient mechanism that works at a large scale, and capturing the preferences of the users as accurately as possible. For the first direction, implementing any policy at a large scale often requires mathematical models that capture the essence of the problem, along with mechanisms and algorithms that execute these policies. By and large, this has been the engineering approach to these problems, with the focus more on design efficiency and less on the social aspects raised by the second direction. In several engineering tasks, social goals such as welfare or the well-being of each individual were assumed to be straightforward and often taken for granted. These goals, on the other hand, have mainly been a topic of interest in policy-making and economics. Only recently have people started to integrate ideas from both fields.
I believe there is a good reason for this. Classical economics often treats people as if they were like Spock (formally called homo economicus): the rational individual who always accounts for all the available information and makes decisions with great computational power. Firms and nations that have vast resources can afford to behave like Spock, and ideas from economics have played a significant role in governing their decisions. For example, the notion of utility allows us to convert many decision problems into optimization problems and solve them (often encountered under the name of cost-benefit analysis). However, on platforms like social media and online marketplaces, where the participating agents are single individuals who perform several short-lived interactions with the platform, it is unrealistic to expect these agents to behave like Spock.
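As a tiny illustration of how utility turns a decision into an optimization problem, here is the standard expected-utility recipe applied to a made-up choice between two projects; the options, payoffs, and probabilities are all invented.

```python
# Cost-benefit analysis as optimization: assign each outcome a
# utility, weight it by its probability, and pick the option with
# the highest expected utility. All numbers below are made up.

options = {
    # option: list of (probability, net benefit)
    "build new road":    [(0.7, 120.0), (0.3, -40.0)],
    "upgrade rail line": [(0.9,  60.0), (0.1, -10.0)],
}

def expected_utility(lottery):
    return sum(p * payoff for p, payoff in lottery)

for name, lottery in options.items():
    print(f"{name}: EU = {expected_utility(lottery):.1f}")

best = max(options, key=lambda o: expected_utility(options[o]))
print(f"choose: {best}")  # build new road (EU 72.0 vs 53.0)
```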
Working in an environment with growing human-computer interaction requires a better understanding of human behavior, including the emotional traits people display. On the one hand, these interactions have created the necessity to account for the behavioral traits of the users; on the other hand, they have also provided us with extraordinary tools to act upon them, especially the data boom. This data comes in various forms, from users’ preferences, choices, and habits to their strengths, weaknesses, and emotional moods (which, I recently learned, can be measured by watch-like devices through slight electrical changes in the skin).
These technological advancements have opened several new avenues at the intersection of engineering, technology, and social sciences. This intersection has attracted significant interest in the past few decades, mostly under the label of behavioral economics. Although behavioral economics is often interpreted as creating nudges and changing defaults to alter human behavior, I mean it in a more general sense. I like to think of it as also comprising techniques such as pricing and taxing, commonly encountered in economics, but applied in a manner that is aware of the deviations in human behavior. Simply put: economics for information technologies without the Spock assumption.
You might have encountered such approaches under the label of boundedly rational behavior, but I feel this term implicitly imposes certain restrictions. For example, a decision-maker is typically said to be rational if her decisions are consistent with expected utility theory; any deviation from this behavior is attributed to her lack of Spock-like abilities and hence marked as irrational. On the contrary, I believe there are times when, even if I had Spock-like abilities, I would choose to deviate from the prescriptions of expected utility theory. See, for example, the famous Allais paradox. The point I wish to make here concerns general behavior, which may or may not be a result of the lack of Spock-like abilities.
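For the curious, here is the Allais paradox worked out numerically. Most people prefer lottery A over B and D over C, yet a quick scan shows that no assignment of utilities to the prizes can rationalize both choices at once, which is exactly the violation of expected utility theory.

```python
# The Allais paradox: most people prefer A over B and D over C,
# but no expected-utility maximizer can. Normalize u($0) = 0 and
# u($1M) = 1; let v = u($5M) and scan over candidate values of v.

# Lotteries as (probability, prize) lists; prizes in millions.
A = [(1.00, 1)]
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

def eu(lottery, v):
    u = {0: 0.0, 1: 1.0, 5: v}
    return sum(p * u[prize] for p, prize in lottery)

# Scan v in [1.0, 5.0]: more money should never lower utility.
# Algebraically, A > B forces v < 1.1 while D > C forces v > 1.1,
# so the list of consistent values comes out empty.
consistent = [
    v / 10 for v in range(10, 51)
    if eu(A, v / 10) > eu(B, v / 10)   # A preferred to B
    and eu(D, v / 10) > eu(C, v / 10)  # D preferred to C
]
print(consistent)  # [] -- no utility value rationalizes both choices
```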
For example, if we envision having automated decision-makers with Spock-like abilities assisting each user, then we would like these assistants to make decisions according to the preferences of the corresponding users, even if those preferences do not satisfy certain notions of rationality. Certainly, we will have to make some assumptions about rationality, ranging from basic safety to commonly agreed-upon behaviors that one expects these systems to exhibit. But my point is that, along with correcting non-Spock-like behavior, the design of such systems should also be flexible in its definition of rationality.
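Here is a minimal sketch of what such flexibility could look like in code, with hypothetical choice rules of my own invention: the assistant takes the user’s own rule as an input instead of hard-wiring expected-value maximization.

```python
# An assistant that respects the user's own choice rule instead of
# hard-wiring expected-utility maximization. Both rules below are
# hypothetical examples, not a proposed standard.

from typing import Callable, List, Tuple

Lottery = List[Tuple[float, float]]  # (probability, payoff)

def expected_value(lottery: Lottery) -> float:
    return sum(p * x for p, x in lottery)

def worst_case(lottery: Lottery) -> float:
    # A maximin user: judge a lottery by its worst possible payoff.
    return min(x for _, x in lottery)

def assist(options: List[Lottery],
           preference: Callable[[Lottery], float]) -> Lottery:
    """Pick the option that the *user's* rule scores highest."""
    return max(options, key=preference)

safe = [(1.0, 50.0)]
risky = [(0.5, 120.0), (0.5, 0.0)]

choice = assist([safe, risky], expected_value)
print("EV user picks:", "risky" if choice is risky else "safe")       # risky
choice = assist([safe, risky], worst_case)
print("maximin user picks:", "risky" if choice is risky else "safe")  # safe
```

Neither user is “irrational” here; the assistant simply optimizes against the preferences it was handed.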
I believe it is the integration of behavioral economics and engineering that holds the right recipe for designing robust and scalable systems that interact with users and assist them in making decisions that are in their own and society’s best interests. This is what I like to call Behavior-Aware System Design. Its systematic study and implementation is the next big leap that we need.