
Share Price

The uses and abuses of personal data

By Renate Samson

Since the introduction, roughly a decade ago, of Web 2.0—when the internet moved from being an information superhighway to a globalized shopping mall and social hangout—society has been persistently told that what appears to be free online is actually paid for with our data. But no one listened, because the concept that information had monetary as well as informational value was, and remains, abstract.

If we were able to visualize our data as actual money, however, this concept would become more tangible. If our personal data were represented by coins and notes—and every time we searched online or shared a piece of personal information or liked an article or browsed a website, the money in our pocket diminished—we might have a clearer idea of the value of every click, like, or share. But because our data is more like a credit card, we click and share as if we were charging and spending on a piece of plastic. And if we cannot see the money we are spending, we have a tendency to think we are richer than we are. Similarly, if we don’t understand the value of our data, we just give it away.

This doesn’t mean we should be paid for every piece of data we share. Though many promote this as a future approach to “ownership” and “value” of data, the reality is that ownership of data is not truly possible, because our data—data about us—is also data about others. Moreover, the concept of being paid for sharing data has the potential to harm vulnerable people, creating complex data inequality.

Rather than going down this likely dead-end of ownership, we need to start by improving our basic education about data, and by learning some fundamentals of what it means to be a digital citizen. For example, let’s understand how the data we share is about us and about others, how it is used, how it is deployed by companies to define us, and how it helps feed algorithms, machine learning, and artificial intelligence. A basic grasp of this may help us appreciate the value that data about us holds.


A good starting point is the work of Cathy O’Neil, most notably her 2016 book Weapons of Math Destruction. O’Neil outlines how the data we share with big companies is used to “bucket” us into categories based on assumptions made about us. These are then used to establish generalized headings—for example, about our gender, income, age, race, education, employment, friendship group, aspirations, food choices, and so on. These “buckets” may not accurately represent us, but they do define us. O’Neil writes, “They define their own reality and use it to justify their results.” This type of model, known as a “black box” algorithm, “is self-perpetuating, highly destructive and very common.”
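To make the idea of “bucketing” concrete, here is a minimal, entirely hypothetical sketch; it is not O’Neil’s model or any real company’s system, and the profile fields, thresholds, and bucket names are invented for illustration. It simply shows how a handful of inferred signals can be collapsed into a label that then stands in for the person.

```python
# Toy illustration only: how inferred signals might be "bucketed" into a
# marketing category that is then treated as fact. All fields, thresholds,
# and labels below are hypothetical.

from dataclasses import dataclass


@dataclass
class Profile:
    """Signals inferred from browsing, purchases, and location history."""
    est_income: int        # estimated by the model, never declared by the user
    postcode_prefix: str   # coarse location derived from IP or delivery data
    night_browsing: bool   # a habit inferred from activity timestamps


def assign_bucket(p: Profile) -> str:
    """Hard-coded rules standing in for an opaque scoring model."""
    if p.est_income > 80_000 and not p.night_browsing:
        return "affluent-professional"
    if p.postcode_prefix in {"E1", "SE15"}:  # crude geographic proxy
        return "urban-renter"
    return "budget-conscious"


# The label, however inaccurate, now shapes the offers this person sees.
print(assign_bucket(Profile(est_income=35_000, postcode_prefix="E1", night_browsing=True)))
# -> urban-renter
```

The point of the toy rules is that the label is built from guesses, yet the systems downstream treat it as ground truth.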

These black boxes are mostly inscrutable and often indefinable, even by the people who design and build them. But they define us, measure us, and then provide us with choices or decisions that impact the things we see and the options available to us. In the long term, they undermine our individual autonomy. Being a digital citizen should not mean being a data-slave or being data-punished. If you have ever taken a moment to read the terms and conditions for connected services, you might recognize the part that says something along the lines of “we may share your data with third parties to provide you with a more personalized service.” Personalized pricing based on cookie data is one example of what that means. But clearly personalized services online are not always to our financial advantage.

Tellingly, this issue of financial disparity is one of the areas where people suddenly understand that their data has value—even those people who say, “I have nothing to hide, nothing to fear,” or “I don’t care what these companies know about me; I am boring, they won’t learn much of any interest.” When those people learn that all that “boring” internet activity, browsing history, location data, online spending behavior, and IP addresses can actually influence the cost of the flights or hotels they are offered, and that what they are offered may differ from the deal given to the person next to them, it tends to have an impact. At first they don’t believe it, but once it is explained that the cookies on their devices store all this valuable information and, in turn, enable personalized pricing to take place, the concept of how data influences our online lives and the amount of money in our pockets suddenly crystallizes.
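As a purely illustrative sketch of how that might work, consider the following; the signal names, weights, and base fare are invented assumptions, not a description of any real retailer’s pricing system, but they show how a few cookie-derived signals could nudge the price one shopper sees relative to another.

```python
# Hypothetical sketch of personalized pricing. None of these signal names or
# weights come from a real retailer; they only illustrate the mechanism.

BASE_FARE = 120.00  # the advertised flight price


def personalized_price(signals: dict) -> float:
    """Adjust the displayed price using signals a cookie might carry."""
    multiplier = 1.0
    if signals.get("device") == "mac":          # treated as a proxy for spending power
        multiplier += 0.05
    if signals.get("repeat_searches", 0) >= 3:  # urgency inferred from re-checking fares
        multiplier += 0.10
    if signals.get("loyalty_member"):           # a small retention discount
        multiplier -= 0.03
    return round(BASE_FARE * multiplier, 2)


print(personalized_price({"device": "mac", "repeat_searches": 4}))        # 138.0
print(personalized_price({"device": "android", "loyalty_member": True}))  # 116.4
```

Two shoppers looking at the same flight can thus be quoted different prices purely on the strength of what their cookies reveal.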

So how do we get a grip on this? How do we start to develop a more informed approach to our digital citizenry? How do we learn to stop, look, and listen before sharing data?

First of all, we need to understand and acknowledge that we are no longer just citizens or consumers, but also products. Gone are the days when we filled our lives and our homes with inanimate objects designed to sit silently and serve us. In the connected world, when we engage with a connected or smart product, we serve it as much as it serves us. When we purchase a smart product or sign up to a connected service, we are required to sign terms and conditions and give consent to our data being acquired, monitored, analyzed, stored, and shared; otherwise we cannot use the product we have spent good money on. So we tick the box saying “agree,” and off we go, letting Amazon’s Alexa listen to our conversations so it can remember our preferences for next time to make life easier. We let YouTube remember what videos we have watched, our connected cars remember that on Tuesdays we take the kids to swimming practice, our fridges remember that we are on a low-carb diet, and our phones remember every single Wi-Fi point we have ever connected to. They don’t just remember, though; they remember and they share, usually with hundreds of data-hungry companies and brokers desperate to learn as much as they can about us so they can target us with products and services they think we might like.

Who fuels this? We do. These smart products and services don’t start smart; they start dumb, and we make them smart by telling them everything they ask for. But like a small child, they don’t ever stop asking, and like an elephant, they never forget. If the device is connected to the internet, we are not in control of how it is used. It is in control of how it uses us.


Never before have we been coerced into such a relationship, one where the value to us is so unclear. Exchanging money for goods and services? Yes, OK, that makes us a consumer. But spending money on a product and then also having to agree to share all aspects of our life with it for evermore in order for it to function? That is another story entirely. The traditional give-and-take model has been redesigned as take-and-take, to the benefit of businesses, with little true value to the consumer other than convenience.

If we are prepared to give up our autonomy for convenience, information, and globalized connectivity, it must be because it has improved our lives, right?

Well, yes, but mostly, no. Obviously, accessing information has never been easier or quicker, which is groundbreaking for society, but research indicates that the positive aspects of digitalization may not be the panacea hoped for or dreamed about. The values held dear by the internet pioneers of CERN, MIT, and Silicon Valley—who saw the World Wide Web and the internet as the freedom to learn and educate, to provide a space where ideas and thoughts could be shared, and where people would find value in exploring and engaging together—have not played out as was hoped.

What has happened is that these laudable ideals and values have been exploited by businesses, crooks, bullies, frauds, bots, and bad-acting nation-states, to name but a few. Clear dividing lines exist between those with access to technology and those without, an issue persistently exploited by big tech companies offering internet access solely via their own platforms. Moreover, problems exist with cybersecurity and government backdoors into data, and internet shutdowns are commonplace across many countries, causing mass disruption and impacting human rights. Profound problems are occurring around the volume of fake news and state interference with information, not to mention the harvesting of data to manipulate citizens, swing elections, and create dissent, distrust, and disruption.

But it is not just at the meta level that we are witnessing problems. What of the impact of technology at the micro level, on us as individuals and on our children? Has growing up in a world of 24-hour connectivity and digitization been positive? Has technology created a better world for our children? Are children more engaged, more tolerant, willing to develop more varied and nuanced arguments, knowing that they can build their knowledge from the vast amount of information available to them? Are they happier, more relaxed, more social?

Sadly, it appears not.

Jean M. Twenge is a professor of psychology at San Diego State University. She’s the author of a number of books exploring the impact of technology and connectivity on children, in particular those born between 1995 and 2012—a generation she refers to as the iGen.

Twenge’s research shows seismic changes in the way the iGen have developed, predominantly in the United States. She notes that 18-year-olds today behave like 15-year-olds; 15-year-olds behave like 13-year-olds; and 13-year-olds like 10-year-olds. On the plus side, this delayed development means that the iGen are drinking less, having less teen sex, getting driving licenses later, and spending less time meeting friends and hanging out in public environments. Their desire to rebel, to be independent of parents’ oversight, to have autonomy and take on individual responsibility, and to be grown up before their time has diminished across the board.

From the perspective of domestic public policy, much of this sounds positive. Societal issues arising from teen pregnancy, teen deaths due to drunk driving, and teen drug use have plagued governments for decades. But these traditional problems have been replaced by a whole raft of new ones stemming from the impact of new technology, in particular mobile phones, and the shift to connecting with people and experiencing life remotely, through a screen rather than face to face.

While we may think that having a mobile phone in our hand—particularly one which can access all the world’s information—means we are better connected and more social, the reality appears to be that we now have an entire generation of future adults, future leaders, future decision-makers, and future diplomats who are less rebellious, less risk-taking, and less emotionally resilient. Instead, they are more infantile, depressed, lonely, anxious, and more likely to commit suicide than any generation before them.


Twenge’s research reveals that girls in particular are more susceptible to depression and mental-health problems related to mobile-phone use, since reduced face-to-face communication leads to greater loneliness and less emotional closeness.

This area of research is leading to conversations about changing how we parent, how to control screen time, and whether phones have a place in schools. In 2018, France famously banned mobile-phone use in schools, to much condemnation: parents feared that banning screen time for students had the potential to undermine or disrupt the essential skills they would need for a life most likely spent engaging with screens and technology.

These arguments seem valid, but are they right? After all, isn’t it curious that the children of tech leaders from Google, Microsoft, Apple, Facebook, Snapchat, and so on are being raised and educated in environments where technology and screens are prohibited, and where pen-and-paper assignments and face-to-face interaction are encouraged?

The impact of technology on the iGen, in particular how they handle pressure, stress, and difficult emotional issues, is also explored in a body of work led by the New York University social psychologist Jonathan Haidt and the civil liberties lawyer Greg Lukianoff. In The Coddling of the American Mind, they argue that the behavior demonstrated by iGen members on university campuses in the United States and Britain is evidence that the mix of late development, lack of life experience, and mental-health concerns has led to a rise in university students seeking more protection and adult intervention in their affairs and interpersonal conflicts.


This isn’t just about kids squabbling with each other in their dorm rooms; this is about a rise in students denying public speaking engagements to people whose views they don’t like, and seeking the dismissal of teaching staff whose classes raise challenging subjects such as imperialism, gender issues, sexuality, abortion, and so on. This is about students censoring one another, shouting each other down, and, most alarmingly, claiming to be physically harmed by words and ideas.

This approach could be seen as admirable, a sign that the iGen has strongly held values and a sensitive conscience. They care deeply about vulnerable people; they want more equality and to eradicate harmful commentary and hate. But they want these things—and this is where the optimism flounders—only for those who share the same views. For those of differing opinions? Their voices must be silenced, for fear of causing emotional and physical harm.

This is a confusing and confused picture, and one with the potential to be easily manipulated or used to criticize young people attempting, in their own way, to influence and improve the world. Rather than dismissing it as people being misguided or overly sensitive, it is worth considering how and why we have arrived here and examining how we are all reacting to our environment.

In a world where the rules of the game have been changed beyond recognition, where what it means to be human has been altered forever—a world in which we are the product subjected to new methods of manipulation and where the solutions to online challenges are often in direct conflict with the offline rules we live by—everyone, not just iGen, feels more vulnerable and less sure.

The sooner we realize we are digital by default, that data is integral to who we are, that it has a value to us which must not be manipulated or dictated by big business, the sooner we can start to find some balance and develop values which are meaningful and relevant to this on/offline world we all now inhabit.


Renate Samson is a London-based expert in data privacy, security, and digital citizenry. She is the former CEO of a nongovernmental organization exposing the rise of data surveillance and encouraging improved online privacy and security. Samson has specific expertise and knowledge in privacy, biometrics, data rights, digital citizenry, and digital literacy. She is currently engaged in developing rights for digital citizens.
