Big Data... Bad Data? How Can We Stop the Latter From Occurring

Sat, 13 Apr 2019

Karugi Gathumbi
Student, SFU CMNS 353

‘Data is the new oil’, says The Economist. The digital revolution continues at an increasingly rapid pace. With great innovations occurring in the fields of big data and artificial intelligence, our personal data has more value now than ever before. In our own online lives, there has been a steady increase in requests for our data: name, identity number, email and home addresses, place of work, phone number. Such data might be collected for security purposes, for example, but often its full purpose is never disclosed. This growing appetite for personal data has led to its manipulation by data scientists and growing corporations. The questions that need to be addressed are: how much of our privacy are we giving up, how is this data being used, and which companies should be allowed to store our personal information?

Big data can be put to both highly beneficial and deeply problematic uses. I shall focus on the latter to emphasize how big a difference data can make when put into action. Its growing claws have sunk into democracy and revealed its most toxic trait yet: the ability to spread fake content tailored to each user’s interests. The Cambridge Analytica data scandal, which broke in the aftermath of the 2016 US presidential election, brought this specific problem to light. One year after the revelations, the scandal has left us with several lessons that every online user should understand, to ensure that democracy is upheld and the privacy of our personal data is respected.

The scandal involved Cambridge Analytica, a political consulting firm that worked on Trump’s campaign for the 2016 US presidential election. Through a third-party app built as a personality quiz by an academic researcher, the personal data of 87 million Facebook users was collected, an act that clearly violated Facebook’s own data use policies. Trump’s campaign exploited this data to build voter-targeting techniques, and some argue that it was an important factor in his electoral success.

What lessons can we learn from this? First, we need to ask about the purpose of social media platforms. They exist to connect users around the world and to help news and communication flow between them. Social media should therefore not be a tool for gathering information about users for the benefit of corporations or politicians. Acknowledging that social media companies are businesses that need to make a profit, they should look into making money in ways that are neither secretive nor manipulative of their users. These risks of data manipulation are now, more than ever, a major social problem.

Secondly, social media platforms must become more vigilant about the access they give to third-party applications. These apps often solicit users’ personal information without providing a detailed explanation of why the data is being collected and how it will be used. Users must always be informed at length about how much they are giving away by participating on these platforms. Each platform’s data privacy rules should not only be clearly outlined in its terms and conditions but also strictly adhered to by the social media companies themselves, so as to maintain trust and accountability.

Along these lines, social media companies’ ridiculously long terms and conditions are in need of serious reform. Numerous scholars have already looked into how long and discouraging it is to read the terms and conditions just to discover how much information these companies collect about us. ‘I Agree’, an art piece by Dima Yarovinsky, showcases how inaccessible these documents are: he printed the terms and conditions of major social media platforms onto scrolls and hung them in a gallery to highlight how unfairly our privacy details are presented to users.

Third, and most serious of all, is the inadequacy of our laws to protect social media users’ privacy and to punish those who profit from the data mining that takes place. The Cambridge Analytica scandal highlighted how unequipped legal systems are to deal with the repercussions of data manipulation. The UK’s Information Commissioner’s Office did investigate Facebook, resulting in a £500,000 fine: an amount that would barely affect a company of Facebook’s scale and could easily pass for pocket change. Had the EU’s General Data Protection Regulation (GDPR) been in force at the time, the fine could have been as high as 4% of Facebook’s global turnover.

Although it is easy to see these problems as a failure of technology, we ought to acknowledge the power dynamics at the root of these scandals. Social media inventors are elevated to positions of power by the applications they create, yet they rarely show the same boldness in taking responsibility as they do in striking agreements that sell their users’ data for personal gain. As both users and subjects of online media platforms, we ought to apply pressure and demand accountability for our privacy and personal information.

This blog is part of the Community Summit Classroom Partnership blog series.
