‘Data is the new oil’, says The Economist. The digital revolution continues at an increasingly rapid pace. With great innovations occurring in the fields of big data and artificial intelligence, our personal data has more value now than ever before. In our own online lives, there has been a steady increase in requests for our data: name, identity number, email and home address, place of work, phone number. Such data might be requested for security purposes, for example, but often the full purpose is never disclosed. This growing appetite for our personal data has led to its manipulation by data scientists and growing corporations. The questions that need to be addressed are: how much of our privacy are we giving up, how is this data being used, and which companies should be allowed to store our personal information?
Big data can be put to both highly beneficial and deeply problematic uses. I shall focus on the latter to emphasize how big a difference data can make when put into action. Its growing claws have sunk into democracy and revealed its most toxic trait yet: the ability to spread fake content tailored to each person’s interests. The Cambridge Analytica data scandal, which emerged in the aftermath of the 2016 US presidential election, brought this specific problem to light. A year on from the revelations, the scandal has left us with several lessons that every online user should understand, to ensure that democracy is upheld and the privacy of our personal data is respected.
The scandal involved Cambridge Analytica, a political consulting firm that worked on Trump’s 2016 presidential campaign. Through collaboration with a Facebook researcher and a third-party app built as a quiz, the personal data of 87 million Facebook users was collected – an act that clearly violated Facebook’s own data use policies. Trump’s campaign exploited this data to build targeting techniques in its support, and some argue that it was an important factor in his electoral success.
What lessons can we learn from this? First, we need to ask about the purpose of social media platforms. They exist to connect users across the world and to facilitate the sharing of news. Social media should therefore not be a tool for gathering information about users for the benefit of corporations or politicians. Acknowledging that social media companies are businesses required to make a profit, they should look into alternative ways of making money that are neither secretive nor manipulative of their users. These risks of data manipulation are now, more than ever, a major social problem.
Secondly, social media platforms must become more vigilant about the access they grant to third-party applications. These apps often solicit users’ personal information without providing a detailed explanation of why the data is being collected or how it will be used. Social media users must always be fully informed about how much they are giving away by participating actively on these platforms. The data privacy rules of each platform should not only be clearly outlined in its terms and conditions but also strictly adhered to by the social media companies themselves, so as to maintain trust and accountability.
Along these lines, social media companies’ ridiculously long terms and conditions are in need of serious reform. Numerous scholars have already examined how long and discouraging it is to read the terms and conditions just to discover how much information is collected about us. ‘I Agree’, an art piece by Dima Yarovinsky, shows how minutely detailed and discouraging social media companies’ terms and conditions are for the users expected to read them. He printed them out on scrolls and hung them in a gallery to highlight how unfairly the terms governing our privacy are presented to users.