BRIDGING THE GAP: ON PRIVACY AND THE TECHNOLOGY TRUST DEFICIT
By Aqilliz
Published on July 29, 2020
Over time, the impact of technologies and digital infrastructures has become increasingly pronounced in our everyday lives, now powering everything from the most benign to the most critical of day-to-day activities. As we become increasingly reliant on apps, programmes, and devices, what does this say about the future of our data and how it’ll be used?
During our latest Defining MadTech webinar, entitled “From Research to Real World, Bringing Federated Learning and Differential Privacy to Life”, our guests discussed exactly that in a conversation moderated by our CEO G’man, featuring Saxena and Sarin as speakers.
Offering the differing perspectives of researcher and practitioner respectively, the resulting discussion touched on the subjectivity of privacy, the rise of privacy-enhancing technologies, and what we all need to be mindful of as we move towards a more equitable data-driven digital economy. To bring you up to speed, here are some essential takeaways from the conversation.
What is Differential Privacy and Federated Learning?
As defined by Saxena, differential privacy is an algorithmic toolkit that allows you to learn aggregated, general statistics about a data set without explicitly learning anything about a specific individual. Federated learning, meanwhile, refers to a broad class of technologies that allow many parties to run shared analytics across their combined data sets without the raw data ever being pooled in one place. Each technology enables something different, but paired together they allow for a more privacy-centric and efficient approach to analysing data for shared insights.
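To make the pairing concrete, here is a minimal, purely illustrative sketch (our own, not something shown in the webinar) of the two ideas working together: each party adds calibrated noise to a local count before sharing it (differential privacy), and the coordinator only ever sees those noisy aggregates, never the raw records (the federated pattern, shown here for a simple count rather than model training). All names, data, and the epsilon value below are hypothetical.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def local_dp_count(records, predicate, epsilon):
    """Each party releases only a noisy count. A count has sensitivity 1
    (adding or removing one person changes it by at most 1), so Laplace
    noise with scale 1/epsilon masks any single individual's presence."""
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_noise(1 / epsilon)

def federated_count(parties, predicate, epsilon):
    """The coordinator sums the parties' noisy aggregates; no raw
    records ever leave the party that holds them."""
    return sum(local_dp_count(records, predicate, epsilon) for records in parties)

# Three hypothetical parties, each holding its own users' ages.
parties = [
    [23, 35, 41],
    [29, 52],
    [38, 47, 61, 19],
]

# Estimated number of users over 40 across all parties, computed without
# pooling data or identifying any individual.
print(federated_count(parties, lambda age: age > 40, epsilon=0.5))
```

Production systems layer secure aggregation and careful privacy budgeting on top of this basic pattern, but the division of labour is the same: differential privacy protects individuals within each output, while the federated pattern keeps raw data where it lives.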
For more detail on these two technologies, give our Beginner’s Guide to Differential Privacy and Federated Learning a read.
Where are we with adoption?
That being said, Sarin highlights that these technologies are still classed as emerging in the practitioner’s domain and are not yet used at scale commercially. What’s in use today instead is a system of anonymisation and aggregation: individual data is anonymised and then aggregated to a certain level before insights are drawn from it.
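As a rough sketch of what that anonymise-and-aggregate approach can look like (the field names, cohort threshold, and data below are hypothetical, not drawn from any particular vendor’s system):

```python
from collections import Counter

def anonymise_and_aggregate(records, cohort_key, min_cohort_size):
    """Bucket already-anonymised records into cohorts and suppress any
    cohort too small to hide an individual within the group."""
    cohorts = Counter(cohort_key(record) for record in records)
    return {cohort: count for cohort, count in cohorts.items()
            if count >= min_cohort_size}

# Hypothetical records with direct identifiers already stripped out.
records = [
    {"age_band": "25-34", "city": "Singapore"},
    {"age_band": "25-34", "city": "Singapore"},
    {"age_band": "35-44", "city": "Jakarta"},
    {"age_band": "25-34", "city": "Singapore"},
    {"age_band": "35-44", "city": "Bangkok"},
]

# Only cohorts with at least 3 members are reported.
print(anonymise_and_aggregate(records, lambda r: (r["age_band"], r["city"]), 3))
# -> {('25-34', 'Singapore'): 3}
```

Unlike differential privacy, this approach carries no formal guarantee against re-identification; its protection is only as strong as the anonymisation step and the aggregation threshold, which is part of why the newer techniques are attracting interest.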
From a practitioner’s standpoint, evaluating any vendor or brand that purportedly leverages these technologies will prompt some level of scepticism. According to Sarin, adequately evaluating these new techniques while also encouraging their adoption is a classic chicken-and-egg conundrum. There remains a significant barrier to overcome in terms of trust: trust in the technology from businesses and users, as well as trust between users and corporations as a whole.
What does this say about the need for common standards?
Just as a technology is only useful once its usage hits critical mass, the same is true of industry standards. Standardisation is crucial to industry alignment, and that holds for any sector. In the case of data, Sarin highlighted that there needs to be a universal common currency for consent. You could have one specific to a single industry, but limitations would arise during opportunities for cross-industry collaboration, which, of course, is essential to growth and expansion in today’s digital economy.
A common global standard for consent, accepted across multiple stakeholders, is essential, and from a practitioner’s point of view, we just aren’t seeing that yet.
What has shaped the existing “oppositional” dynamic between users and corporations?
Today, public discourse on the dynamic between users and corporations is often framed in an oppositional light, according to Saxena. This framing is both unfortunate and misleading: after all, if a user were gaining absolutely no value from a service, they wouldn’t engage with it at all. That being said, in Saxena’s view, it’s important to note that from a researcher’s perspective, current data handling practices are very questionable. More can certainly be done, then, to make the value exchange taking place far more transparent and equitable.
Taken together, differential privacy and federated learning can support a far more mature, informed debate between consumers and enterprises, offering a much more tangible way to implement a model of trust, be it GDPR or a different framework, within a system.
Where are we with big data?
At the same time, Sarin argues that whether we like it or not, we cannot run away from the fact that the future will be increasingly powered by data-driven decision-making. Practitioners simply play a role in giving consumers and corporations the data-driven insights to make more informed decisions. With 5G and the Internet of Things coming to the fore, the amount of data being provided and exchanged for access to goods and services will only skyrocket further, yet the trust foundations simply aren’t in place to ensure a more productive discussion between consumers and corporations.
So what we should ask ourselves is this: how can corporations and institutions get the right amount of data to make decisions, and how does this benefit the consumer? These questions have yet to be answered.
What about the nuances of privacy?
As G’man points out, the definition of privacy varies greatly across markets, which operate with different cultural understandings of the concept as well as different regulatory frameworks (if any exist at all). What constitutes privacy in an emerging market, for one, may differ greatly from what we see in developed markets with far more robust data privacy laws.
According to Sarin, practitioners look at data through three different lenses in relation to privacy: usage, storage, and monetisation. How each is treated can differ not only by geography but also by industry. In Thailand, for example, telecommunications data can be freely used for internal purposes but is bound by very stringent storage laws and cannot be used for external monetisation of any kind. In fact, both Thailand and Indonesia have strict data localisation laws for telcos and banks stipulating that data cannot leave the country under any circumstances; it must be “consumed” locally. As a result, companies across different industries have to grapple with a distinct set of terms and conditions when engaging with consumer data in each market.
For Saxena, however, a fundamental problem is that privacy is poorly understood, and that the language around consent is buried in terms and conditions that the average consumer rarely reads.
When consumers and businesses are not fully aware of the difference between a fact about an individual and a fact about a demographic that individual belongs to (that a particular person bought a product, say, versus that 30% of their age group did), this prompts wider questions about the implications of privacy not only in an individual context, but also at the level of an aggregated cohort. At a policymaking level, this is one of the perennial challenges of drafting a consent framework, as it requires a holistic dissection of privacy across each user context.
As Saxena crucially points out, even differential privacy cannot be the be-all and end-all to the world’s privacy woes.