dialogue
core_principle
scrappy_fiddle

The panopticon is a prison architecture that lets a guard observe every inmate while the inmates can never tell whether they are being watched.
Modern public spaces are designed in a similar way. Our streets are covered by cameras and our online spaces are riddled with trackers following our every move, our every click.
This reality is problematic on so many levels that I won't even bother listing them.
We could silo our personal data on a device disconnected from the internet and be certain no one can peek at it, but this would make any form of social interaction impossible. The whole point of socialising is to share information.

So having privacy in a social context means having the ability to share information with whomever we want, with the certainty that no one else will ever get access to it unless one of the people involved decides to share it further.
The most frequent answer to this issue is end-to-end encryption. It works great to ensure the privacy of our messages in apps such as Signal, but it won't be enough to solve our issue, because what makes the modern panopticon so pervasive is that sometimes we actually want it.
We like getting relevant, personalised information in our feeds: content that folks with tastes similar to ours enjoy. To provide this feature, a "trusted" third party may collect every user's data and use it to compute good recommendations for everyone. This feature is absolutely essential to navigate the modern internet and its infinite wealth of content. But could we get it with a different architecture?
Instead of trusting a single entity with our data, we could make this data public and turn the panopticon into a public square where everyone sees everyone else. This would enable us to implement a variety of recommendation algorithms that each user could choose from. However, this architecture would defeat any notion of privacy unless we can properly anonymise our data.
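To make this concrete, here is a minimal sketch of what a user-chosen recommendation algorithm over a public dataset could look like: user-based collaborative filtering with Jaccard similarity. The dataset, user ids, and function names are all illustrative, not part of any real system.

```python
# A minimal sketch of user-based collaborative filtering over a public
# dataset of likes. All data and names here are illustrative.
from collections import defaultdict

# Hypothetical public dataset: anonymised user id -> set of liked items.
public_likes = {
    "u1": {"a", "b", "c"},
    "u2": {"b", "c", "d"},
    "u3": {"x", "y"},
}

def jaccard(s1, s2):
    """Similarity between two users' taste profiles."""
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / len(s1 | s2)

def recommend(user, k=2):
    """Suggest items liked by the k most similar users,
    excluding items the user already knows."""
    seen = public_likes[user]
    neighbours = sorted(
        (u for u in public_likes if u != user),
        key=lambda u: jaccard(seen, public_likes[u]),
        reverse=True,
    )[:k]
    scores = defaultdict(float)
    for u in neighbours:
        sim = jaccard(seen, public_likes[u])
        if sim == 0:
            continue  # ignore users with no overlapping tastes
        for item in public_likes[u] - seen:
            scores[item] += sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u1"))  # → ['d'], borrowed from u2, the closest neighbour
```

Because the dataset is public, anyone could swap this algorithm for a different one without asking a platform for permission, which is the point of the architecture.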

Data anonymisation is tricky. Removing personally identifiable information isn't enough, as there are techniques to de-anonymise data by cross-referencing different datasets. Nevertheless, it is doable: limit the metrics we track to the strict minimum, and apply k-anonymisation and other anonymisation techniques.
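As a rough illustration of k-anonymisation, here is a toy sketch that generalises a quasi-identifier (exact age becomes a decade bucket) and suppresses any record whose quasi-identifier combination is shared by fewer than k people. The records and the choice of quasi-identifiers are invented for the example; real schemes need far more care.

```python
# A toy sketch of k-anonymisation by generalisation and suppression.
# Records and quasi-identifiers are illustrative only.
from collections import Counter

records = [
    {"age": 23, "city": "Lyon", "liked": "jazz"},
    {"age": 27, "city": "Lyon", "liked": "rock"},
    {"age": 25, "city": "Lyon", "liked": "jazz"},
    {"age": 41, "city": "Paris", "liked": "rap"},
]

def generalise(record):
    """Coarsen quasi-identifiers: exact age -> decade bucket."""
    bucket = f"{record['age'] // 10 * 10}s"
    return {"age": bucket, "city": record["city"], "liked": record["liked"]}

def k_anonymise(rows, k=3):
    """Keep only records whose quasi-identifier combination
    (age bucket, city) is shared by at least k records."""
    coarse = [generalise(r) for r in rows]
    groups = Counter((r["age"], r["city"]) for r in coarse)
    return [r for r in coarse if groups[(r["age"], r["city"])] >= k]

# The lone Paris record is suppressed; the three Lyon twenty-somethings
# survive because they are indistinguishable from each other.
print(k_anonymise(records, k=3))
```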
What is even trickier in our case is doing this while still detecting falsified data. We need ways to identify the source of our datapoints, to ensure they come from actual human beings and not from bots trying to skew our recommendations. This question opens a deeper rabbit hole.
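To show why this is a rabbit hole and not a solved problem, here is a naive sketch of datapoint authentication using per-user secret keys and HMAC tags (all names and keys are invented). It does reject forged datapoints, but notice the tension: whoever holds the registry of keys can link every datapoint back to its author, which is exactly what the anonymisation above tries to prevent.

```python
# A naive sketch of datapoint authentication with per-user keys.
# Illustrative only: the verifier here can de-anonymise everyone,
# which is the very tension the article describes.
import hashlib
import hmac

# Keys registered when a human signs up (verified out of band, somehow).
registered_keys = {"alice": b"alice-secret", "bob": b"bob-secret"}

def sign(user, payload: bytes):
    """Attach an HMAC tag computed with the user's secret key."""
    tag = hmac.new(registered_keys[user], payload, hashlib.sha256).hexdigest()
    return {"user": user, "payload": payload, "tag": tag}

def is_authentic(point):
    """Accept only datapoints carrying a valid tag from a registered key."""
    key = registered_keys.get(point["user"])
    if key is None:
        return False  # unknown source: likely a bot
    expected = hmac.new(key, point["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, point["tag"])

good = sign("alice", b"liked:jazz")
forged = {"user": "mallory", "payload": b"liked:spam", "tag": "00"}
print(is_authentic(good), is_authentic(forged))  # → True False
```

Schemes that break this link between authenticity and identity exist, but they are precisely the kind of solution the next paragraph invites you to dig into.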
I sadly don't have all the answers here, but I hope this short article will encourage you to search for solutions. We might be able to solve this tricky question together! 🤜💥🤛 Please get in touch if you have any thoughts or ideas on the question.
Published here :

