The DCMSC’s recommendations on regulating Facebook (etc.) – are they realistic, are they desirable?

Below are just a few reflections on the House of Commons' Digital, Culture, Media and Sport Committee's (DCMSC's) recommendations about how to prevent tech companies such as Facebook from spreading 'fake news' and disinformation. This blog is a follow-up to my previous post outlining the harms Facebook does in spreading lies, hate and confusion, as summarised by the DCMSC.

This is just my interpretation of the main points of the recommendations – as follows:

  1. Tech companies should assume legal liability for content published on their platforms.
  2. A compulsory code of ethics should be established, overseen by an independent regulator.
  3. A 2% digital services levy should be charged to cover the cost of regulating tech companies.
  4. Inferred data should be treated as private data.
  5. Electoral law is not fit for purpose and needs to be changed to reflect changes in campaigning techniques.
  6. Digital literacy should be a pillar of education.
  7. Friction measures should be put in place to prevent people from publishing online content too quickly.

Tech companies* should assume legal liability for content, no matter whether they are a platform or a publishing company

*The report uses the term 'tech company' to reflect the fact that companies such as Facebook aren't merely platforms or publishing companies; they are somewhere between the two and a whole lot more.

ATM what makes these platform-publishing companies different from old-media companies is that anyone can publish to a platform like Facebook without going through any kind of vetting or editing process, as they would have had to with an old-media publication.

At first glance it thus seems harsh to suggest that Facebook should bear legal liability for its content, because Facebook simply does not and cannot vet everything that two billion people publish on its platform.

BUT Facebook still profits from disinformation published through its platform, and it seems wrong for that to continue while the spreading of 'fake news' does real social harm. Making Facebook and the rest take legal liability for their content also seems to be the backbone of enforcing any other measures to control the spread of disinformation.

A compulsory code of ethics should be established, overseen by an independent regulator

The report doesn't say anything about the content of the code of ethics, but the body of the report suggests pretty firmly that it should prohibit, for example, disinformation from unverifiable sources, hate speech and revenge porn.

Not only should tech companies be obliged to sign up to this code of ethics; the report advises that they should also have relevant systems in place to highlight and remove 'types of harm', and to ensure that cyber-security structures are in place, the latter presumably so that 'harmful content' can't be snuck onto their platforms without them realising.
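
To make that concrete, here's a minimal sketch of what the 'highlight' half of such a system might look like, assuming a hypothetical blocklist of known disinformation domains; the domain names and function name are my inventions, not the report's:

```python
# Toy sketch of a 'highlight and review' check for harmful content.
# The blocklist, domain names and function name are all invented,
# purely for illustration - none of this comes from the report.
from urllib.parse import urlparse

KNOWN_DISINFO_DOMAINS = {"fake-news.example", "unverifiable-source.example"}

def flag_for_review(post_text, linked_urls):
    """Return reasons why a post should be sent for human review (empty if none)."""
    reasons = []
    for url in linked_urls:
        domain = urlparse(url).netloc.lower()
        if domain in KNOWN_DISINFO_DOMAINS:
            reasons.append("links to a known disinformation source: " + domain)
    return reasons

# A post linking to a blocklisted domain gets flagged rather than silently removed.
print(flag_for_review("Read the REAL truth here", ["https://fake-news.example/story"]))
```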

A 2% digital services levy should be charged to cover the cost of regulating tech companies

Unlike the above suggestion, this one looks relatively simple to implement – although the problem is that not only is there little political will to increase taxes, but Facebook is also well placed to avoid paying them through offshoring.
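
For a rough sense of scale, here's a back-of-the-envelope calculation, assuming a purely hypothetical £1 billion of UK-derived revenue (the figure is mine, not the report's):

```python
# Back-of-the-envelope sketch of the proposed 2% digital services levy.
# The revenue figure below is purely hypothetical, for illustration only.
uk_revenue_gbp = 1_000_000_000   # assumed £1bn of UK-derived revenue
levy_rate = 0.02                 # the 2% levy recommended by the report
levy_due_gbp = uk_revenue_gbp * levy_rate
print(f"Levy due: £{levy_due_gbp:,.0f}")   # Levy due: £20,000,000
```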

Inferred data should be treated as private data

This is an interesting tweak to the definition of what counts as ‘private data’.

ATM inferred data, which is data about you that has been inferred from what other people like you like, is not counted as private data. So, if everyone like me likes coffee, but I haven't expressed a like for it anywhere on Facebook, Facebook can still sell this inference on to a coffee advertising company, which can then target ads at me.
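
To illustrate the idea, here's a crude sketch of how a 'like' might be inferred from the stated likes of similar users; the data and the threshold are invented, and real recommendation systems are far more sophisticated:

```python
# Toy sketch of 'inferred data': guessing that I like coffee because
# users with overlapping tastes have expressed a like for it.
# All names, data and the 0.5 threshold are invented for illustration.
likes = {
    "me":    {"sociology", "running"},
    "user1": {"sociology", "running", "coffee"},
    "user2": {"sociology", "coffee"},
    "user3": {"running", "coffee"},
}

def inferred_likes(user, threshold=0.5):
    """Infer likes for `user` from items liked by users with overlapping tastes."""
    my_likes = likes[user]
    similar = [u for u, their_likes in likes.items() if u != user and their_likes & my_likes]
    counts = {}
    for u in similar:
        for item in likes[u] - my_likes:
            counts[item] = counts.get(item, 0) + 1
    return {item for item, n in counts.items() if n / len(similar) >= threshold}

print(inferred_likes("me"))   # {'coffee'} - never stated by me, but inferred
```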

This is the practice the report recommends should cease, effectively removing what I imagine is ATM a fairly lucrative source of profit for Facebook etc.

In order to enforce this, the independent regulator should have the power to check what data tech companies are holding on users, which is (handily enough) another recommendation made by the report.

Hence the report is suggesting something of a rebalancing of privacy – more privacy for users, and less for tech companies. This idea is further reinforced by another recommendation: that the algorithms tech companies use to generate inferred data should be accessible to regulators.

Electoral law is not fit for purpose and needs to be changed to reflect changes in campaigning techniques

Saying that UK electoral law is 'not fit for purpose' is a pretty damning statement – and it reflects the fact that platforms such as Facebook allow big money to influence the political process without anyone knowing the sources of funding, which means today's elections lack transparency.

To quote the report’s specific recommendations on this important issue at length:

>There needs to be: absolute transparency of online political campaigning, including clear, persistent banners on all paid-for political adverts and videos, indicating the source and the advertiser; a category introduced for digital spending on campaigns; and explicit rules surrounding designated campaigners’ role and responsibilities.

The report mentions two very specific ways this might be achieved: firstly, a searchable repository recording who paid for ads; and secondly, the use of security certificates which could authenticate social media accounts and allow people to check the biases of those writing content.

The report also recommends that companies be clearer about the use of shell companies, which can be used to obscure the sources of paid ads.
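
As a thought experiment, here's a minimal sketch of what one entry in such a searchable repository might record; the field names are my own assumption, since the report doesn't specify a schema:

```python
# Minimal sketch of one record in a searchable political-ad repository.
# Field names and example values are my own assumptions; the report
# recommends the repository but specifies no schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoliticalAdRecord:
    ad_id: str
    advertiser: str           # who placed the ad (shown on the persistent banner)
    ultimate_funder: str      # the real source of the money, behind any shell company
    digital_spend_gbp: float  # digital spend recorded as its own campaign category
    first_shown: date
    targeting_summary: str    # plain-language description of who the ad targeted

ad = PoliticalAdRecord(
    ad_id="2019-0001",
    advertiser="Example Campaign Ltd",
    ultimate_funder="Example Donor",
    digital_spend_gbp=12_500.00,
    first_shown=date(2019, 5, 1),
    targeting_summary="adults 18+, nationwide",
)
print(f"{ad.ad_id}: paid for by {ad.ultimate_funder} via {ad.advertiser}")
```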

Digital literacy should be a pillar of education

To my mind this is probably the least contentious of the lot… it involves equipping people with critical-thinking skills so they can freely choose how to engage with social media, which shouldn't even offend the oddball right-wingers.

At root this means teaching kids (and adults) to be sceptical about the validity of online sources, and teaching people how to check that validity. Maybe we could just push Critical Thinking a bit more in schools?

I guess the problem here is that we have a social problem (Facebook), and yet again it's teachers who have to step in and manage the negative effects it causes.

Friction measures should be put in place to prevent people from publishing online content too quickly

The whole idea here is to 'slow down' the way people use social media. This seems sensible: it might help prevent the spread of fake news, and it fits in well with the mindfulness agenda that's presently doing so well in schools.
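
As a rough illustration, a friction measure could be as simple as enforcing a cooling-off period between shares. A minimal sketch, assuming a 60-second interval of my own choosing (the report doesn't prescribe any particular mechanism):

```python
# Toy sketch of a 'friction' measure: enforce a cooling-off period between
# shares so content can't be re-broadcast instantly.
# The 60-second interval and all names here are my own assumptions.
import time

MIN_SECONDS_BETWEEN_SHARES = 60
_last_share_time = {}

def can_share(user_id, now=None):
    """Return True if enough time has passed since this user's last share."""
    now = time.time() if now is None else now
    last = _last_share_time.get(user_id)
    if last is not None and now - last < MIN_SECONDS_BETWEEN_SHARES:
        return False
    _last_share_time[user_id] = now
    return True

print(can_share("alice"))   # True  - first share goes through
print(can_share("alice"))   # False - second share within 60 seconds is blocked
```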

Is all of this even possible or even desirable?

Together, this package of recommendations seems like a reasonable and realistic starting point for putting Facebook in its place and redressing the current imbalance of power between Facebook, governments and citizens.

But it would be a long haul getting any of this put in place: there are so many cogs that need to turn together for effective regulation to happen (a regulator established, a code of ethics drawn up, penalties set, digital literacy embedded in education), and then Facebook needs to be willing to change its structure and become transparent enough to be investigated, which it will resist.

Then there's the problem of who the regulator would be: the UK government. Do we really want to trust it with more power over our social media platforms?

However, what's the alternative? Business as usual for Facebook, and the ever-increasing power of the company to manipulate the political process and benefit from disinformation which spreads lies, hate and confusion?
