Internet Regulations – WHY WEB REGULATION MAKES US LESS SAFE

Posted on 05. Nov, 2017 by Verity Burns in Cloud, desktop, internet, linux, windows


When something bad happens, the kneejerk reaction of ‘blame the internet’ is one that is over-
used and under-effective. In the wake of several devastating terror attacks on British shores, the
government has once again wheeled out its favourite scapegoat, calling for yet more red tape and regulation
online to stop the bad guys doing wrong.

While the desire to find a villain in a terrible situation is understandable, providing such a simplistic answer to one of the world’s most complex questions is short-sighted at best and dangerous at worst. Moreover, it
shows just how out of touch our politicians are when it comes to the wide-ranging issues of the web.
Internet regulation has been on Prime Minister Theresa May’s to-do list since she first became home secretary
in 2010. While the original Draft Communications Data bill was blocked by the Liberal Democrats in 2013, a
revised version – the Investigatory Powers Act – came into force under her premiership on 30 December 2016,
widely backed by both Labour and Conservative MPs.
Nicknamed the Snooper’s Charter, it gives increased government access to a user’s internet history for up to
12 months without the need for a warrant, alongside new powers for investigators to hack the devices of suspects
in order to collect data.
“That’s fine”, many of you will say. “I’m not doing anything illegal, I’ve got nothing to hide!” You might not, but as whistleblower Edward Snowden put it: “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Privacy is a right we’re all entitled to, and a right we enjoy daily when we close our curtains at night. And yet with this new legislation, it’s not just the police or the Ministry of Defence who have free rein over it – the Food Standards Agency and the NHS also have unfettered access to your data. With no warrant required, no questions asked, it’s a system that is wide open for abuse.
You don’t even have to be doing anything illegal. You could find yourself on the receiving end of a search by association, and never even know about it. A metaphorical rifling through your knicker drawer – they might not find what they were looking for, but it doesn’t stop them having a good gawk at your granny pants.
That’s before we even get into the privacy concerns of storing such a vast amount of information that hackers could have a field day with. You only have to consider TalkTalk’s data breach of 21,000 customers’ details to know that companies don’t always have their security act together.
Despite this, following the London Bridge terror attack, Theresa May announced she wanted to extend such powers further still, accusing the internet and its companies of allowing “safe spaces” for such ideology to breed.
That’s fighting talk, but it also ignores the fact that the large majority of terrorist incidents in the UK over the
last 15 years have been carried out by people known to
the security services. Forcing more regulation will only drive such toxic
mindsets deeper into the darkest corners of the internet and make them harder to observe, all while giving up
the privacy of millions of innocent people in the process.

There’s also a fundamental lack of understanding of how the internet works here. Following the revelation
that Westminster attacker Khalid Masood had used WhatsApp in the minutes before his attack, Home
Secretary Amber Rudd said it was “completely unacceptable” that its contents were inaccessible due to
end-to-end encryption. Her suggestion? Backdoor access when the government requires it. The problem there, of
course, is that it’s impossible. Any hole in encryption means encryption no longer exists, leaving private communications open to abuse and exploitation.
Tim Berners-Lee gave a stark warning in the wake of Rudd’s comments. “I know that if you’re trying to catch terrorists, it’s really tempting to demand to be able to break all that encryption,” he told the BBC.
“But if you break all that encryption, then guess what?
So could other people, and guess what? They may end up getting better at it than you are.”
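
He’s right, and the point is worth making concrete. Below is a deliberately simplified sketch in Python (real messengers like WhatsApp use the far more sophisticated Signal protocol, and the escrow arrangement here is purely hypothetical) showing that a backdoor is not a special door only investigators can open: it is just another copy of the key, and keys don’t check warrants.

```python
# Simplified sketch using the third-party 'cryptography' package
# (pip install cryptography). NOT how WhatsApp actually works --
# just an illustration that a backdoor is another copy of the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # secret shared by the two people chatting
escrow_copy = key              # hypothetical "lawful access" copy, held elsewhere

alice = Fernet(key)
ciphertext = alice.encrypt(b"meet at noon")

bob = Fernet(key)
print(bob.decrypt(ciphertext))  # b'meet at noon' -- the intended recipient

# Whoever obtains the escrowed copy -- investigator, hacker or hostile
# state -- decrypts exactly as easily as Bob does:
mallory = Fernet(escrow_copy)
print(mallory.decrypt(ciphertext))  # b'meet at noon'
```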
It also opens up more concerning questions. What would happen to any foreign company (like American-
owned WhatsApp) refusing any such co-operation with the British government? Could we see apps and websites
banned, like the Great Firewall of China?
Perhaps. Theresa May has already failed to rule out Chinese-style cyberblocking when questioned further
on the topic, but even then, what’s to stop terrorists (and other wrongdoers) making their own encrypted
communications if they’re pushed to?
That’s not to say the internet couldn’t pull its socks up a bit too. Continuing to improve the methods and effectiveness of self-regulation is key to avoiding it being taken out of our control entirely and replaced with a much heavier-handed approach.
Unfortunately, some of that could already be in the pipeline. In June, the UK government announced a joint
campaign with France to take stronger action against web companies that fail to remove “unacceptable
content” from their pages. That could be anything from child pornography to hate speech. While tech companies
undoubtedly have a role in preventing it, placing a legal liability on them is unfortunately not that simple.
Every minute, some 400 hours of video is uploaded to YouTube and 939,000 pieces of content are posted to
Facebook. No matter how big a company you are, policing that amount of content is impossible and relies on an
engaged community to report unsavoury material
alongside a company’s own measures. Even then, things inevitably slip through the net.
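
To put those numbers in perspective, a quick back-of-envelope calculation makes the point (the eight-hour reviewing shift is an assumption for illustration, not anyone’s real staffing model):

```python
# Back-of-envelope moderation arithmetic, using the figure quoted above:
# 400 hours of video uploaded to YouTube every minute.
UPLOAD_HOURS_PER_MINUTE = 400
MINUTES_PER_DAY = 60 * 24

upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
print(f"{upload_hours_per_day:,} hours of new video per day")  # 576,000

# Assumption: one human reviewer can watch 8 hours of footage per day.
REVIEW_HOURS_PER_DAY = 8
print(f"{upload_hours_per_day / REVIEW_HOURS_PER_DAY:,.0f} reviewers "
      "needed just to watch everything once")                  # 72,000
```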
To help, the government has said it wants to work with companies to produce tools to identify and remove harmful material automatically. While that sounds good on paper, as Ed Johnson-Williams of privacy campaigners the Open Rights Group points out, that also comes with its own concerns.
“First things first, how would this work?” he said in a blogpost. “It almost certainly entails the use of algorithms
and machine learning to censor content.
“Given the economic and reputational incentives on the companies to avoid fines, it seems highly likely that the companies will go down the route of using hair-trigger, error-prone algorithms that will end up removing unobjectionable content too.
“There are some that will say this is a small price to pay if it stops the spread of extremist propaganda but it
will lead to a framework for censorship that can be used against anything that is perceived as harmful.
“All of this might result in extremists moving to other platforms to promote their material. But will they actually
be less able to communicate?”
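
A toy example shows how easily that over-blocking happens. The blocklist and posts below are invented for illustration (production systems use machine-learning classifiers rather than word lists, as Johnson-Williams notes), but the hair-trigger failure mode is the same:

```python
# Toy content filter: flag any post containing a blocklisted term.
# Everything here is invented for illustration.
BLOCKLIST = {"bomb", "attack"}

def is_blocked(post: str) -> bool:
    """Return True if any blocklisted term appears in any word of the post."""
    return any(term in word
               for word in post.lower().split()
               for term in BLOCKLIST)

posts = [
    "how to build a bomb",                         # blocked, as intended
    "this new album is the bomb!",                 # blocked: harmless slang
    "heart attack symptoms everyone should know",  # blocked: medical advice
]
for post in posts:
    print(is_blocked(post), "-", post)
```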
Ideas like this also set a difficult precedent for companies with a worldwide presence, like Facebook.
Do May and Macron expect every country to accept our views of what is and isn’t harmful, or can individual
governments – less democratic governments – set their own guidelines? To say it’s opening a can of worms is
putting it lightly.
There’s no easy solution, and perhaps unhelpfully, this piece doesn’t seek to offer one. But as a web community,
we need to recognise the dangers facing us and come together to ensure our rights and freedoms aren’t taken
away under the guise of keeping us safe. The current and proposed legislation will do nothing of the sort – in fact,
it will do the opposite.
After the Manchester attack, Theresa May reminded us that the terrorists will never win; that they cannot, because “our values… our way of life, will always prevail”.
We must remember that in our response. The best way to deal with an attack on our core principles of justice, tolerance and freedom is to strengthen them further, not to take them away.

About Author
Verity Burns (@verityburns) is a technology
journalist writing about the highs and lows of
consumer technology. Also: dog enthusiast. www.verityburns.com