CURRENT ARTICLES ABOUT REGULATING THE INTERNET
SOCIAL MEDIA REGULATION
https://www.thetimes.co.uk/article/tech-bosses-face-court-if-they-fail-to-protect-users-q6sp0wzt7
Social media executives will face fines and the threat of criminal prosecution for failing to protect people who use their services under plans to regulate tech giants in Britain for the first time. The government is to publish next month its response to a consultation on policing social media companies such as Facebook and Google after Britain leaves the European Union.
Ministers want to place the companies under a statutory duty of care, which will be enforced by Ofcom, the broadcasting watchdog. The government is also expected to introduce a “senior management liability”, under which executives could be held personally responsible for breaches of standards. US tech giants would be required to appoint a British-based director, who would be accountable for any breaches of the duty of care in this country. More draconian powers included in the original consultation, such as asking internet service providers to block websites or apps from being used in Britain, are likely to be dropped.
A levy on technology companies is being considered to help to fund the extra staffing and the cost of new regulation. The government wants to ensure that any penalties imposed on technology companies are “proportionate” so that smaller social media developers are not hit by the same level of fines as giants such as Google and Facebook.
Under the plans, Ofcom will draw up legally enforceable codes of practice that spell out what tech companies need to do to protect users from harmful content. They will cover terrorism, child abuse, illegal drug or weapon sales, cyberbullying, self-harm, harassment, disinformation, violence and pornography. Fines for those that breach the codes could be linked to annual turnover or the volume of illegal material online.
Theresa May’s government held a consultation in the summer on proposals to regulate social media companies. Her weak political position and the lack of parliamentary time because of Brexit meant that legislation was never drawn up.
Boris Johnson put forward new duty-of-care laws in the Conservative manifesto. “We will legislate to make the UK the safest place in the world to be online — protecting children from online abuse and harms, protecting the most vulnerable from accessing harmful content, and ensuring there is no safe space for terrorists to hide online,” the document said.
In the Queen’s Speech this month the government pledged to “develop” legislation in response to the consultation, to which there were more than 2,000 submissions. Voluntary codes of practice will be published before the legislation, in an attempt to curb the use of the internet by terrorists and paedophiles. “This will ensure companies take action now to tackle content that threatens our national security and the physical safety of children,” ministers said.
When the consultation on regulating social media companies was published in the summer there were concerns that it could lead to regulation of the press by the back door. The Tories then made a manifesto commitment to “defending freedom of expression and in particular recognising and defending the invaluable role of the free press”.
Nicky Morgan, who stood down as an MP at the election, was reappointed as culture secretary after being given a peerage by the prime minister. She has backed the principle of a duty of care and said at the Tory conference that she would support a regulatory regime like that imposed on the financial sector.
The government has acknowledged that regulating large companies based abroad will be challenging. It said in the consultation: “It is vital that the regulator takes an international approach.”
Cleaning up social media: what is the plan?
FACTCHECKING
The public have been increasingly turning to factchecking websites such as the independent charity Full Fact, the BBC's Reality Check, Channel 4 News FactCheck and the Guardian's Factcheck to verify claims made by politicians. In the recent campaign, the Conservative Party's main media account on Twitter masqueraded as 'factcheckUK' to hide its political origins and push pro-Conservative material to the public.
Google has banned eight different adverts paid for by the Conservatives over the last month because they broke its rules, The Independent can reveal.
The move by the search giant comes amid mounting concerns about the Tories' use of disinformation and fake news as campaigning tools at the general election.
Transparency data released by the search giant this week shows that the adverts "violated Google's advertising policies" and had been removed.
Six of the banned adverts were put up by the Tories on the day of the Labour manifesto launch - when the Conservative Party set up a fake website called labourmanifesto.co.uk purporting to contain the opposition's policies.
During that incident, the Tories paid Google to push its fake version of the Labour manifesto to the top of search results for those searching for the real document.
That incident followed another earlier in the week in which the Tories set up a fake fact-checking service, which they used to pump out party lines from their press office to unsuspecting social media users.
Google would not disclose the content of the Tory adverts that were pulled nor the exact reasons that they were taken down. The company's guidelines however say "we value honesty and fairness, so we don't allow the promotion of products or services that are designed to enable dishonest behaviour".
It also specifically lists "fake documents" as one of the things that cannot be promoted in advertisements and says "we don't allow ads or destinations that deceive users".
According to the records, none of the election's other major parties had adverts pulled during the campaign itself, although Nigel Farage's Brexit Party had five adverts pulled at the end of October for policy violations.
Tory adverts that were not taken down still include links purporting to send users to "Corbyn's Labour manifesto" which point to "labourmanifesto.co.uk" - the Tories' site. Other adverts still visible purport to link to "Labour's Brexit Policy", "Labour Party Education Policy" and "Labour's Defence Policy" but instead send users to the Conservative website.
"The fact that the Conservatives are resorting to fake news shows that they have no plans or desire to improve the lives of people in Britain," said Ian Lavery, Labour's chair.
Other than the Google ads and the fake fact-checking service, the Tories have been criticised for other uses of disinformation or fake news. The latest scandal erupted on Friday after it emerged the party had edited footage of BBC reporters to make it look like they were endorsing Tory attack lines about a "Brexit delay". The party was also previously criticised for doctoring a video of Labour Brexit chief Keir Starmer. On another occasion, a candidate in a marginal seat was caught on camera setting up a fake encounter with a swing voter to try to deceive a journalist.
The European Commission warned ahead of the UK general election that disinformation was still a problem and that there might be a case for EU-level intervention to rein it in.
UK political parties have spent a total of £358,800 on adverts with Google since March this year. The Conservative Party did not respond to a request for comment on this story.
Cambridge Analytica
Every story has a beginning. For me, the story of Cambridge Analytica and Facebook that has unfolded so spectacularly this past week began in a cafe in Holloway, north London, at the beginning of 2017.
I was having a coffee with my colleague Carole Cadwalladr. She had recently written a series of articles that set out how certain Google search terms had been “hijacked by the alt-right”. In the course of that investigation she explained how she had come across another pattern of activity apparently linking the Trump and Leave.EU campaigns, one that appeared to involve the billionaire Robert Mercer, Steve Bannon – then of Breitbart – and a secretive British company called Cambridge Analytica. She laid out the elements of what she knew, and what she didn’t, testing her conviction that “there’s definitely something there”.
In the year and more since, Carole has painstakingly pieced together that story from its disparate and determinedly obstructive elements. She has done this in the face of much scepticism, a series of legal challenges and several attempts at intimidation (last summer the Leave.EU campaign posted a photoshopped video of her being beaten up and circulated it for days). Last weekend, however, the “something there” that Carole had intuited about the story, and its full implications for our democracy, came into proper focus.
The trajectory of what happened since is a case study in how complex truths stubbornly pieced together can eventually capture the wider imagination. The first act in this drama was a legal challenge by Facebook, an attempt to suppress Carole’s interview with the Cambridge Analytica whistleblower Christopher Wylie the day before it appeared. They must have known a little of what was coming.
The Observer had made the decision to share the revelations with the New York Times and with Channel 4 News, to pool resources and broaden its reach. Even so, on Sunday morning some experienced commentators initially just shrugged. On his sofa, Andrew Marr felt he could pretty much ignore the story, dismissing it as “too complicated” to merit much attention. JK Rowling, meanwhile, suggested in a tweet that it was “surely the story of the year, if not the decade”. In the days that followed, the latter reaction has seemed closer to the mark.
This was partly down to one of the more memorable pieces of journalistic theatre. Rarely, in real time, can hypocrisy have been exposed so pointedly as on Monday night’s Channel 4 News. That afternoon, Alexander Nix, the self-possessed Etonian chief executive of Cambridge Analytica, had been plaintively professing his company’s rectitude to the BBC, and suggesting that he was the victim of a co-ordinated smear campaign. That evening, he and his managing director, Mark Turnbull, were shown explaining to undercover Channel 4 reporters exactly how they had manipulated the voters of democracies across the globe, notably in the US, with unsourced propaganda that was not necessarily true; and boasting of sting operations and honey traps.
In a way, that was only the warm-up act of the story. Nix’s unwitting confessions were in marked contrast to the silence from Facebook’s chief executive Mark Zuckerberg. All anyone knew of Facebook’s response on Monday was that it had a swat team of data analysts working overnight at Cambridge Analytica’s offices – though that same data remained out of bounds for the government’s information commissioner, Elizabeth Denham, who was trying in vain to get a warrant to access files before they were potentially compromised. Zuckerberg declined to face his own employees at a meeting on Tuesday, while again a press statement from his PR team suggested that “the entire company is outraged we were deceived”. The continued silence seemed to tell another story, however, not least to Wall Street; in those two days nearly $60bn was wiped off the Facebook market capitalisation, and #whereszuck became a top-trending social media meme.
As the silence persisted, a little of Zuckerberg’s public relations dilemma became clear. The original legal threat to the Observer was over the question of whether the 50 million profiles handed first to the Cambridge academic Aleksandr Kogan and then sold on to Cambridge Analytica constituted a data breach. Facebook insisted that it did not, but that insistence itself amounted to a public acknowledgement of a business model that appeared to allow the unauthorised sale of private data.
When Zuckerberg did eventually come out to try to explain this, his crafted statement was another effort to make the exploitation of the 50 million profiles seem like a technical problem, a glitch. His tone was the default position of T-shirted Silicon Valley plutocrats who insist that they are on our side, while squirrelling away their billions. What had happened was not a data breach “but a breach of trust”, he suggested, a sentiment he repeats in a personal advertisement in today’s newspapers, including the Observer.
This appeal to Facebook users’ faith in its better nature recalled an infamous recorded exchange from the early days of Facebook at Harvard, when Zuckerberg was in conversation with a friend.
Zuck: “Yeah so if you ever need info about anyone at Harvard, just ask. I have over 4,000 emails, pictures, addresses, SNS.”
Friend: “What? How’d you manage that one?”
Zuck: “People just submitted it. I don’t know why. They ‘trust me’. Dumb fucks!”
People closest to the beneficiaries of Cambridge Analytica’s work have been quickest to suggest that it was negligible. Though Cambridge Analytica’s own claims suggest that its tens of thousands of propaganda items were viewed billions of times, Steve Bannon suggested the effect was insignificant: people have minds of their own and are not swayed by what they see and hear on the internet, the argument goes.
To counter this, you don’t really have to point out that we live in a world where a significant percentage of people now believe that the Sandy Hook massacre was a hoax perpetrated by actors, or that sharia law is about to take hold in the home counties; you just have to point to the history of advertising.
Propaganda works best, as Cambridge Analytica’s Mark Turnbull helpfully pointed out to camera, when you do not know its source. He excitedly detailed the way in which extremist views and fake news could be “seeded” in the bloodstream of social media and then take hold. Facebook in particular has, in this respect, delivered what propagandists have always wanted, a complete blurring of the line – still sacrosanct in traditional media – between editorial and advertising, often delivered with the added reliability of having been “shared” by a “friend”.
As David Kirkpatrick, Facebook’s authorised biographer, noted, one characteristic of the first eight years of the company was a tendency for Zuckerberg and his inner circle to sit around and try to establish exactly what business they were in.
Early on, Zuckerberg liked to refer to his creation as “a directory of people” in these discussions; later he came to focus on “connectivity”. A more cynical response to that question has always been that they were in the advertising business, but as the writer John Lanchester pointed out, even that doesn’t really get to the truth. “Even more than it is in the advertising business, Facebook is in the surveillance business.” It is designed to watch our every move, our every like and dislike, and sell those findings to the highest bidder.
For the majority of the 2.1 billion users of Facebook up until now, that has seemed like a price worth paying, in order to connect with friends and family. The wisdom promoted by the tech companies to their users is that privacy is only for those with something to hide. What the Cambridge Analytica story has begun to reveal about those companies’ use of our intimate history of likes and dislikes, of private messages and personal photos, is that they can not only be used to target us with holidays and theatre tickets, but also to shape our news of the world, and our political ideas, in ways we don’t recognise.
That understanding has, it seems, now reached something of a tipping point.
Immediate ramifications of the exposé will see the prime movers in this story called to clarify previous statements to government committees on both sides of the Atlantic. Beyond that, last week may prove an important step in a long reckoning over whether monopolistic global corporations are best trusted with so much marketable personal data to exploit for personal gain.
The great rupture of the industrial revolution led eventually to the growth of trade unions and a new balance of power between capital and producer. The sheer pace of change of the digital revolution in our century has meant that the equivalent rebalancing is critically overdue. Last week even the Economist was persuaded of the need for Facebook in particular to make radical changes to its data practices, or for governments to call time on its model. “If Facebook ends up as a regulated utility with its returns on capital capped, its earnings may drop by 80%. How would you like that, Mr Zuckerberg?”
When faced with the often anonymised global entity of the internet, it has been easy to buy the argument that the forces at work in it are too opaque and complex to hold to account. What the Cambridge Analytica revelations bring to light – through old-fashioned journalistic persistence – is that those forces are, in fact, open to the same kinds of manipulation and corruption that any media needs protection from, but on a far greater scale. The story has given the growing unease about the unaccountable empire-building of Silicon Valley tech companies an all-too-human set of faces. It may not be a pretty sight, but it is not one that will be easily forgotten.
The Observer’s story last week on the use of Facebook data by Cambridge Analytica sparked a worldwide response:
“This is a serious moment for the web’s future. But I want us to remain hopeful.”
Tim Berners-Lee, world wide web inventor
“[Facebook] has been misleading in its evidence to a British parliamentary committee, arrogant in its instinct to shirk the responsibilities that come with power.”
The Times
“It is absolutely right the information commissioner is investigating … we expect all the organisations involved to cooperate…” Brian Acton, WhatsApp
“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you … We also made mistakes.”
Mark Zuckerberg
“What is disturbing is that Facebook has not yet identified and alerted users whose profile information was vacuumed up.”
New York Times
“There are a number of inconsistencies in your evidence... Giving false statements to a select committee is a very serious matter.”
Damian Collins MP in a letter to Alexander Nix
Kevin Rawlinson, 5 February 2019, The Guardian
Social media companies are to be told to sign a legally binding code of conduct as ministers seek to force them to protect young people online, it has been reported.
Ministers have been considering proposals for an internet regulator and a statutory duty of care. It was reported on Monday that the digital minister, Margot James, was planning to announce such plans on Tuesday.
“We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms and are seriously considering all options,” said a spokesman for the Department for Digital, Culture, Media and Sport (DCMS).
“Social media companies clearly need to do more to ensure they are not promoting harmful content to vulnerable people. Our forthcoming white paper will set out their responsibilities, how they should be met and what should happen if they are not.”
According to a report, James is preparing to use a speech at a conference for Safer Internet Day to raise the case of 14-year-old Molly Russell, who took her own life in 2017. After her death, her account on the Facebook-owned platform Instagram was found to contain material about depression and suicide.
“The tragic death of Molly Russell is the latest consequence of a social media world that behaves as if it is above the law,” James is expected to say.
The suicide prevention minister is preparing to warn that the normalisation of self-harm and suicide content online poses a risk similar to child grooming.
Jackie Doyle-Price is expected to join James in calling on social media companies to take action to protect users from harmful content.
“We must look at the impact of harmful suicide and self-harm content online … in normalising it, it has an effect akin to grooming,” she will say. “We have embraced the liberal nature of social media platforms, but we need to protect ourselves and our children from the harm which can be caused by both content and behaviour.”
According to the Daily Mail, James will add: “There is far too much bullying, abuse, misinformation as well as serious and organised crime online. For too long the response from many of the large platforms has fallen short.
“We are working towards the publication of the final policy paper, and consultation, before bringing in a new regulatory regime. We will introduce laws that force social media platforms to remove illegal content, and to prioritise the protection of users beyond their commercial interests.
The paper reported that James will call attention to “no fewer than 15 voluntary codes of conduct agreed with platforms since 2008”, adding that that is an “absolute indictment of a system that has relied far too little on the rule of law”.
According to a report from the UK’s media watchdog, Ofcom, the proportion of 12- to 15-year-olds who said they had been bullied over text messages and apps increased from only 2% in 2016 to 9% last year, while the proportion of those who reported having been bullied on social media nearly doubled from 6% to 11% in the same period.
BuzzFeed first reported last September that the government was considering proposals for an internet regulator.
Government Publishes Social Media Regulation Plans
The government has published plans for an independent regulator that would be capable of imposing huge fines on internet firms that propagate dangerous or illegal content.
The Online Harms White Paper, jointly proposed by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, is a step toward imposing curbs on social media and other internet firms.
It proposes an independent body, either a new regulator or an existing one such as Ofcom, that would create a code of practice for internet firms.
The body would be funded by tech firms themselves, possibly through a levy.
Accountability
Senior managers could be held personally accountable for abuses under the proposal, which also suggests that companies that don’t comply could be blocked or delisted from search engines.
Culture minister Margot James has suggested that the regulator could impose fines of up to billions of dollars in the case of large tech firms.
Other proposals include an obligation to publish transparency reports about measures to combat harmful content, an obligation to respond quickly to user complaints and a requirement to minimise misinformation during election periods.
The government plans to launch a consultation on the matter on Monday.
Charities have called for such regulations for years, and a number of high-profile incidents in recent months, ranging from massive data breaches to the live-streaming of the Christchurch shootings in March, have increased pressure on governments to take action.
Digital, Culture, Media and Sport secretary Jeremy Wright said voluntary actions by industry “have not been applied consistently or gone far enough”, while home secretary Sajid Javid said dangerous content “is still too readily available online”.
Freedom of speech
But internet firms and campaigners said the proposals could be harmful to competition and freedom of speech.
Daniel Dyball, UK executive director of the Internet Association, called for “proposals that are targeted and practical to implement”.
“The scope of the recommendations is extremely wide, and decisions about how we regulate what is and is not allowed online should be made by parliament,” he said.
Freedom of speech campaigners Article 19 said the government “must not create an environment that encourages the censorship of legitimate expression”, and said it opposed any duty of care being required of internet platforms, arguing that doing so would encourage a restrictive approach to content removal.
Facebook and Twitter both said any new rules should strike a balance between online safety and fostering an innovative digital economy.
Facebook chief executive Mark Zuckerberg recently called for standardised international rules that would put all internet firms on a level playing field.
Ignore Zuckerberg's self-serving rubbish. Facebook must be regulated.
Simon Jenkins, Opinion, The Guardian, 31 October 2019
Intrusion, bullying, obscenity, extremism: we must define what ‘online harm’ means and take action to eradicate it.
Let us be grateful for small mercies. Thank you Twitter for banning political advertising. Given that such advertising is by its nature biased, tendentious and hard to check, Twitter is behaving as a good publisher should. Politicians may make full use of its outlet. That is democracy. But as the organisation’s chief, Jack Dorsey, points out, with social media awash in “micro-targeting, deepfakes, manipulated videos and misinformation”, those who control it should keep it as clean as possible. Money may not buy truth, but it should not drown fairness.
Facebook disagrees. Its boss, Mark Zuckerberg, declares it “not right for private companies to censor politicians or the news”. He subscribes to the romantic view of social media as the yellow brick road of digital’s global village. The road should not dictate who travels along it, it should just collect the tolls. That includes advertisers, the sustenance of Zuckerberg’s $500bn empire. This argument reruns the celebrated – or notorious – US supreme court ruling on Citizens United in 2010, which overturned restrictions on campaign finance as being an offence against free speech. From then on, lobbyists, corporations, tycoons, anyone with money, could spend what they liked during an election. It declared open season for fake news, targeted ads and dark money “Super Pacs”. That season gave us Donald Trump, and has yet to close.
There was some argument for the 2010 decision, as there is in Zuckerberg’s opposition to curbs on advertising. If an ad is mendacious, let the reader judge it as such. Don’t censor the web. If you suppress one form of debate, you shift power to another – in this case editors of the “elitist media”. The democracy of the web should be free to air, come poor and rich. This might have cut ice in digital’s golden age, the nineties and noughties. If you let everything hang out, was the line, global peace would emerge as if from a celestial algorithm. It has not turned out that way. Not a day passes without some new evil being laid at social media’s door, from Instagram’s part in the death of British teenager Molly Russell to the persecution of Britain’s female MPs.
We can all see benefits in the digital revolution. We can also see bad driving out good. As catalogued in Shoshana Zuboff’s tome of our times, The Age of Surveillance Capitalism, we get intrusion, bullying, obscenity and extremism in a thousand devious guises.
Zuckerberg’s oft-repeated claim that he is not a “publisher” is self-serving rubbish. His platforms are devices to project information and opinion into the public realm for profit – as precise a definition of publication as I know. To allow users to do this anonymously and without liability is a licence to mendacity and slander. It pollutes debate and corrupts democracy.
Free markets are a blessing to human society, but since Adam Smith they have tended towards monopoly and self-interest. Social media are no different. Zuckerberg makes the same claim as America’s gun lobby: Facebook does not kill people, people do. As the Edward Snowden revelations showed, the boundary between social media and mass surveillance, whether by capitalism or the state, is now hopelessly permeable. Like any market, this one must be regulated. It is a poor comment on western democracy that it still is not.
Anyone delving into this realm is easily lost in technology and jargon, but we must try. Wiser heads – including at times Zuckerberg himself – have recognised that things are getting out of hand. Facebook, Google and Twitter this year lobbied the British government to introduce a mechanism for combating “online harm”. They remain reluctant to take that responsibility themselves.
I believe this presents a regulatory challenge on a scale not seen since the early debates over nuclear weapons. We must somehow disentangle the public from the private digital sphere. There must be control over those who trade the identities and behaviour of individual citizens. We must define the concept of causing online harm, as distinct from causing offence. We must place obligation and liability on digital “publishers” for the accuracy and legality of third-party content – as we do on conventional publishers.
As in any public forum, anonymity must end for those expecting the privilege of access to public debate. Facebook and others accept de facto moral liability, by filtering and censoring “unsuitable” content. They accept, at last, that showing teenagers how to take their own lives is not free speech but public menace.
Now they must accept that regulators should dictate the rules of democracy, or it will degenerate into an online Game of Thrones. Publishers have an obligation not to disseminate lies. One reason for the ongoing popularity of the mainstream media, such as the BBC, the New York Times and this newspaper, is that their information is trusted. Some editor is subjecting their output to some ideal of balance, fairness and accuracy. Material is not spewed out at random, or targeted so as to confirm individual bias. Editors are not, as early web enthusiasts predicted, out of date. They are necessary.
The British press has fought against state regulation, other than over monopoly and laws of libel. I think that is right, not out of principle but because statutory regulation is not justified by press misbehaviour or imbalance – or not yet. Self-regulation sort of works. At present it is not the mainstream media driving decent people out of politics. It is what Zuboff calls the “psychic numbing of strip-search technology”. We have not begun to grasp its potency or its aura of invincibility. Today’s robber barons steal from us not our money but something more important, our privacy and integrity as individuals. They cannot remain unregulated.
• Simon Jenkins is a Guardian columnist
Ministers have been taught a lesson on porn
Edward Lucas, The Times 21 October 2019
The government has wisely binned its plan for passes to adult sites — other issues matter more.
Few readers will have heard of Mindgeek. But many (though they may not admit it) will have strayed on to the sites it runs. The world’s largest pornography company provides viewing material, mostly free of charge, to more than 115 million people a day.
Although only a handful of religious hardliners still fret about the dangers of what used to be called self-abuse, many reckon that giving children unrestricted access to pornography is undesirable. The government's solution was to legally oblige adult-content sites to check users' credentials. Only those who had bought, in effect, a digital passport proving their age would be allowed in. Mindgeek's own in-house AgeID system was at the centre of the plan — in effect giving it the same role in the world of porn that the BBC's licence fee has in conventional broadcasting.
Handing a lucrative new business line to an already dominant company was questionable. So was the idea of creating, in effect, registers of porn users, with the accompanying potential for hacking, leaking and blackmail. However, a mandatory system was doomed for other reasons. The internet makes distance irrelevant and location easily disguised, chiefly through cheap, easily installed virtual private networks (VPNs). A teenager wishing to evade the British ban could access a porn site in, say, the US, using a VPN to pretend to be from Canada. “Porn passes”, which were supposedly going to be bought at newsagents and other retail outlets on presentation of real-world identity documents, could be borrowed or stolen.
After repeated delays Nicky Morgan, the minister responsible, admitted last week that the planned introduction of Part 3 of the Digital Economy Act 2017 was not going ahead. The government will instead push ahead with “wider online harms proposals”, placing a duty of care on tech companies to improve user safety. An independent regulator, supposedly, will give teeth to this.
The climbdown marks a welcome reality check by the government, which has tended to put crowd-pleasing initiatives ahead of the facts of digital life. Priti Patel, the home secretary, backs a plan to curb encryption, because child-abusers and other malefactors use it to make their messaging impenetrable. The concerns are reasonable. The solution is not. Either encryption is strong, in which case it works for everyone, or it is weak, in which case crooks, spooks and other malefactors will mess around with our most sensitive data. If Facebook or other platforms are made to weaken their encryption criminals will simply use other tools while the law-abiding will suffer.
We do face huge difficulties in digital governance. Piracy, for example, is endemic in the porn industry, with profit only one of the motives. Complaints abound of intimate videos made and uploaded without the consent of the person featured — for example, in so-called “revenge porn” where people post material to humiliate their former partners. Mindgeek is at the centre of this storm. The company has its own studios, but it also allows users to upload videos (in return for a share of the advertising revenue when people watch them). It insists that it removes stolen or illegal material once it is notified, but it cannot possibly police the colossal amounts of content (15 terabytes, it boasts, or roughly 7,500 hours) uploaded to its site every day. A huge digital sewer runs through the internet, whether we notice it or not.
The worst is yet to come. The combination of more powerful computers and better software, especially machine learning, speeds up the pace of change. Watch out in particular for “deepfakes”: digitally doctored faces and voices. These have uses ranging from fraud to political warfare, but porn is at the forefront. Business is booming: for a modest fee any face you want can be plastered, expressively, over the lurid video of your choice.
The first, clumsy efforts surfaced in late 2017. A report by a Dutch company called Deeptrace found nearly 15,000 such videos on the internet this summer, almost twice the December figure. Other software such as the now-defunct Deepnude allows you to see (roughly) what any woman looks like starkers. We might not mind someone drawing rude cartoons of us, but what recourse do we have against explicit, lifelike videos available worldwide?
One eventual solution is digital fingerprinting. That would make it clear if material had been pirated (an approach that already works for music). It might even allow us to see if the people featured had given their consent to its production. Digitally signed material would count as real. Anything else is just a caricature.
A more immediate prospect is old-fashioned policing. We do not have to adopt a laissez-faire approach to our children’s internet habits (try turning off the home wifi at night, or telling your teenagers that you can, if you wish, see every site they have visited). Similarly, our authorities can arrest people and put them in real-world prisons. Police raids in 38 countries last week led to the arrest of 337 suspected paedophiles linked to a website, run from South Korea, hosting 250,000 horrific videos. The investigation followed evidence unearthed in the case of Matthew Falder, a prolific digital sex-offender from Cambridge, now serving a 25-year sentence.
The internet lets us escape, for good or ill, some real-world constraints. But supposedly harmless fun easily shades into something nasty for which others pay the price. So too should we.
Google CEO: YouTube is too big to fix completely
KEY POINTS
- Google CEO Sundar Pichai recently said that YouTube probably won’t ever be able to filter out 100% of the harmful content on its site.
- YouTube has come under fire for allowing harassment, hate speech, conspiracy theories and more.
- Pichai said YouTube’s massive scale likely makes it impossible to weed out all the bad content on the site.
Britain to have 'toughest internet laws in world' as Government backs duty of care
Britain will have the toughest internet laws in the world, ministers pledge today, as the Government brings in new legislation to protect children online in the wake of the Telegraph's campaign for a statutory duty of care.
Jeremy Wright, the Culture Secretary, and Sajid Javid, the Home Secretary, today unveil their White Paper spelling out plans for a duty of care enforced by a new independent regulator.
Mr Wright said the reforms were the "best way of setting clear, concrete responsibilities for tackling harmful content or activity online" as he paid tribute to this newspaper's nine-month campaign.
The regulator will have powers to impose fines on firms for breaches and to potentially prosecute them.
Tech firms are set to face fines from Ofcom for showing potentially harmful videos online, in the Government's first official crackdown on social media.
The proposal would give Ofcom the power to impose multi-million pound fines upon companies if it judges the platforms have failed to prevent youngsters seeing 'harmful' content. This includes pornography, violence and child abuse.
The broadcasting watchdog is set to take charge of the matter from 19 September 2020. It may not be required, however, if Brexit occurs in October, as the move is designed to meet the UK's obligations to the EU.
A spokesman for the Department for Digital, Culture, Media and Sport said: "The implementation of the AVMSD [Audiovisual Media Services Directive] is required as part of the United Kingdom’s obligations arising from its membership of the European Union and until the UK formally leaves the European Union all of its obligations remain in force. If the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law."
The regulator will be able to penalise firms that fail to establish robust age verification checks and parental controls that ensure young children are not exposed to video content that “impairs their physical, mental or moral development.”
The Telegraph originally reported that the proposal was "quietly" agreed before Parliament's summer break and would give Ofcom the power to fine tech firms up to 5% of their revenues and/or "suspend or restrict" them in the UK if they failed to comply with its rulings.
The appointment of Ofcom is an interim measure for regulation until a separate online harms regulator is appointed at a later time.
Ofcom is ready to accept the current role, with a spokeswoman telling the BBC: "These new rules are an important first step in regulating video-sharing online, and we'll work closely with the government to implement them. We also support plans to go further and legislate for a wider set of protections, including a duty of care for online companies towards their users."
Daniel Dyball, the Internet Association's executive director, said: "Any new regulation should be targeted at specific harms, and be technically possible to implement in practice - taking into account that resources available vary between companies."
This hope for any intervention being proportionate was seconded by TechUK, the industry group that represents the technology sector.
It is often debated within the tech industry that the mass of video content posted daily on various platforms is too difficult to be reviewed manually and individually. YouTube has previously tried to tackle this issue by implementing the app YouTube Kids, to let children view videos in a more contained environment. It still, however, is susceptible to flaws.
“We use a mix of filters, user feedback and human reviewers to keep the videos in YouTube Kids family friendly,” the YouTube Kids landing page says. “But no system is perfect and inappropriate videos can slip through.”
Andy Burrows, head of the NSPCC's child safety online policy, welcomed the news: "Crucially, this is a real chance to bring in legislative protections ahead of the forthcoming Online Harms Bill and to finally hold sites to account if they put children at risk."
The public demand for prosecution of social media bosses regarding child safety breaches has been ongoing for some time. A poll conducted by the NSPCC in April 2019 found that more than three quarters of British adults said directors of tech giants should be prosecuted if they breached the proposed new statutory duty of care on firms to protect children from online harms.
A more recent NSPCC survey published in July 2019 revealed that nine in ten children also agreed that tech firms have a legal responsibility to keep them safe online.
Source: BBC News
UK drops plans for online pornography age verification system
Climbdown follows difficulties with implementing plan to ensure users are over 18
Plans to introduce a nationwide age verification system for online pornography have been abandoned by the government after years of technical troubles and concerns from privacy campaigners.
The climbdown follows countless difficulties with implementing the policy, which would have required all pornography websites to ensure users were over 18. Methods would have included checking credit cards or allowing people to buy a “porn pass” age verification document from a newsagent.
Websites that refused to comply with the policy – one of the first of its kind in the world – faced being blocked by internet service providers or having their access to payment services restricted.
The culture secretary, Nicky Morgan, told parliament the policy would be abandoned. Instead, the government will focus on measures to protect children in the much broader online harms white paper. This is expected to introduce a new internet regulator, which will impose a duty of care on all websites and social media outlets – not just pornography sites.
She said: “This course of action will give the regulator discretion on the most effective means for companies to meet their duty of care.”
Despite abandoning the proposals, Morgan said the government remained open to using age verification tools in future, saying: “The government’s commitment to protecting children online is unwavering. Adult content is too easily accessed online and more needs to be done to protect children from harm.”
The decision will disappoint a number of British businesses that had invested substantial time and money developing verification products. They had been hoping to capitalise on the large number of Britons expected to verify their age in order to view legal pornography. One age verification provider estimated the potential market was as many as 25 million people.
Although the age verification policy was first proposed by the Conservatives during the 2015 general election, it took years to develop and make it into law. Its implementation date was then repeatedly delayed amid further difficulties with the policy.
The British Board of Film Classification was tasked with overseeing the system, which would be run and funded by private companies, despite the organisation’s lack of historical expertise in the world of technical internet regulation. Some of the age verification sites had close links to existing pornography providers.
Concerns over the system grew as the public became increasingly aware of the approaching implementation date.
Despite repeated reassurances from pornography websites and age verification sites that personal details would be kept separate from information about what users had watched, privacy campaigners continued to raise concerns about data security.
In addition, earlier this year the Guardian showed how one age verification system could be sidestepped in minutes. Proponents of the policy privately accepted it would not block a persistent teenager from accessing adult material but said it could stop younger children from stumbling across images they found deeply disturbing.
The policy had the backing of charities such as the NSPCC that were concerned about the impact of pornography on children.
The final blow to the porn block came from an unlikely source: the European Union. Just weeks before the policy was due to be finally implemented in July, the government realised it had failed to inform the EU of its plans.
This administrative error was initially announced as requiring a six-month delay – but Morgan’s announcement, made on a day when media attention was focused on the Brexit negotiations, means the age verification system has now been abandoned in its current form.
'It's a matter of huge shame women MPs are quitting because of abuse they face'
Anne Diamond says politics shouldn't be so toxic, after 18 female MPs said they were standing down. 2 November 2019
Heidi Allen, the ex-Conservative MP who defected to the Liberal Democrats, has spoken of “the nastiness and intimidation that’s become commonplace”.
Nicky Morgan and Amber Rudd, both valuable and experienced MPs, have cited abuse as the reason they're not standing again.
Diane Abbott, shadow home secretary, has often spoken about the appalling abuse she receives online, and wants social media companies to record the real identities of people using their platforms to tackle the problem.
She believes the fact that people are completely anonymous has made this problem worse.
When the police try to track down the people abusing her, they find they can’t identify them.