Big Tech and AI in 2017

By Nick Holmes on February 9, 2018
Comments Off on Big Tech and AI in 2017
Filed under Artificial intelligence, Big Internet, Twitter

Lollipop is coming by Giuseppe Milo

I recently posted a review of What we learned in 2017 on Internet Newsletter for Lawyers. Here are my bits from it and a few extracts from contributors.

It has been apparent for some time that the biggest tech companies (Google, Facebook, Amazon, Apple and Twitter) have grown too large for our collective good. 2017 was the year we finally started trying to figure out how to do something about that.

Trolling and fake news

Paul Bernal writes that 2017 was a year when trolling and fake news started to get serious attention:

“My key takeaway from 2017 [is that] both fake news and trolling, rather than being anomalies or abuses of social media, are pretty much inevitable results of the business models and practices of our social media companies. They’re using the systems as they’re intended to be used: creating and sharing stories and information, targeted at people who show interest in the subject (fake news), or interacting and discussing subjects of interest, in an open and emotional way (trolling).

If we want to seriously deal with either fake news or trolling, we would need to fundamentally reconstruct our social media. I don’t think anyone has the appetite for that.”

Free speech?

Zeynep Tufekci writes in Wired that the flow of the world’s attention is dominated by just a few digital platforms (Facebook, Google and, to a lesser extent, Twitter) and argues that our methods of media regulation are not sufficient.

“These companies – which love to hold themselves up as monuments of free expression – have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs.”

Zeynep argues that in reality social media posts are targeted and delivered privately, screen by screen; mass discourse has become “a set of private conversations happening behind … everyone’s backs [which] invalidates much of what we think about free speech – conceptually, legally, and ethically.”

Read more in It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci in Wired.

Too big to regulate

Roger McNamee writes in Washington Monthly that, thanks to the US government’s laissez-faire approach to regulation, the dominant internet platforms have been able to pursue business strategies that would not have been allowed in prior decades.

“No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.”

Most of us would agree with McNamee that “Facebook and Google are now so large that traditional tools of regulation may no longer be effective.”

Read more in How to Fix Facebook – Before It Fixes Us by Roger McNamee in Washington Monthly.

Twitter v LinkedIn

Brian Inkster writes that in 2017 he grew to like LinkedIn a lot more:

“I felt it had evolved and come into its own. It is being used far more effectively as a networking/interaction tool than used to be the case. I notice that posts I put out on LinkedIn invariably get more traction and interaction than the same post on Twitter. The spam that used to come via Groups on LinkedIn is a thing of the past although LinkedIn have recently announced a focus on ‘re-integrating Groups back into the core LinkedIn experience’. Connections and referrals are being made on LinkedIn in a way that used to happen on Twitter but no longer seems to happen on there in the same way.”

Twitter matures

My own view is that Twitter’s purpose, other than to make a lot of money for its founders and investors, is very different from LinkedIn’s. It is very much geared towards reporting current developments and reacting to and analysing their importance. Of course, which bubble you inhabit determines how deep or trivial the issues under discussion are, and how useful or annoying the replies. For professionals, and lawyers in particular, it offers rich seams of discussion and expert analysis of the sort we used to associate only with meatier articles and blog posts. Two recent developments on the platform have helped.

The maximum tweet length has been increased from 140 to 280 characters. The original restriction encouraged brevity and creativity, but it was so tight that it also encouraged less beneficial practices, such as cramped abbreviations and thoughts chopped awkwardly across multiple tweets. The longer limit, whilst initially bemoaned by the old school, appears to have been well received.

Twitter “threads” have been officially adopted. Like a number of Twitter features, threads were an innovation by users rather than by Twitter itself. Linking together a sequence of tweets turns out to be a very effective way of developing an argument, telling a story and so on. Threads have very quickly established themselves as a literary form well deployed by lawyers.

Machine learning

By the end of the year we’d learned that much of what we term AI, and certainly much of the AI that is actually being implemented in legal practice, is principally based on machine learning. Give a machine a lot of data and it will learn from it and then apply that knowledge going forward in a virtuous cycle. For example, in the legal sphere we have machines taking over from overworked junior lawyers in conducting document review. So machines are doing the drudge work in an important but fairly narrow field. Is this really intelligence? They are also being used to predict the likely outcome of cases based on precedent. And in the US, AI is risk assessing offenders and even sentencing criminals. What could possibly go wrong?
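The “give a machine labelled data and it will learn from it” idea can be sketched in a few lines of Python. This is a toy keyword-frequency classifier with invented documents and labels, purely to illustrate the principle; real document-review systems are far more sophisticated, and nothing here reflects any vendor’s actual product.

```python
from collections import Counter

# Toy illustration of machine-learning document review: count which words
# appear in documents a lawyer has already labelled relevant or irrelevant,
# then score unseen documents by those counts. All data here is invented.
training = [
    ("merger agreement between the parties", "relevant"),
    ("indemnity clause in the share purchase", "relevant"),
    ("office party catering invoice", "irrelevant"),
    ("holiday rota for reception staff", "irrelevant"),
]

word_counts = {"relevant": Counter(), "irrelevant": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by how often the document's words were seen
    # under that label during training; the highest score wins.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("draft merger clause"))  # "relevant"
```

The “virtuous cycle” comes from feeding each newly reviewed document back into the training set, so the counts (and the predictions) keep improving.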

Algorithms

We learned a fair bit about algorithms in the last year. “Algorithm” is really just a geeky word for “set of rules”. We were previously probably most familiar with the term in relation to Google; its PageRank algorithm was much talked about. In fact Google deploys thousands of algorithms in determining how to rank pages in its results.

Facebook and Twitter use algorithms to decide what to put in your news feed and what ads to show you. Uber uses algorithms to decide which driver to match to your ride and how much to charge when demand exceeds supply. These are all decisions made by powerful companies affecting many aspects of our lives and little is disclosed about how they are made.
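To make “a set of rules” concrete, here is a hypothetical surge-pricing rule in Python. It is not Uber’s actual algorithm (which is undisclosed), just an invented example of the kind of rule that could sit behind such a decision:

```python
def surge_multiplier(riders_waiting, drivers_available):
    """Hypothetical surge-pricing rule (not Uber's real algorithm):
    raise the fare multiplier as demand outstrips supply."""
    if drivers_available == 0:
        return 3.0                 # cap the multiplier when no drivers are free
    ratio = riders_waiting / drivers_available
    if ratio <= 1:
        return 1.0                 # supply meets demand: no surge
    # Climb by 0.5x for each unit of excess demand, capped at 3x.
    return min(3.0, 1.0 + 0.5 * (ratio - 1))

print(surge_multiplier(10, 10))    # 1.0
print(surge_multiplier(30, 10))    # 2.0
```

Even a rule this simple affects who pays what; the rules the platforms actually run are vastly more complex, and almost entirely opaque.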

Even where we know the rules, we may not appreciate their implications. Leave a decision to an AI machine trained with biased data (which is more than likely) and it will exhibit bias.
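The point about biased data can be shown with a deliberately crude sketch. The historical decisions below are invented, and the “learner” is the simplest imaginable (predict the majority outcome each group received before), but the mechanism is the same one that worries people about real systems: the model faithfully reproduces whatever bias the data contains.

```python
from collections import defaultdict

# Invented historical decisions in which group "B" was denied more often
# for reasons unrelated to merit.
history = [
    ("A", "approved"), ("A", "approved"), ("A", "denied"),
    ("B", "denied"),   ("B", "denied"),   ("B", "approved"),
]

outcomes = defaultdict(lambda: defaultdict(int))
for group, decision in history:
    outcomes[group][decision] += 1

def predict(group):
    # Predict whichever outcome this group most often received before --
    # the historical bias comes straight back out as a "prediction".
    return max(outcomes[group], key=outcomes[group].get)

print(predict("A"))  # "approved"
print(predict("B"))  # "denied"
```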

So we started worrying about algorithms. From Algorithms and the law on Legal Futures:

“Algorithms are rapidly emerging as artificial persons: a legal entity that is not a human being but for certain purposes is legally considered to be a natural person. Intelligent algorithms will increasingly require formal training, testing, verification, certification, regulation, insurance, and status in law.”

Robots taking jobs

There has been an awful lot of discussion about robots taking jobs. Which jobs, how many, by when? Nobody seems to be able to agree.

In Big Law, AI is doing the drudge work that formerly occupied junior lawyers. The firms believe those hours can be replaced with more valuable work that generates more profit. That raises the question of what will happen to lawyers and paralegals further down the food chain.

Professor Richard Susskind addresses this question in the new edition of Tomorrow’s Lawyers, saying “it is hard to avoid the conclusion that there will be much less need for conventional lawyers.” (For a review of the AI chapter, with extracts, by Ian Lopez, see Corporate Counsel.)

Image: Lollipop is coming (cropped) cc by Giuseppe Milo on Flickr.