Reblogged from Legal Web Watch June 2014.
Reinvent Law London 2014, a conference featuring presentations on “law + technology + innovation + entrepreneurship”, was held on 20 June 2014 at the University of Westminster Law School in London.
I missed last year’s event, which was well received (covered by Michael Scutt for the Newsletter), so I was keen to experience the buzz for myself this time. The day consisted of a few quickfire “ignite” sessions with several presentations of less than 10 minutes each and a few “talk” sessions with slightly longer presentations (not a lot of difference presentationally to my mind). As you can see from the selection below, there’s no shortage of interesting ideas for the future of law and lawyers. A stimulating day.
As well as two giant screens showing presenters and their slides, a key feature of the set-up was a “Twitter wall” streaming all tweets that included the #reinventlaw hashtag. This meant that not only could you clearly follow the presentation visuals, but also you could see the immediate reactions of the other delegates (and a few other respondents off-site).
You can get a good flavour of the presentations from tweet collections put together using Storify. Robert Richards provides a Storify of tweets from the whole day, and LexBlogNetwork produced Storifies of most of the presentations. Here are those that most captured my interest:
Sands McKinley, McKinley Irvin, Lawyers in Wonderland (on regulation and the ABA, notable for the illustrations).
Dana Denis-Smith, Obelisk, What’s Love Got to Do With It? (on what the legal industry can learn from online dating).
Christina Blacklaws, Blacklaws Consulting, Legal Futures – The Rise of the Machine (on the potential of ODR for family breakdown cases).
Maurits Barendrecht, HiiL, In the Future, Will Law Be More Like Health Care? (on how actual needs for legal services are similar to those for healthcare).
Ivan Rasic, LegalTrek, Lean: Can NewLaw Learn from Tech Startups? (on the innovative power of New Law: startup firms have more flexibility).
Reblogged from Legal Web Watch May 2014.
Insofar as we still measure column inches on the web, many yards in the last month have been devoted to commentary and analysis of the Google Spain decision, or the "right to be forgotten" as it is popularly but inaccurately known.
As ever, Laurence Eastham provides some refreshing comment and useful pointers on Computers and Law.
One of the key questions is how practicable it would be for Google and other search engines to remove specific links from their indexes. Neil Cameron (on his blog) pictures "an army of de-Googlers, frantically and manually removing links for every claimant with a past they would rather forget" (simply not practicable). Laurence is filled with dread at either "Google making a judgment based on an algorithm" (leading to inappropriate deletions) or "some sort of tribunal" (unaffordable).
But thinking, rather, of a "right to be disassociated", it is easier to see how this might be effectively implemented. Google should not be put in the position of making legal judgements (certainly not without an e), but I can think of no organisation better able to come up with an elegant solution to interpreting accurately a direction from a judicial authority to disassociate person A from event B in context C.
So I say, if we must have this right, then leave the onus on the ICO to provide the right quality input to Google. GIGO and all that.
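To make that concrete, here is a toy sketch (mine alone, emphatically not anyone’s actual implementation, and the directive format is invented) of how a direction to disassociate person A from event B in context C might be applied as a post-filter on search results:

```python
# Toy sketch only: applying a judicial "disassociation" directive as a
# post-filter on search results. The (person, event, context) directive
# format is an assumption, not anything the ICO or Google actually uses.

from dataclasses import dataclass

@dataclass(frozen=True)
class Directive:
    person: str   # person A to be disassociated
    event: str    # event B
    context: str  # context C in which the association must not surface

def should_suppress(query: str, result_text: str,
                    directives: list[Directive]) -> bool:
    """Suppress a result only when the query names the person AND the
    result ties them to the proscribed event in the given context."""
    q = query.lower()
    t = result_text.lower()
    for d in directives:
        if (d.person.lower() in q
                and d.event.lower() in t
                and d.context.lower() in t):
            return True
    return False

directives = [Directive("John Doe", "1998 repossession", "property auction")]
print(should_suppress(
    "john doe lawyer",
    "John Doe's 1998 repossession followed a property auction notice.",
    directives,
))  # True: the query names the person; the result ties them to the event
```

The point of the sketch is where the hard judgement sits: deciding who, what event and what context belongs upstream with a judicial authority; the filter itself is mechanical. GIGO, as I say.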
Laurence is seeking short sharp impact assessments of the case. Let him have yours.
Google says it is “working to finalise our implementation of removal requests under European data protection law as soon as possible”. In the meantime, its initial effort is this removal request form.
Image by Jason Eppink on Flickr.
Google's Panda 4.0 – small business friendly
Google’s Panda algorithm has been around since 2011. It's designed to prevent sites with poor quality content from working their way into Google’s top search results.
Update 4.0 is a major update which rolled out on 20 May and is designed to help small businesses do better. Your guess is as good as mine as to what small means.
Reblogged from Legal Web Watch April 2014.
April 26 was World IP Day. I didn’t notice too many people getting excited by this. But one who did was Graham Smith.
Graham is a partner at Bird & Bird and the leading expert in internet law, central to which is IP law and, in particular, copyright law.
Graham’s bible, Internet Law and Regulation, is sadly out of date. We carried a review of the 4th edition in the January 2008 Newsletter. Happily, the 5th edition is due out in December.
Links and the law
Crucial to the web is the link. After all, there would be no web without them. So the legality of linking has exercised the courts from day one.
Indeed, the Shetland Times case of 1997 was one of the first to consider the question. I was on the case in one of my early “Pages on the Web” for the Solicitors Journal.
Most recently, we’ve had the Svensson judgment from the CJEU on the legality of linking to infringing material. Much has been written about this. Here are a few to get you started:
- Graham Smith on his Cybereagle blog (also published on INFORM)
- Patricia Mariscal on the Kluwer Copyright Blog
- Alberto Bellan and Eleonora Rosati on The IPKat
There is comment on the case in most IP law blogs: all are catalogued on infolaw Lawfinder.
In the upcoming issue of the Internet Newsletter for Lawyers (May 2014) we have a veritable feast of articles on copyright.
Much has been made about the fact that the web is 25 years old this month. Certainly, it was 25 years ago that Tim Berners-Lee, working at CERN, “invented” the web. But the much more significant date was April 1993 when he (and CERN) gifted the web to us. It is unthinkable that the web would have developed as rapidly, with all its attendant benefits, had it been a proprietary system.
Tim BL has always stressed the importance of keeping the web open. Most recently in a birthday speech:
Key decisions on the governance and future of the Internet are looming, and it’s vital for all of us to speak up for the web’s future. How can we ensure that the other 60 percent around the world who are not connected get online fast? How can we make sure that the web supports all languages and cultures, not just the dominant ones? How do we build consensus around open standards to link the coming Internet of Things? Will we allow others to package and restrict our online experience, or will we protect the magic of the open web and the power it gives us to say, discover, and create anything? How can we build systems of checks and balances to hold the groups that can spy on the net accountable to the public? These are some of my questions—what are yours?
And maybe also read this from some chap over at betanews (neither the chap nor the organisation previously known to me), who says it quite well: A digital bill of rights is essential to the future of democracy. You are free to agree, or not.
Isn’t the web wonderful!
Long ago, circa 1985, colleagues, friends and family used to think I was interested in technology because I used a PC in my work and they did not. I was not interested in technology. I was interested in how technology could be applied to my interest in (law) publishing.
People still think I’m interested in technology. I am a little more so than before. But primarily I’m interested in applying technology to problems that I want to solve. Note that word “applying”. Applying technology to solve problems is “application” of technology, hence software “applications”, web “applications”, mobile “apps”. I don’t think of applications as technology; they are what you want to do with technology.
Just as my fridge is not technology, it’s an application of technology which keeps things cool, so too BusChecker is not technology, it’s an application of technology that tells me when the next bus is arriving. You get the drift.
Armed with that brilliant insight, I went along last week to an SCL knowledge management group meeting “Apps within law firms”, held at City firm Berwin Leighton Paisner’s offices.
I don’t know what I was expecting. It turns out we were looking at apps produced by the top 25 law firms, which narrows the field a bit (to approx 0.3%).* Apparently, of the top 25 firms, only 8 have published apps at all: 14 for Apple between them, and just one for Android. So we were not considering the bleeding edge.
It was suggested that legal apps fall into three categories: directory-type apps, information research apps and transactional apps. Fair enough. We looked at some examples of each.
Then, in a 10 minute session mid-way through we were asked to break out into three groups, each with one of the above focuses, and to consider the most important features of apps and what might make a killer legal app. In 10 minutes!
Needless to say, we didn’t get very far, though Team 1 I think made a particularly good fist of it.
There was a lot of talk of “content”, which is understandable coming from legal knowledge managers, and even a reference to legal apps being “content heavy”, but that I think is the antithesis of what apps are about; apps are not about content any more than they are about technology; and everything about what makes a successful app screams “light” rather than “heavy”. Google Maps, Twitter etc – these apps we use the most are accessing huge amounts of data, but they’re not data heavy, they work by making light of it.
We then had a look at BLP’s Tax Residence Test app which “helps you work out whether you are tax resident in a given tax year”, which question doesn’t trouble enough of us for it ever to be considered for the Killer App award.
I was hoping we’d have some discussion of the likes of Shake Law and what those apps represent. But no.
*There are many more apps by law firms listed by Alex Heshmaty on Delia Venables page Legal apps for individuals. And a good article on apps by law firms on SCL by Kim Tasso from June 2012 and a follow-up on Kim’s blog in May 2013.
Kevin O’Keefe is a tireless promoter of the benefits of blogging for lawyers. I may disagree with him on many points but I’m with him all the way with the underlying proposition that blogs (for lawyers) are the best thing since, well, sliced bread.
His recent post Bloggers to be driving force in legal web journalism is inspired by an article by Ezra Klein in the New York Times on web journalism asserting itself, from which:
More and more, it’s becoming apparent that digital publishing is its own thing, not an additional platform for established news companies. They can buy their way into it, but their historical advantages are often offset by legacy costs and bureaucracy.
This is another way of saying that the web has democratised (news) publishing which should come as news to no-one.
Kevin extrapolates from this:
And blogging lawyers are at the heart of legal Web journalism. Lawyers, law professors, law students, and judiciary, consuming and producing content at the same time.
I am not referring to pseudo-bloggers producing content for web traffic. I am referring to lawyers who are following and engaging other bloggers and legal publishers in a real and authentic way. Lawyers offering insight and commentary on niches never covered before.
Lawyers will not be drawn by money and opportunity to join digital media networks. Lawyers will publish and engage, be it blogging versus articles and traditional networking, for the same reason they have for decades. To grow their influence and network of relationships.
Blogging lawyers will come from small and large firms and from rural and metropolitan areas. Large marketing budgets will not be required. Gatekeeper publishers and editors will not be a hurdle. A keyboard, passion, knowledge, and a willingness to offer insight and commentary in an engaging fashion will get you a [seat at] the table.
Technology will be important going forward. WordPress, as with digital media networks, gets us only part of the way.
Technology platforms will need to enable curation (manually and via machine learning), encourage lawyers to blog, enable easy mobile consumption and sharing, drive advertising, and harness data generated by user activity.
Legal web journalism may trail the digital media networks which have drawn reporters from the Wall Street Journal, The New York Times, and the Washington Post.
But it seems inevitable that legal Web Journalism will assert itself – if it has not already – and that legal bloggers will be the driving force.
I’m wondering why the future tense and why confine the comments to journalism? Good law bloggers established themselves as a publishing force some years ago. In the UK there are several I could name who have established themselves as leading experts in their fields and cemented their firms’/chambers’ reputations; others who have set up their own publishing businesses off the back of their blogs; some who contribute also to national news networks. All have this in common: they are digital-first publishers, enabled by the web, blogging in particular.
What has changed in the last few years is that everyone is now at it. Not only do we have more good blogs, we also have more indifferent blogs, more bad blogs, more pseudo-blogs, more downright evil blogs. Blogs for marketing, blogs for SEO, blogs “made for Adsense” and the like: these do not count as journalism or even publishing nor do they add a jot to the sum total of human happiness. But, hey, blogging is for everyone and democracy is messy so we can’t complain except about the worst excesses.
Where are we headed? I looked back at a post I wrote 7 years ago (yes indeedy) Community, democracy and the future of law publishing and though we have moved on and it sounds a bit dated, the sentiments are pretty much still intact.
Richard Heaton, First Parliamentary Counsel at the Cabinet Office, gave an important speech at IALS in October which has only recently been published on the GOV.UK interwebs. In Making the law easier for users: the role of statutes he reviews past attempts to codify the common law and explains how legislation.gov.uk hopes to mesh with the common law.
It is only recently with the advent of the internet and the development of comprehensive primary law sources such as legislation.gov.uk and BAILII that direct access to the law for all citizens has even been a remote possibility.
Experience from legislation.gov.uk is that “people are using legislation – reading, searching, accessing, downloading – in a way that has never happened before”, but without legal training and a knowledge of the role of case law:
the poor reader is perhaps rather like someone who sets sail in a boat. He has been handed some nautical charts which seem to map out most of the route (though they’re rather heavily amended). But he’s also been told that some rocks (he’s not sure which) aren’t marked on the charts. However, he’s given some data about shipwrecks, and perhaps he’ll be able to work out the rest from that. Immediately, the charts are unreliable; and as for the data, well where to start? …
It’s an aspiration of the legislation.gov.uk team to be able to mash the legislation database with the law report database, so that the reader will at least be alerted to those words and phrases that have been discussed in case law.
This aspiration may seem modest, but it’s a good start. It at least gets the purveyors of case law talking to legislation.gov.uk. The former predominantly serve lawyers, the latter serves everyone, not least because it is open data. So long as case law is not open data, it will not serve everyone.
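By way of illustration only (the phrase index and case names below are made up, and this is my sketch, not the legislation.gov.uk team’s design), the sort of mash-up they aspire to might look like this: flag words and phrases in a statute that a case-law index records as having been judicially considered.

```python
# Hypothetical sketch of the legislation.gov.uk aspiration described
# above. The index and the statute text are invented for illustration.

import re

# Assumed index: phrase -> cases that discussed it (illustrative only).
CASE_LAW_INDEX = {
    "reasonable excuse": ["R v Example [2001]"],
    "in the course of employment": ["Lister v Hesley Hall [2001]"],
}

def annotate(statute_text: str, index: dict[str, list[str]]) -> str:
    """Append a [considered in: ...] marker after each indexed phrase
    found in the statute text (case-insensitive match)."""
    out = statute_text
    for phrase, cases in index.items():
        marker = f"{phrase} [considered in: {'; '.join(cases)}]"
        out = re.sub(re.escape(phrase), marker, out, flags=re.IGNORECASE)
    return out

print(annotate(
    "It is a defence to show a reasonable excuse for the failure.",
    CASE_LAW_INDEX,
))
```

Even something this crude would at least alert the lay reader that “reasonable excuse” is not a phrase whose meaning can be read off the page.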
Even the informed layman who recognises the role of case law won’t make any headway ploughing through BAILII, because the law is not expressed directly in those many thousands of cases; it emerges only from an expert distillation of the ratio and obiter in those cases and a computation of the citations therein and their precedence.
I’ve been trying to get a handle on Google’s new Hummingbird algorithm update, but with so many self-appointed experts spewing out so much tosh on the web, it’s taken a while to gather my thoughts.
One of the most helpful pieces I have come across is by Jeremy Hull on Wired Insights, from which (my emphasis):
“Hummingbird”… represents the biggest change to Google search since 2001. It’s not just a tweak to the search functionality – Hummingbird is a completely new search algorithm that affects 90 percent of all searches. The most interesting part is that Hummingbird actually launched a month before the announcement… and no one noticed.
Another is from Malcolm Slade, SEO Project Manager at Epiphany Search, one of many SEO experts giving their view on Econsultancy. He describes Hummingbird succinctly:
Hummingbird is basically a change in how Google interprets the intent of a user’s query to ensure the returned results are appropriate.
Assuming that pre Hummingbird (pre August 2013) Google was using pattern matching to understand intent, post Hummingbird it is using much higher level NLP (Natural Language Processing) concepts to ensure that the full query is answered by the returned sites.
Why is Goog doing this? Because these days, particularly now that so many more people are using mobile to access the web, we are no longer typing two- or three-word keyword searches into the Google search box but literally asking Google questions.
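To caricature the shift (and it is a caricature of mine, nothing like Google’s actual systems), compare crude word-overlap matching with even the crudest reading of question intent:

```python
# Toy contrast, invented for illustration: word-overlap "pattern
# matching" versus a crude classification of what a query is asking.

def keyword_match(query: str, doc: str) -> int:
    """Pre-Hummingbird caricature: score a page by word overlap."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def question_intent(query: str) -> str:
    """Post-Hummingbird caricature: classify the question being asked."""
    q = query.lower()
    if q.startswith(("is there", "does ", "do ")):
        return "yes/no factual question"
    if q.startswith(("how ", "why ")):
        return "explanatory question"
    return "keyword lookup"

print(keyword_match("is there a dark side of the moon",
                    "Dark Side of the Moon is a 1973 Pink Floyd album"))
# 7: high word overlap despite being entirely the wrong subject
print(question_intent("is there a dark side of the moon"))
```

The word-overlap score rates the Pink Floyd page highly for what is an astronomical question; even the crudest intent classifier can tell that a different kind of answer is wanted.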
Nathan Roberson on Business2Community attempts to make sense of this new landscape with some nice infographics. He clearly knows enough about this to feel he can confidently advise “content managers” about what to focus on henceforth.
But to be honest I don’t think the SEO industry really has an answer for this and is casting about to justify its existence.
Hummingbird is not about ranking websites; it’s about interpreting user intent. There’s little a site can do to rank better other than try better to answer the questions it is being asked, which is exactly what Goog has been saying all along. Yet I’ve seen so-called SEO experts advising that this means sites should provide more content in Q&A format. That’s just incredibly dumb.
Here’s what I think. If I ask the question “Is there a dark side of the moon?” I’m asking an astronomical question and I don’t want to be served up answers relating to the seminal Pink Floyd album. Pre-Hummingbird Google would probably have ranked some of the latter sites higher, so they will now lose out on some traffic, but since that traffic was the astronomically inclined, not looking for Pink Floyd, who has lost out?
And no, there is no dark side of the moon.
I’m conscious that I’ve not put a virtual pen to WordPress for more than a month now. Last Saturday a friend asked me where I found the inspiration for my daily blog! Clearly he does not take as close an interest in my virtual presence as I thought.
Walter White broke bad early on: series 1, episode 1 as I recall. Yet here we have Google, founded 15 years ago, still claiming not to be evil, but the truth is that Google is a near monopoly which is as near to evil as you can get in polite company in my neck of the woods. Certainly since it went public it has ceased to be the cuddly startup we all used to love. Don’t let the casual dress fool you; the suits are in charge.
Want to rank well in Google? Well …
The key to getting links to your site is to create unique, compelling content that other people want to link to. [But] Google’s very good at detecting unnatural links that violate our Webmaster Guidelines (for example, those that come from link-exchange schemes, paid links schemes, or are auto-generated), so participating in such schemes could end up doing more harm than good.
But we all know that creating unique, compelling content is not enough. Except in small niches it only gets you so far and we have a huge SEO industry that recognises this and buys links for its clients in one way or another to vault them over the unique, compelling content.
So everyone gets involved in this game to game Google. Sites that want to be ranked well approach sites that do already have unique, compelling content, seeking to buy links which will enhance their ranking. What’s a site with good Google juice to do? Sell ads, that’s what.
Yet Google says “You can’t sell that sort of advertising; we don’t like it. You must use this HTML attribute we invented (rel=nofollow) to say ‘this link is worthless’.” Seriously, can’t they figure out a way to value the link according to its context? I thought that was their raison d’être!
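For the uninitiated, here is a short sketch (the page markup is invented) of what rel=nofollow looks like in practice: it marks a link as carrying no ranking endorsement, and a scanner can tell the endorsed links from the paid ones.

```python
# Sketch: distinguishing "followed" links (ranking endorsements) from
# rel="nofollow" links on a page. The page markup below is invented.

from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followed, self.nofollowed = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        href = a.get("href")
        if href is None:
            return
        rels = (a.get("rel") or "").lower().split()
        (self.nofollowed if "nofollow" in rels else self.followed).append(href)

page = '''<p><a href="https://example.com/editorial">editorial link</a>
<a href="https://advertiser.example" rel="nofollow sponsored">paid link</a></p>'''

audit = LinkAudit()
audit.feed(page)
print(audit.followed)     # ['https://example.com/editorial']
print(audit.nofollowed)   # ['https://advertiser.example']
```

So the burden falls on the publisher to declare, link by link, which of its endorsements count, which is exactly the complaint above.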
And there’s this guy called Matt Cutts who heads the Webspam team at Google and pontificates on this stuff. He’s not anything like as handsome or charismatic as Jesus but his word is treated as Gospel. Seriously. Words fail me.