The Freedom to Innovate and the Freedom to Investigate

Earlier this week, I was at SXSW for CTA’s annual Innovation Policy Day.

My session, on Labor and the Gig/Sharing Economy, was a lively discussion including Sarah Leberstein from the National Employment Law Project, Michael Hayes from CTA’s policy group (which represents member companies including Uber and Handy), and Arun Sundararajan from NYU, who recently wrote a book on the Sharing Economy.

But, that’s not the point of this post!  The point of this post is to discuss an idea that came up in a subsequent session, on Security & Privacy and the Internet of Things.  The idea that struck me the most from that session was the tension — or, depending on how you look at it, codependence — between the “freedom to innovate” and the “freedom to investigate”.

Defending the Freedom to Innovate was the Mercatus Center’s Adam Thierer.  Adam is one of the most thoughtful folks looking at innovation from a libertarian perspective, and is the author of a book on the subject of permissionless innovation.  The gist of permissionless innovation is that we — as a society and as a market — need the ability to experiment.  To try new things freely, make mistakes, take risks, and — most importantly — learn from the entire process.  Therefore, as a general rule, policy should bias towards allowing experimentation, rather than prescribing fixed rules.  This is the foundation of what I call Regulation 2.0[1].

Repping the Freedom to Investigate was Keith Winstein from the Stanford CS department (who jokingly entered his Twitter handle as @Stanford, which was reprinted in the conference materials and picked up in the IPD tweetstream).  Keith has been exploring the idea of the “Freedom to Investigate”, or, as he put it in this recent piece in Politico, “the right to eavesdrop on your things”.  In other words, if we are to trust the various devices and services we use, we must have a right to inspect them — to “audit” what they are saying about us.  In this case, specifically, a right to intercept and decrypt the messages sent between mobile / IoT devices and the web services behind them.  Without this transparency, we as consumers and a society have no way of holding service providers accountable, or of having a truly open market.
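To make that a bit more concrete: this is not Keith’s tooling, just a minimal sketch of what “eavesdropping on your things” can look like in practice, using mitmproxy’s Python addon API to log what a device sends home.  It assumes the device or app can be pointed at the proxy and trusts its CA certificate (many devices pin certificates, which is exactly why this freedom is contested).

```python
# log_device_traffic.py -- run with: mitmdump -s log_device_traffic.py
# Minimal illustration (not Keith Winstein's tooling): log every request a
# device makes so you can "audit" what it says about you.  Assumes the device
# is configured to use this proxy and trusts the mitmproxy CA certificate.
from mitmproxy import http


class DeviceAuditLog:
    def request(self, flow: http.HTTPFlow) -> None:
        req = flow.request
        # Where is the device talking, and what is it sending?
        print(f"{req.method} {req.pretty_url}")
        if req.content:
            body = req.get_text(strict=False) or ""
            print(f"  payload ({len(req.content)} bytes): {body[:200]}")


addons = [DeviceAuditLog()]
```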

The question I asked was: are these two ideas in tension, or are they complementary?

Adam gave a good answer, which was essentially: They are complementary — we want to innovate, and we also need this kind of transparency to make the market work.  But… there are limits to the forms of transparency we can force on private companies — lots of information we may want to audit is sensitive for various reasons, including competitive issues, trade secrets, etc.  And Keith seemed to agree with that general sentiment.

On the internet (within platforms like eBay, Airbnb and Uber), this kind of trade is the bedrock of what makes the platforms work (and the basis of what I wrote about in Regulation the Internet Way).  Users are given the freedom to innovate (to sell, write, post, etc), and platforms hold them accountable by retaining the freedom to investigate.  Everyone gladly makes this trade, understanding, at the heart of things, that without the freedom to investigate, we cannot achieve the level of trust necessary to grant the freedom to innovate!

So that leaves the question: how can we achieve the benefits of both of the things we need, the freedom to experiment and the freedom to investigate (and, as a result, the ability to hold actors accountable and make market decisions)?  Realistically speaking, we can’t have the freedom to innovate without some form of the freedom to investigate.  The tricky bit comes when we try to implement that in practice.  How do we design such systems?  What is the lightest-weight, least heavy-handed approach?  Where can this be experimented with using technology and the market, rather than through a legal or policy lever?  These are the questions.

[1] Close readers / critics will observe an apparent tension between a “regulation 2.0” approach and policies such as Net Neutrality, which I also favor.  Happy to address this in more depth, but long story short: Net Neutrality, like many other questions of regulations and rights, is a question of whose freedom we are talking about — in this case, the freedom of telcos to operate their networks as they please, or the freedom of app developers, content providers and users to deliver and choose from the widest variety of services and programming.  The net neutrality debate is about which of those freedoms to prioritize, and I side with app developers, content providers and users, and the broad & awesome innovation that such a choice results in.

Internet meets world: rules go boom


Since 2006, I’ve been writing here about cities, the internet, and the ongoing collision between the two.

Along the way, I’ve also loved using Tumblr to clip quotes off the web, building on the idea of “the slow hunch” (the title of this blog) and the “open commonplace book” as a tool for tracking the slow hunch over time.

Today, I’m launching the next iteration of both: Internet Meets World.

On IMW, I’ll be tracking the big questions, like:

I’ll still continue to blog here, but will syndicate certain posts to IMW — those specifically digging into the macro / legal / policy / societal issues created by the collision of the internet and the world.  In addition to collecting my own posts, I’ll also be collecting other articles from across the web, and will move my quote clipping from Tumblr into Medium.

I’m also looking for one or more co-editors for IMW.  If you’re interested, shoot me an email at nick [at] usv [dot] com, including a handful of links / quotes that you think really capture the essence of this conflict / opportunity.

Onward!

Big innovation and small innovation

Yesterday, at one of our bi-monthly team deep dives at USV, we got into a conversation about, essentially, “Big Innovation” vs. “Small Innovation”.  Those who have followed USV for some time know that at the core of the investment thesis is a belief in “decentralized”, “bottom-up” innovation — the kind that really became possible with the advent of the web.

Given that, one of the market condition / policy issues that we care about is consolidation and excessive market power — the potential for small players and new entrants to get blocked from a market by entrenched incumbents.  For example, this is why we care about the Open Internet and have supported the FCC’s rules to prevent ISPs and telcos from blocking or throttling web-based applications and content.

This issue, of course, is not limited to ISPs and telcos — there is also a similar concern at the application / platform level: when does Google / Apple / Amazon / Facebook / Uber become too big?  What does too big mean?  What are the risks to “bottom up” innovation when that happens?  What should be done about it?

Which led us to the flip side of the bottom-up innovation argument: the value of “big” innovation — innovation that’s possible because of size and scale.  For example, Uber is able to offer an incredible customer experience because they have invested in building a big, liquid network (currently at a big, big loss, but that’s part of the strategy at this point).  In this case, “big” enables a kind of innovation that wouldn’t be possible otherwise.  The whole world now knows that it’s possible to summon a ride immediately at the push of a button.  That’s a real innovation, with real practical implications for lots of people.

Or take Amazon: their bigness means that I can get almost anything delivered to my house, for free, in 2 days or less.  Embodied in that are huge consumer innovations (I shop very differently than I did previously, and it’s way more convenient), and huge organizational innovations, in terms of supply chain management, logistics, etc.

Or AWS: perhaps Amazon’s greatest achievement has been turning their e-commerce platform into a developer platform.  This was not only a huge innovation for them, process-wise and business-model-wise, but it has also (as Brittany pointed out) been a total boon to “bottom-up” innovation, by drastically lowering the cost and complexity of building and deploying a web application.  Practically every new startup begins by using AWS for their infrastructure, and to a large degree, we can thank Amazon for this gift to the startup sector.
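To put the “lower cost and complexity” point in concrete terms, here’s a tiny sketch using boto3, AWS’s Python SDK.  The bucket name is made up for illustration, and it assumes AWS credentials and permissions are already configured.

```python
# A few lines of Python now stand in for what used to require buying and
# racking servers.  Bucket name is a made-up example; assumes AWS credentials
# are configured locally (e.g. via `aws configure`).
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-startup-static-site")   # provision storage on demand
s3.upload_file("index.html", "example-startup-static-site", "index.html")  # "deploy" a page
```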

And on and on — it’s relatively easy to find examples of “big” innovations that add up to huge consumer benefit.

Where it gets a little trickier, though, is when you flip the perspective and look at these same big platforms, not from the consumer perspective, but from the supplier perspective.

If you’re a marketplace seller on Amazon, or an Uber driver, or an app developer trying to get distribution through the App Store or Facebook, you are keenly aware of the perils of relying on a big platform for distribution (e.g., not “being your own bitch”).  The bigger the platform, the more they can offer you in terms of access to customers, but also the more control they can exert over pricing and terms.

This is where the tension between big innovation and small innovation lives.

What to do about it?  It depends.  When things become grotesquely anti-competitive and anti-consumer, the government steps in with regulations or antitrust enforcement; this is not ideal, but sometimes it’s necessary.  Alternatively, sometimes there’s a market opportunity to serve the supply side on better terms, as OpenBazaar is trying to do for marketplaces, and as are all of the companies looking to serve workers in the Gig Economy.

What really stands out to me, as I write this, is the tendency for “big” to be great for the demand side, but bad for the supply side.  So, where “bottom-up” innovation depends on a thriving and diverse supply side of the market (whether that’s gig workers, content creators, app developers, or users in a social network contributing their content), we need to be on the lookout for ways to make sure that “big” doesn’t get in the way and squelch that, while at the same time recognizing that “big” can bring lots of direct benefits to consumers.

Beam should have a hardware API

We’ve got a few Beam telepresence robots at USV, and use them all the time.  Fred has written about them here.  We had a team meeting today with two beams going at once — Fred and I were the first to arrive, and we were chatting beam-to-beam — he in LA/Utah, me in Boston, both of us in NYC by robot.

It works amazingly well.  It has now become somewhat normal for robots to be roving around the office, having conversations with people, USV team folks and visitors alike.

One idea that keeps coming up is an extensible peripherals API — the Beam robots already come with a USB port (used for initial setup), and it should be possible to use that to extend the robot with hardware.  We joke about jousting (and have done some), but I could seriously imagine bolting on devices such as additional displays / LCDs, sensors of various kinds, devices that can perform human-like gestures (the way the Kubi can nod, shake and bow), etc.
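To be clear, no such API exists today as far as I know; purely as a thought experiment, here’s a rough sketch of what a peripherals interface could look like, with every class and method name invented for illustration.

```python
# Purely hypothetical sketch: none of these classes or methods exist in Beam's
# software today.  Every name is invented to illustrate the idea of registering
# USB-attached add-ons (gesture arms, sensors, extra displays) as peripherals.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Peripheral:
    name: str
    usb_vendor_id: int
    usb_product_id: int
    on_command: Callable[[str], None]  # called when the remote pilot triggers an action


class BeamPeripheralHub:
    """Imaginary registry the Beam could expose for USB add-ons."""

    def __init__(self) -> None:
        self._peripherals: Dict[str, Peripheral] = {}

    def register(self, peripheral: Peripheral) -> None:
        self._peripherals[peripheral.name] = peripheral

    def dispatch(self, name: str, command: str) -> None:
        # e.g. the pilot's UI sends "nod" to a gesture attachment
        self._peripherals[name].on_command(command)


# Usage: a hypothetical gesture attachment that can nod on command.
hub = BeamPeripheralHub()
hub.register(Peripheral("gesture-arm", 0x1234, 0x5678, lambda cmd: print(f"gesture: {cmd}")))
hub.dispatch("gesture-arm", "nod")
```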

Thinking of Beam as a platform in this way would certainly extend its capabilities (in particular for industry), and would also put Beam in a much stronger position at the center of an ecosystem.  Would love to see that happen.

Learning to skate

[Photo: my son Theo at hockey practice]

For the past few winters, I’ve been teaching my kids to ice skate.  Above is my son Theo at hockey practice a few weeks ago.

At a certain point along the way, I got the bug and realized that skating was awesome and hockey was a beautiful sport.  So for the past year or so, I’ve been playing adult rec hockey through a great program here in Boston called StinkySocks.

The thing is, I’ve never played ice hockey before, and am only a so-so skater (maybe above average for regular people, but way way below hockey quality).  So it’s been a steep learning curve.  What’s been so illuminating about it is the combination of how hard it is — meaning, how unnatural some of the moves are at first — and how quickly progress does happen with enough practice.

I played last night — my third game of this season — and realized that while I’m not all the way there yet, I’m much much more comfortable on the ice than I was at the beginning of last year.  I’m skating backwards, making hard turns, and just generally keeping good balance most of the time.  Reflecting back on the past year, it’s really satisfying to feel those changes sink in, and what it’s pointed out to me is how much change is accomplished by a series of small steps, rather than a single big bang.

This whole process has also been a great exercise in learning online.  Turns out that YouTube is full of video tutorials on the minutia of ice skating.  For example, check this one out, on “backwards crossovers” (the skill I’m working on right now):

It’s just so great to be studying something — whether that’s law or chemistry or programming or ice skating — and be able to benefit from such great resources.  It continues to amaze me how much time and effort people will invest in building educational resources online for others.

As with a lot of things, the trick here seems to be developing a habit and a routine.  Since my son started hockey lessons in November, we’ve been going skating every Saturday afternoon, and for the past month I’ve been playing every Wednesday evening.  Getting on the ice twice a week, even just for an hour, has done so much to develop my feel for skating.  This is true for lots of other things, and has been a great reminder to me of how much routine and patience matter when building any new skill or habit.

Finally, I’ve been thinking a lot lately about my Uncle Gerry, who passed away over Christmas.  He’s the one I credit with teaching me to love winter sports, skiing and skating included.  He was 87, and in his day was a monster hockey player, winning an NCAA championship in 1957.  Here’s a picture of Gerry on the ice during his youth:

[Photo: Gerry on the ice]

So perhaps part of why I’ve been so into this lately is the way that Gerry was fading, while my own son was growing up.  And part of it was wanting to get that feeling into my legs that I know he knew and loved.  It’s been a fun journey and I’m hoping I can keep the practice going.

Zero-rating: putting Net Neutrality to the test

It’s been an intense 10 months since the FCC approved its latest Open Internet rules (aka Net Neutrality).

On the wired side, we’ve seen the unbundling of content, as channels such as HBO (via HBO Now) and ESPN (via Sling TV) have split from cable to go “over-the-top” with direct-to-consumer offerings.  These are a direct result of the clear FCC rules prohibiting broadband providers from throttling, degrading, or otherwise fucking with this internet traffic.  This is clearly pro-consumer, as people can now buy the channels they want unbundled from the crap they don’t, and it’s pro-innovation, in that even the smallest video startup is now competing on even footing with the big guys — I can launch a video service tomorrow that competes head-on with HBO or ESPN, and both of us have exactly the same distribution, without having to cut a deal with the cable company.

On the wireless side, it’s been much more of a circus, as wireless providers experiment with a variety of so-called “zero-rating” plans.  Zero-rating is the practice of selectively exempting certain content from wireless data caps.  Zero-rating isn’t monolithic — there are many ways one can do it, some worse than others — which is why the FCC didn’t explicitly rule on zero-rating, but rather left it up for review on a case-by-case basis.

The two cases that are happening right now are T-Mobile’s Binge On, which exempts certain video providers from data caps (and throttles the speed of all video), and Facebook’s Free Basics (formerly Internet.org), which offers free access to Facebook and partner content to mobile users in India and Africa.

Both have been controversial, and Free Basics wildly so.

The question that we’ve been wrestling with is: if you believe that exemptions from data caps are pro-consumer (and this is not a given), then to what extent do these programs enable or limit open competition?  To what extent are they “open” or “neutral”?  To what extent are the underlying platforms controlling access, playing favorites, and limiting competition?

Looking at it that way, Free Basics is really, really, really bad, and Binge On is just kind of bad.

With Binge On, any “qualifying provider” can join the program and have their video content exempted from participating users’ data caps (here are the exact terms).  So, you still need to jump through a hoop to get outside of the cap, but theoretically anyone can do it; you don’t need to cut a special deal with T-Mobile.  Then, T-Mobile also throttles download speeds of all video for participating customers (regardless of whether the source is a Binge On partner).  While this is sucky and disingenuous, and clearly violates the FCC’s open internet rules, it doesn’t have as huge a direct impact on competition & innovation as Free Basics does.

The questions Binge On raises are: are data caps necessary at all, and what impact does throttling video have on video innovation and on investment in network capacity?  Those are valid questions, central to the theory of the virtuous cycle of investment in content and infrastructure, and to the reason for the FCC’s ban on throttling by content provider or by class of content (in this case, video).

Free Basics, on the other hand, is creating a Facebook-controlled walled garden — a modern-day “AOL on the Internet” — where partners must both be approved on a one-by-one basis and submit to having their content completely proxied and remixed through Facebook’s platform.  This post — Free Basics is a Nightmare on the Internet — has a very detailed breakdown of the issues with Free Basics.

The hundreds of millions of users who join the “internet” via Free Basics won’t be joining the real internet; they’ll be joining “Facebooknet” — a limited, controlled version of the internet that lives inside of Facebook.  This is clearly not a charitable program offering access to millions of unconnected users, but rather a brilliantly evil user acquisition and business development strategy.

So, the question then becomes, what would be a better way to deliver internet access to the hundreds of millions of people who will be coming online in the next decade?  How can we ensure that they get connected, and also ensure that they benefit from the diversity, openness and innovation of the real internet?  How can we design access programs that are pro-consumer, pro-competition and pro-innovation?

I won’t go into all the details here, but there are a few ideas, including:

Related are the points that internet access in India is already growing very quickly without Free Basics (India grew from 300M to 400M mobile users in 2014) and that smartphone purchase is actually the most expensive part of getting online, not data access.

So, I suppose that the “network level” innovation that’s happening here is good, in that it’s teasing out all the possible schemes and giving us a real close look at the details of each.  My view is that, despite its warts, T-Mobile’s Binge On is closer to the spirit of bringing users the whole internet as quickly and cheaply as possible, while Free Basics is closer to an ingeniously evil world domination scheme.

Hello, 2016

Breaking the ice — been off the blogs for quite a while now.

Looking forward to this year, the way I tend to every year.  2015 was a tough one for me personally — went through a bunch of shit on the family front that demonstrated both how tough life can be and how resilient people are.

I’m incredibly thankful for my friends, family and colleagues, who continue to inspire, support and challenge my family and me.

Let’s get it on.