
The History of Email

This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.


QWERTYUIOP

— Text of the first email ever sent, 1971

The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could potentially be somewhat usable by others. One thing which was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.

Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.

The computers which sent (and received) the first email.

The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices we have today. Messaging was first born in the time sharing era, when users wanted the ability to message other users of the same time shared computer.

Unix machines have a command called write which can be used to send messages to other currently logged-in users. For example, if I want to ask Mark out to lunch:

$ write mark
write: mark is logged in more than once; writing to ttys002

Hi, wanna grab lunch?

He will see:

Message from [email protected] on ttys003 at 10:36 …
Hi, wanna grab lunch?

This is absolutely hilarious if your coworker happens to be using a full-screen terminal program like vim, which will not take kindly to random output appearing on the screen.

Persistent Messages

When the mail was being developed, nobody thought at the beginning it was going to be the smash hit that it was. People liked it, they thought it was nice, but nobody imagined it was going to be the explosion of excitement and interest that it became. So it was a surprise to everybody that it was a big hit.

— Frank Heart, director of the ARPANET infrastructure team

An early alternative to Unix called Tenex took this capability one step further. Tenex included the ability to send a message to another user by writing onto the end of a file which only they could read. This is conceptually very simple; you could implement it yourself by creating a file in everyone's home directory which only they can read:

touch ~/messages        # create the mailbox file
chmod 0622 ~/messages   # owner: read/write; everyone else: write-only

Anyone who wants to send a message just has to append to the file:

echo "🍕" >> /Users/zack/messages

This is, of course, not a great system because anyone could delete your messages! I trust the Tenex implementation (called SNDMSG) was a bit more secure.


In 1971, the Tenex team had just gotten access to the ARPANET, the network of computers which was a main precursor to the Internet. The team quickly created a program called CPYNET which could be used to send files to remote computers, similar to FTP today.

One of these engineers, Ray Tomlinson, had the idea to combine the message files with CPYNET. He added a command which allowed you to append to a file. He also wired things up such that you could add an @ symbol and a remote machine name to your messages and the machine would automatically connect to that host and append to the right file. In other words, running:

SNDMSG zack@cloudflare

would append to the /Users/zack/messages file on the host cloudflare. And email was born!


Unfortunately, the CPYNET format did not have much of a life outside of Tenex. It was necessary to create a standard method of communication which every system could understand. Fortunately, this was also the goal of another similar protocol: FTP (the File Transfer Protocol), which sought to create a single way by which different machines could transfer files over the ARPANET.

FTP originally didn't include support for email. Around the time it was updated to run over TCP (rather than the NCP protocol which the ARPANET historically used), the MAIL command was added.

$ ftp
< open bbn

> 220 HELLO, this is the BBN mail service

< MAIL zack

> 354 Type mail, ended by <CRLF>.<CRLF>

< Sup?
< .

> 250 Mail stored

These commands were ultimately split out from FTP and formed the basis of SMTP (the Simple Mail Transfer Protocol) in 1982.
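For comparison, an SMTP conversation follows the same call-and-response shape as the FTP session above. A minimal session in the style of RFC 821 (host and mailbox names are invented for illustration) looks like this, where lines beginning with < are sent by the client and lines beginning with > are the server's replies:

```
> 220 bbn Simple Mail Transfer Service ready
< HELO client.example
> 250 bbn
< MAIL FROM:<zack@client.example>
> 250 OK
< RCPT TO:<mark@bbn>
> 250 OK
< DATA
> 354 Start mail input; end with <CRLF>.<CRLF>
< Sup?
< .
> 250 OK
```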


The format for defining how a message should be transmitted (and often how it would be stored on disk) was first standardized in 1977:

Date: 27 Aug 1976 0932-PDT
From: Ken Davis <KDavis at Other-Host>
Subject: Re: The Syntax in the RFC
To: George Jones <Group at Host>,
    Al Neuman at Mad-Host

There’s no way this is ever going anywhere…

Note that at this time the ‘at’ word could be used rather than the ‘@’ symbol. Also note that this use of headers before the message predates HTTP by almost fifteen years. This format remains nearly identical today.

The Fifth Edition of Unix used a very similar format for storing a user's email messages on disk. Each user would have a file which contained their messages:

From MAILER-DAEMON Fri Jul 8 12:08:34 1974
From: Author <[email protected]>
To: Recipient <[email protected]>
Subject: Save $100 on floppy disks

They’re never gonna go out of style!

From MAILER-DAEMON Fri Jul 8 12:08:34 1974
From: Author <[email protected]>
To: Recipient <[email protected]>
Subject: Seriously, buy AAPL

You’ve never heard of it, you’ve never heard of me, but when you see
that stock symbol appear. Buy it.

– The Future

Each message began with the word 'From', meaning that if a message happened to contain 'From' at the beginning of a line, it needed to be escaped lest the system think it marked the start of a new message:

From MAILER-DAEMON Fri Jul 8 12:08:34 2011
From: Author <[email protected]>
To: Recipient <[email protected]>
Subject: Sample message 1

This is the body.
>From (should be escaped).
There are 3 lines.
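The escaping rule is simple enough to sketch in a few lines of code. The following Python function is an illustration of the mbox convention, not how any historical mail system was actually implemented: it appends one message to an mbox-style file and escapes body lines that begin with 'From ':

```python
import time

def append_to_mbox(path, sender, body):
    """Append one message to an mbox file, escaping 'From ' lines in the body."""
    lines = []
    # Each message starts with a 'From ' separator line naming the sender and date.
    lines.append("From %s %s" % (sender, time.asctime()))
    for line in body.splitlines():
        # A body line starting with 'From ' would look like the start of a new
        # message, so it is escaped with a leading '>'.
        if line.startswith("From "):
            line = ">" + line
        lines.append(line)
    lines.append("")  # a blank line terminates the message
    with open(path, "a") as mbox:
        mbox.write("\n".join(lines) + "\n")
```

Reading the file back then requires reversing the escape, which is why messages that legitimately contain ">From" at the start of a line are a classic source of mbox corruption.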

It was technically possible to interact with your email by simply editing your mailbox file, but it was much more common to use an email client. As you might expect there was a diversity of clients available, but a few are of historical note.

RD was an editor which was created by Lawrence Roberts who was actually the program manager for the ARPANET itself at the time. It was a set of macros on top of the Tenex text editor (TECO), which itself would later become Emacs.

RD was the first client to give us the ability to sort, save, and delete messages. There was one key thing missing though: any integration between receiving a message and sending one. RD was strictly for consuming emails you had received; to reply to a message, it was necessary to compose an entirely new message in SNDMSG or another tool.

That innovation came from MSG, which itself was an improvement on a client with the hilarious name BANANARD. MSG added the ability to reply to a message. In the words of Dave Crocker:

My subjective sense was that propagation of MSG resulted in an exponential explosion of email use, over roughly a 6-month period. The simplistic explanation is that people could now close the Shannon-Weaver communication loop with a single, simple command, rather than having to formulate each new message. In other words, email moved from the sending of independent messages into having a conversation.

Email wasn't just allowing people to talk more easily, it was changing how they talked. In the words of J. C. R. Licklider and Albert Vezza in 1978:

One of the advantages of the message systems over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense… Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.

The most popular client from this era was called MH and was composed of several command line utilities for doing various actions with and to your email.

$ mh

% show

(Message inbox:1)
Return-Path: joed
Received: by (5.54/ACS)
id AA08581; Mon, 09 Jan 1995 16:56:39 EST
Message-Id: <[email protected]>
To: angelac
Subject: Here’s the first message you asked for
Date: Mon, 09 Jan 1995 16:56:37 -0600
From: “Joe Doe” <joed>

Hi, Angela! You asked me to send you a message. Here it is.
I hope this is okay and that you can figure out how to use
that mail system.


You could reply to the message easily:

% repl

To: “Joe Doe” <joed>
cc: angelac
Subject: Re: Here’s the first message you asked for
In-reply-to: Your message of “Mon, 09 Jan 1995 16:56:37 -0600.”
<[email protected]>

% edit vi

You could then edit your reply in vim, which is actually pretty cool.

Interestingly enough, in June of 1996 the guide “MH & xmh: Email for Users & Programmers” was actually the first book in history to be published on the Internet.

Pine, Elm & Mutt

All mail clients suck. This one just sucks less.

— Mutt Slogan

It took several years until terminals became powerful enough, and perhaps email pervasive enough, that a more graphical program was required. In 1986 Elm was introduced, which allowed you to work with your email through a full-screen interface.

Elm Mail Client

It was followed by other full-screen TUI clients like Mutt and Pine.

In the words of the University of Washington’s Pine team:

Our goal was to provide a mailer that naive users could use without fear of making mistakes. We wanted to cater to users who were less interested in learning the mechanics of using electronic mail than in doing their jobs; users who perhaps had some computer anxiety. We felt the way to do this was to have a system that didn’t do surprising things and provided immediate feedback on each operation; a mailer that had a limited set of carefully-selected functions.

These clients gradually became easier for non-technical people to use, and it was becoming clear how big of a deal this really was:

We in the ARPA community (and no doubt many others outside it) have come to realize that we have in our hands something very big, and possibly very important. It is now plain to all of us that message service over computer networks has enormous potential for changing the way communication is done in all sectors of our society: military, civilian government, and private.

It's like when I did the referer field. I got nothing but grief for my choice of spelling. I am now attempting to get the spelling corrected in the OED since my spelling is used several billion times a minute more than theirs.

— Phillip Hallam-Baker on his spelling of 'Referer', 2000

The first webmail client was created by Phillip Hallam-Baker at CERN in 1994. Its creation was early enough in the history of the web that it led to the identification of the need for the Content-Length header in POST requests.
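The problem is easy to see in a raw request: without a Content-Length header, a server reading a POST has no way to know where the body ends. A hypothetical form submission (the path and field names are invented for illustration) only works because the header announces how many bytes of body follow:

```
POST /sendmail HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 24

to=mark&subject=lunch%3F
```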

Hotmail was released in 1996. The name was chosen because it included the letters HTML, to emphasize it being 'on the web' (it was originally stylized as 'HoTMaiL'). When it launched, users were limited to 2MB of storage (at a time when a 1.6GB hard drive cost $399).

Hotmail was originally implemented using FreeBSD, but in a decision I'm sure every engineer regretted, it was moved to Windows 2000 after the service was bought by Microsoft. In 1999, hackers revealed a security flaw in Hotmail that permitted anybody to log in to any Hotmail account using the password 'eh'. It took until 2001 for 'hackers' to realize you could access other people's messages by swapping usernames in the URL and guessing at a valid message number.

Gmail was famously created in 2004 as a '20% project' of Paul Buchheit. Originally, few within Google believed in it as a product. The team had to launch using a few hundred Pentium III computers no one else wanted, and it took three years before they had the resources to accept users without an invitation. It was notable both for being much closer to a desktop application (using AJAX) and for the unprecedented offer of 1GB of mail storage.

The Future

US Postal Mail Volume, KPCB

At this point email is a ubiquitous enough communication standard that it’s very possible postal mail as an everyday idea will die before I do. One thing which has not survived well is any attempt to replace email with a more complex messaging tool like Google Wave. With the rise of more targeted communication tools like Slack, Facebook, and Snapchat though, you never know.

There is, of course, a cost to that. The ancestors of the Internet were kind enough to give us a communication standard which is free, transparent, and standardized. It would be a shame to see the tech communication landscape move further and further into the world of locked gardens and proprietary schemas.

We’ll leave you with two quotes:

Mostly because it seemed like a neat idea. There was no directive to ‘go forth and invent e-mail’.

— Ray Tomlinson, answering a question about why he invented e-mail

Permit me to carry the doom-crying one step further. I am curious whether the increasingly easy access to computers by adolescents will have any effect, however small, on their social development. Keep in mind that the social skills necessary for interpersonal relationships are not taught; they are learned by experience. Adolescence is probably the most important time period for learning these skills. There are two directions for a cause-effect relationship. Either people lacking social skills (shy people, etc.) turn to other pastimes, or people who do not devote enough time to human interactions have difficulty learning social skills. I do not [know] whether either or both of these alternatives actually occur. I believe I am justified in asking whether computers will compete with human interactions as a way of spending time. Will they compete more effectively than other pastimes? If so, and if we permit computers to become as ubiquitous as televisions, will computers have some effect (either positive or negative) on personal development of future generations?

— Gary Feldman, 1981

Use Cloudflare Apps to build tools which can be installed by millions of sites.
Build an app →
If you’re in San Francisco, London or Austin: work with us.

Our next post is on the history of the URL!

Source: CloudFlare

A New API Binding: cloudflare-php

Back in May last year, one of my colleagues blogged about the introduction of our Python binding for the Cloudflare API and drew reference to our other bindings in Go and Node. Today we are complementing this range by introducing a new official binding, this time in PHP.

This binding is available via Packagist as cloudflare/sdk; you can install it using Composer simply by running composer require cloudflare/sdk. We have documented various use-cases in our "Cloudflare PHP API Binding" KB article to help you get started.

Alternatively should you wish to help contribute, or just give us a star on GitHub, feel free to browse to the cloudflare-php source code.

PHP is a controversial language, and there is no doubt there are elements of bad design within it (as is the case with many other languages). However, love it or hate it, PHP is a language of high adoption: as of September 2017, W3Techs reports that PHP is used by 82.8% of all websites whose server-side programming language is known. In creating this binding the question clearly wasn't about the merits of PHP, but whether we wanted to help drive improvements to the developer experience for the sizeable number of developers integrating with us whilst using PHP.

In order to help those looking to contribute to or build upon this library, I'm writing this blog post to explain some of the design decisions made in putting it together.

Exclusively for PHP 7

PHP 5 introduced the ability to type hint on the basis of classes and interfaces; this opened up (albeit seldom used) parametric polymorphic behaviour in PHP. Type hinting on the basis of interfaces made it easier for those developing in PHP to follow the Gang of Four's famous guidance: "Program to an 'interface', not an 'implementation'."

Type hinting has developed slowly in PHP: Scalar Type Hinting landed in PHP 7.0 after a few rounds of RFCs, and PHP 7.0 also introduced Return Type Declarations, allowing return values to be type hinted in a similar way to arguments. In this library we use Scalar Type Hinting and Return Type Declarations extensively, thereby restricting the backward compatibility that's available with PHP 5.

Had backward compatibility been maintained, these improvements to type hinting simply would not have been usable and the associated benefits would have been lost. With Active Support no longer offered for PHP 5.6, and Security Support for the entirety of PHP 5.x little over a year away from ending, we decided the additional coverage wasn't worth the cost.

Object Composition

What do we mean by a software architecture? To me the term architecture conveys a notion of the core elements of the system, the pieces that are difficult to change. A foundation on which the rest must be built.

— Martin Fowler

When getting started with this package, you’ll notice there are 3 classes you’ll need to instantiate:

$key = new \Cloudflare\API\Auth\APIKey('[email protected]', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);
$user = new \Cloudflare\API\Endpoints\User($adapter);

echo $user->getUserID();

The first class being instantiated is called APIKey (a few other classes for authentication are available). We then proceed to instantiate the Guzzle class and the APIKey object is then injected into the constructor of the Guzzle class. The Auth interface that the APIKey class implements is fairly simple:

namespace Cloudflare\API\Auth;

interface Auth
{
    public function getHeaders(): array;
}

The Adapter interface (which the Guzzle class implements) makes explicit that an object built on the Auth interface is expected to be injected into the constructor:

namespace Cloudflare\API\Adapter;

use Cloudflare\API\Auth\Auth;
use Psr\Http\Message\ResponseInterface;

interface Adapter
{
    public function __construct(Auth $auth, String $baseURI);
}

In doing so, we define that classes which implement the Adapter interface are to be composed using objects made from classes which implement the Auth interface.

So why am I explaining basic Dependency Injection here? It is critical to understand because, as the design of our API changes, the mechanisms for Authentication may vary independently of the HTTP Client, or indeed of the API Endpoints themselves. Similarly, the HTTP Client or the API Endpoints may vary independently of the other elements involved. Indeed, this package already contains three classes for the purpose of authentication (APIKey, UserServiceKey and None) which need to be used interchangeably. This package therefore considers the possibility of changes to different components in the API and seeks to allow these components to vary independently.

Dependency Injection is also used where the parameters for an API Endpoint become more complicated than what is permitted by simpler variable types; for example, this is done for defining the Target or Configuration when configuring a Page Rule:


$key = new \Cloudflare\API\Auth\APIKey('[email protected]', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);
$zones = new \Cloudflare\API\Endpoints\Zones($adapter);

$zoneID = $zones->getZoneID("");

$pageRulesTarget = new \Cloudflare\API\Configurations\PageRulesTargets('*');

$pageRulesConfig = new \Cloudflare\API\Configurations\PageRulesActions();

$pageRules = new \Cloudflare\API\Endpoints\PageRules($adapter);
$pageRules->createPageRule($zoneID, $pageRulesTarget, $pageRulesConfig, true, 6);

The structure of this project is based overall on simple object composition; this provides a far simpler object model for the long term and a design with higher flexibility. For example, should we later want to create an Endpoint class which is a composite of other Endpoints, it becomes fairly trivial to build this by implementing the same interface as the other Endpoint classes. As more code is added, we are able to keep the design of the software relatively thinly layered.

Testing/Mocking HTTP Requests

If you're interested in helping contribute to this repository, there are two key ways you can help:

Building out coverage of endpoints on our API
Building out test coverage of those endpoint classes

The PHP-FIG (PHP Framework Interop Group) has put together a standard on how HTTP responses can be represented in an interface; this is described in the PSR-7 standard. This response interface is utilised by our HTTP Adapter interface, in which responses to API requests are type hinted to this interface (Psr\Http\Message\ResponseInterface).

By using this standard, it's easier to add further abstractions for additional HTTP clients and to mock HTTP responses for unit testing. Let's assume the JSON response is stored in the $response variable and we want to test the listIPs method in the IPs Endpoint class:

public function testListIPs() {
    $stream = GuzzleHttp\Psr7\stream_for($response);
    $response = new GuzzleHttp\Psr7\Response(200, ['Content-Type' => 'application/json'], $stream);

    $mock = $this->getMockBuilder(\Cloudflare\API\Adapter\Adapter::class)->getMock();
    $mock->expects($this->once())
        ->method('get')
        ->with($this->equalTo('ips'), $this->equalTo([]))
        ->willReturn($response);

    $ips = new \Cloudflare\API\Endpoints\IPs($mock);
    $ips = $ips->listIPs();
    $this->assertObjectHasAttribute("ipv4_cidrs", $ips);
    $this->assertObjectHasAttribute("ipv6_cidrs", $ips);
}
We are able to build a simple mock of our Adapter interface by using the standardised PSR-7 response format; when we do so, we are able to define what parameters PHPUnit expects to be passed to the mock. With a mock Adapter class in place, we are able to test the IPs Endpoint class as if it were using a real HTTP client.


Through building on modern versions of PHP, applying good Object-Oriented Programming theory, and allowing for effective testing, we hope our PHP API binding provides a developer experience that is pleasant to build upon.

If you're interested in helping improve the design of this codebase, I'd encourage you to take a look at the PHP API binding source code on GitHub (and optionally give us a star).

If you work with Go or PHP and you're interested in helping Cloudflare turn our high-traffic customer-facing API into an ever more modern service-oriented environment, we're hiring for Web Engineers in San Francisco, Austin and London.
Source: CloudFlare

Project Jengo Strikes Its First Targets (and Looks for More)

Jango Fett by Brickset (Flickr)

When Blackbird Tech, a notorious patent troll, sued us earlier this year for patent infringement, we quickly discovered that the folks at Blackbird were engaged in what appeared to be the broad and unsubstantiated assertion of patents, filing about 115 lawsuits in less than 3 years without yet winning a single one of those cases on the merits in court. Cloudflare felt an appropriate response would be to review all of Blackbird Tech's patents, not just the one it asserted against Cloudflare, to determine if they are invalid or should be limited in scope. We enlisted your help in this endeavor by placing a $50,000 bounty on prior art that proves the Blackbird Tech patents are invalid or overbroad, an effort we dubbed Project Jengo.

Since its inception, Project Jengo has doubled in size and provided us with a large number of high-quality prior art submissions. We have received more than 230 submissions so far, and have only just begun to scratch the surface. We have already come across a number of standouts that appear to be strong contenders for invalidating many of the Blackbird Tech patents. This means it is time for us to launch the first formal challenge against a Blackbird patent (besides our own), and to distribute the first round of the bounty to 15 recipients, totaling $7,500.

We’re just warming up. We provide information below on how you can identify the next set of patents to challenge, help us find prior art to invalidate those targets, and collect a bit of the bounty for yourselves.

I. Announcing Project Jengo’s First Challenges (and Awards!)

We wrote previously about the avenues available to challenge patents short of the remarkable cost and delay of federal court litigation; the exact cost and delay that some Blackbird targets are looking to avoid through settlement. Specifically, we explained the process of challenging patents through inter partes review (“IPR”) and ex parte reexamination (“EPR”).

Based on the stellar Prior Art submissions, we have identified the first challenge against a Blackbird patent.

U.S. Patent 7,797,448 (“GPS-internet Linkage”)

The patent, which has a priority date of October 28, 1999, describes in broad and generic terms “[a]n integrated system comprising the Global Positioning System and the Internet wherein the integrated system can identify the precise geographic location of both sender and receiver communicating computer terminals.” It is not hard to imagine that such a broadly-worded patent could potentially be applied against a massive range of tech products that involve any GPS functionality. The alarmingly simplistic description of the patented innovation is confirmed by the only image submitted in support of the patent application, which shows only two desktop computers, a hovering satellite, and a triangle of dotted lines connecting the three items.

Blackbird filed suit in July 2016 against six companies asserting this ‘448 patent. All of those cases were voluntarily dismissed by Blackbird within three months — fitting a pattern where Blackbird was only looking for small settlements from defendants who sought to avoid the costs and delays of litigation. A successful challenge that invalidates or limits the scope of this patent could put an end to such practices.

Project Jengo’s Discovery – The patent claims priority to a provisional application filed October 28, 1999, but Project Jengo participants sourced four different submissions that raise serious questions about the novelty of the ‘448 patent when it was filed:

Research literature from April 1999 describing a system utilizing GPS cards for addressing terminals connected to the internet: "GPS-Based Geographic Addressing, Routing, and Resource Discovery," Tomasz Imielinski and Julio C. Navas, Communications of the ACM, Vol. 42, No. 4 (pp. 86–92).
A request for comment from the Internet Engineering Task Force on a draft research paper from November 1996 on “integrating GPS-based geographic information into the Internet Protocol.” IETF RFC 2009
One submission included seven patents that all pre-date the priority date of the '448 patent (some as early as July 1997) and address similar, yet more specific, efforts to use GPS location systems with computer systems.
And on a less-specific but still relevant basis, one submitter points to the APRS system that has been used by Ham Radio enthusiasts and has tagged communications with GPS location for decades.

Project Jengo participants who provided these submissions will each be given an award of $500!

What we plan to do — Because this patent is written (and illustrated) in such broad terms, Blackbird has shown a willingness to sue under this patent, and Project Jengo has uncovered significant prior art, we think this case provides a promising basis to challenge the ‘448 patent. We are preparing an ex parte reexamination of the ‘448 patent, which we expect to file with the US Patent and Trademark Office in October. Again, you can read about an ex parte challenge here. We expect that after review, the USPTO will invalidate the patent. Although future challenges may be funded through crowdsourcing or other efforts, we will be able to fund this challenge fully through funds already set aside for Project Jengo, even though this patent doesn’t implicate Cloudflare’s services.

US Patent 6,453,335 (the one asserted against Cloudflare)

Project Jengo participants have also done an incredible job identifying relevant prior art on the patent asserted against Cloudflare by Blackbird Tech. Blackbird claims that the patent describes a system for monitoring an existing data channel and inserting error pages when transmission rates fall below a certain level. We received a great number of submissions on that patent and are continuing our analysis.

Cloudflare recently filed a brief with the U.S. District Court in which we pointed to eleven pieces of prior art submitted by Jengo participants that we expect will support invalidity in the litigation:

World-Wide Web Proxies, by Ari Luotonen and Kevin Altis;
Intermediaries: New Places for Producing and Manipulating Web Content, by Rob Barrett and Paul P. Maglio;
U.S. Patent No. 5,933,811, “System and method for delivering customized advertisements within interactive communications systems;”
U.S. Patent No. 5,826,025, “System for annotation overlay proxy configured to retrieve associated overlays associated with a document request from annotation directory created from list of overlay groups;”
U.S. Patent No. 5,937,404, “Apparatus for bleaching a de-activated link in web page of any distinguishing color or feature representing an active link;”
U.S. Patent No. 6,115,384, “Gateway architecture for data communication bandwidth-constrained and charge-by-use networks;”
Performance Issues of Enterprise Level Web Proxies, by Carlos Maltzahn, Kathy Richardson, and Dirk Grunwald;
Microsoft’s Proxy Server 1.0;
U.S. Patent No. 5,991,306, “Pull based, intelligent caching system and method for delivering data over a network;”
Novell’s BorderManager server; and
Exploring the Bounds of Web Latency Reduction from Caching and Prefetching, by Thomas Kroeger, Darrell Long, and Jeffrey Mogul.

Bounty hunters who first submitted the prior art already used in the case will each receive $500. The Project Jengo Team at Cloudflare is continuing analysis of all the prior art submissions, and we still need your help! The litigation is ongoing, and we will continue to provide a bounty for prior art submissions that are used to invalidate the Blackbird patents.

The Search Goes On… with new armor

These challenges to Blackbird patents are only the start. Later in this blog post, we provide an extensive report on the status of the search for prior art on all the Blackbird patents, and include a number of new patents we’ve uncovered. Keep looking for prior art on the Blackbird patents; we still have plenty of bounties to award and a number of patents ripe for a challenge. You can send us your prior art submissions here.

Even if you didn’t receive a cash award (yet), our t-shirts are about to hit the streets! Everyone who submitted prior art to Project Jengo will be receiving a t-shirt. If you previously made a submission, we’ve emailed you instructions for ordering your shirt. This offer will remain open for the duration of Project Jengo for anyone that submits new prior art on any of the Blackbird patents. Enjoy your new armor!

II. Elsewhere in Project Jengo…

Ethics complaint update

We know Blackbird’s “new model” is dangerous to innovation and merits scrutiny, so we previously lodged ethics complaints against Blackbird Tech with the bar disciplinary committees in Massachusetts and Illinois. This week, we sent an additional letter to the USPTO’s Office of Enrollment and Discipline asking them to look into possible violations of the USPTO Rules of Professional Conduct. As in the other jurisdictions, the USPTO Rules of Professional Conduct prohibit attorneys from acquiring a proprietary interest in a lawsuit (Rule 11.108(i)) and from sharing fees or equity with non-lawyers (Rules 11.504(a) and 11.504(d)). Blackbird’s “new model” seems to violate these ethical standards.

Getting the word out
Cloudflare’s Project Jengo continues to drive conversation about the corrosive problem of patent trolls. Since our last blog update, our efforts have continued to draw attention in the press. For the latest, you can see…

“The hunted becomes the hunter: How Cloudflare’s fight with a ‘patent troll’ could alter the game,” — TechCrunch

“Cloudflare gets another $50,000 to fight ‘new breed of patent troll,’” — Ars Technica

“This 32-year-old state senator is trying to get patent trolls out of Massachusetts,” — TechCrunch

III. A Progress Report on Challenges to the Blackbird Patents

As you continue your search for prior art as part of Project Jengo, we’ve updated our chart of Blackbird patents, and identified a number of new patents and applications we’ve found that Blackbird has acquired.

As reflected on the chart (in red), so far 5 of the patents are being challenged or have been invalidated. In addition to our pending challenge of the ‘448 patent:

In June 2016, Blackbird Tech sued software maker kCura LLC and nine of its resellers for allegedly infringing U.S. Patent 7,809,717, described as a “Method and Apparatus for Concept-based Visual Presentation of Search Results.” kCura makes specialized software used by law firms during document review. The judge in kCura’s case invalidated every claim in the ‘717 patent because the “abstract idea” of using a computer instead of a lawyer to perform document review cannot be patented.
US Patent 6,434,212 — This patent seeks protection for “a pedometer having improved accuracy by calculating actual stride lengths.” Numerous challenges to this patent have been filed with the Patent Trial and Appeal Board (PTAB), which adjudicates some IPR challenges. There are currently challenges against this “Pedometer” patent that have been filed by Garmin, TomTom and Fitbit.
US Patent 7,129,931 — This patent for a “multipurpose computer display system” is undergoing IPR challenge brought by Lenovo, Inc.
US Patent 7,174,362 — This patent for a “method and system for supplying products from pre-stored digital data in response to demands transmitted via computer network” was challenged by Unified Patents, Inc.

In the charts below, we’ve highlighted 11 Blackbird patents (in green) that seem ripe for challenge, based on a combination of factors: they seem broadly applicable to important industries, may have already been the basis of a Blackbird lawsuit, and/or already have some valuable prior art sourced through Project Jengo. We’ll take submissions on any Blackbird patent, but these are the patents we’re focused on, and they should get extra attention from Project Jengo participants seeking a bounty.

After our review is a bit further down the road, we’ll make all the prior art we’ve received on these patents available to the public so that anyone facing a challenge from Blackbird can defend themselves. We hope to have that information posted by the end of October.

And finally, Cloudflare is funding the first ex parte challenge fully out of funds it has set aside or had donated to Project Jengo. Should any of these patents hit home for you, and you are interested in supporting this fight financially, please reach out to [email protected]

-Happy Hunting!

-Project Jengo Submissions


-Newly Uncovered Blackbird Patents

Source: CloudFlare

#FuerzaMexico: A way to help Mexico Earthquake victims

Photo Credit: United Nations Photo (Flickr)

On September 19, 1985, Mexico City was hit with the most damaging earthquake in its history. Yesterday, exactly 32 years later, Mexico’s capital and neighbouring areas were hit again by a large earthquake that caused significant damage. While the scale of the destruction is still being assessed, many people have lost their lives and the lives of many more have been disrupted. Today, many heroes are on the streets focusing on recovery and relief.

We at Cloudflare want to make it easy for people to help out those affected in central Mexico. The Mexico Earthquake app will allow visitors to your site to donate to one of the charities helping those impacted.

The Mexico Earthquake App takes two clicks to install and requires no code change. The charities listed are two well respected organizations that are on the ground helping people now.

Install Now

If you wanted to add your own custom list of charities for disaster relief or other causes, feel free to fork the source of this app and make your own.

#FuerzaMéxico: A way to support those affected by the SismoMX

On September 19, 1985, Mexico City was struck by one of the worst earthquakes in its history. Yesterday, exactly 32 years later, Mexico City and surrounding areas were hit by another strong earthquake. Although the full scale of the destruction is not yet known, a great many people have suffered. Thousands of Mexican heroes are focused on search, rescue, and rebuilding.

At Cloudflare, we want to do our part and make sure that donations for those affected can arrive easily. Our Mexico Earthquake app will allow visitors to your website to donate to civil organizations supporting the victims.

Install Now

If you want to add other organizations and/or charities, you can modify the source code available here.

Cloudflare and Google Offer App Developers $100,000 in Cloud Platform Credits

When Cloudflare started, our company needed two things: an initial group of users, and the finances to fund our development. We know most developers face the same issues. The Cloudflare Apps Platform solves the first problem by allowing third parties to develop applications that can be delivered across Cloudflare’s edge network to any of the six million sites powered by Cloudflare. The Cloudflare Developer Fund alleviates the second by giving developers the financial support they need to fund their company. Today, we are excited to announce another initiative that will make it possible for developers to make their app dreams a reality.

Cloudflare and Google Cloud are working together to offer developers the resources needed to quickly launch and scale Cloudflare Apps. This partnership will give any Cloudflare Apps developer the chance to access a wide range of benefits, including $3,000 to $100,000 in Google Cloud Platform (GCP) credits for one year at no cost. Some startups will also be eligible for 24/7 technical support and access to GCP’s technical solutions team. This supports a core belief of the Cloudflare Apps initiative: we want developers to focus on building great Apps, not worry about paying for infrastructure. Hundreds of startups have already built successful applications on Cloudflare Apps, and those applications have grown to serve hundreds of thousands of users. This program with Google Cloud significantly decreases the friction of getting up and running on Cloudflare Apps, allowing the next generation of developers and startups to make their living by building Apps.

How does it work?

$100k for Exceptional Apps: After an approval process, your App could be awarded $20,000 in Cloud Credits, extendable to $100,000 based on usage in the first year.

Up to $3,000 for early stage startups: If you are an early-stage startup you are entitled to a $3,000 Google Cloud credit. Even if you aren’t quite a startup yet, you are entitled to $500 if you are a first-time Google Cloud Platform user, and $200 if you are an existing user.

Collect your credits now!

Truth Lives in the Open: Lessons from Wikipedia

Victoria Coleman, CTO, Wikimedia Foundation

Moderator: Michelle Zatlyn, Co-Founder & COO, Cloudflare

Photo by Cloudflare Staff

MZ: What is the Wikimedia Foundation?

VC: We pride ourselves on aiming to make information broadly available.

We’re the 5th most visited site on the planet.
We are the guardians of the project. There are 12 projects that we support, Wikipedia is the most prominent but there are others that will be just as influential in the next 5 years: e.g. Wikidata.
299 languages

Let’s also talk about the things that we don’t do: we don’t do editing. We edit as community members but not as members of the foundation.

We don’t monetize our users, content, or presence. We are completely funded by donations, with an average donation of $15.

MZ: If your mission is to help bring free education to all, getting to everyone can be hard. So how do you get access to people in hard-to-reach areas?

VC: It’s definitely a challenge. We built this movement primarily in North America and Europe, but our vision goes beyond that. We started doing some critically refined and focused research in Brazil, Mexico, and Nigeria.

Trying to understand what global communities need in other parts of the world.

We found that some people don’t know who we are, so we need to communicate to these people who we are.

MZ: We just heard on the last panel, and the notion of fake news came up. What is the foundation’s point of view around fake news? How can you give us hope for the future?

VC: First of all, the Foundation does not deal in news. One of our core principles is that we deal in existing knowledge. What we do is make it as reliable as we can possibly make it. We have a community of 200,000 editors.

In our community, we live by principles such as reliability of the source (“citation needed”), and we ask our community of more than 200,000 people to make sure these principles are upheld. We are vocal and we hold each other accountable.
“Democracy dies in darkness.” “Truth thrives in openness.” We create quality content through openness.

MZ: When something controversial is posted on Wikipedia, how quickly does it get pulled?

VC: It depends on how front-of-mind the topic is. Sometimes it happens in seconds.

Content that is incorrect very rarely persists past a week or month.

Medicine and Military history are the two most popular Wiki topics.

An ER doctor is one of our most prolific editors; he said, “If I can edit Wikipedia, I can reach 45 million people a month.”

MZ: One of the reasons I went into tech rather than the medical field was because it was another way to help people at scale. Everything on Wiki has to have a source, a citation. But that must be hard. What are the implications for this?

VC: We take that very seriously. This past June, we were able to liberate 45% of all citations from the platform. Suddenly 60 million citations became available for everybody to use. This is very important material for research.

Being able to share the citations, e.g. about the Zika virus, is what allowed this community to accelerate finding solutions. We advocate vociferously for openness, for content that is not behind a paywall.

A while ago they decided not to allow citations or references to the Daily Mail in the encyclopedia, because they felt that, as a source of news, it was less reliable.

MZ: Has that since been reversed?

VC: I don’t believe so.

MZ: You mentioned that Foundation builds other tools; what are some of the other open-source tools you are building that our audience might find useful?

VC: For example, MediaWiki is being used by the Department of Energy and the intelligence community. The intelligence community has a product called Intellipedia that gets 350,000 hits per day. It’s another way of making tools through which people share knowledge.

Another example is ToolForge: taking data sets and making them available to volunteers who write tools.

So you come to us and we will give you what you need; not just computing and storage but data sets to work with. And people make magic…

MZ: The Foundation is a study in people coming together around the world, an example of optimism. Wikipedia is one of the top 5 sites; how do you keep that position? What’s next for the Foundation?

VC: We want to continue to scale. It’s a matter of a lot of introspection; this will tell you about how you work. We’re at the tail end of an 18-month consultation project with thousands of volunteers in our community all over the world. I came from a corporate background, and you know how strategy is made there: you go into the boardroom and come out and say this is how it is going to be. This is not how it works for us: it’s not our movement, it’s the movement of our volunteers.

We are going to continue focusing on making knowledge available to everybody. They told us they want us to go beyond the confines of North America and Europe.

Now the challenge is to figure out how to get there.


Q: Silicon Valley has a gender issue; what about Wikipedia? Who is the Wiki community? Who is invited to participate, what articles are challenged or not? How does the leadership of the community meaningfully address these issues going forward?

VC: You bring up a very good point. I must say that we are fairly balanced within the Foundation itself. But I sympathize and agree. People who edit can use whatever identity they want, so we don’t actually know the gender identities of our editors.

E.g. one of our researchers noted differences in men’s and women’s bios: women’s had more info about their spouses.

The first step is recognizing the problem. From a tech perspective, we are building tools to help reduce bias where possible, but the real solution is not to have bias in the first place. We are doing a lot of work with community engagement to make the experience of becoming an editor more welcoming for women; our community engagement department is working with people to help them make their first edits.

Q: Things in Wikipedia are footnoted, often with links from the web, which are brittle and changeable. Can there be a partnership between Wikipedia and the Internet Archive to keep links?

VC: Yes. We look to build partnerships with everyone.

All our sessions will be streamed live! If you can’t make it to Summit, here’s the link:

Will Data Destroy Democracy?

Lawrence Lessig, Roy L. Furman Professor of Law and Leadership, Harvard Law School and Darren Bolding, CTO, Cambridge Analytica

Moderator: Matthew Prince, Co-Founder & CEO, Cloudflare

Photo by Cloudflare Staff

MP: If there’s one person responsible for the Trump presidency, it seems there is a compelling argument that that might be you.

DB: I very much disagree with that.

MP: How does Cambridge Analytica work, and how did the Trump campaign use it to win the presidency?

DB: We take that data and match it up with lists of voters, and apply data science to come up with ideas about who you might want to sell a product to, or, in the case of politics, this person’s propensity to vote and the candidate they are likely most interested in. We also do all the digital advertising. By combining data with digital advertising, we have lots of power.

MP: So you don’t want to take credit for having won the election, but the campaign’s use of data and targeting was an important factor in the election.

DB: Yes, and what Cambridge did was basically a great turnaround story.

MP: Larry, you ran a presidential campaign focused on one issue: campaign finance reform. Yet the candidate who spent half as much as Hillary Clinton won. Is finance still the issue, or do we need to start thinking about data as the divider?

LL: My slogan was not “fix campaign finance” but “fix democracy first”. This means to fix all the different ways the system denies us a democracy in the sense that we are equal citizens. If you have a congress spending 30-70% of their time raising money, or gerrymandering, that is not a congress concerned with representing its citizens. This is not a system that produces citizenship driven to electing a president.

Our electoral college means that the vote of Republicans here in California is worth nothing. These are all the ways in which we have a failed democracy.

I wanted to at least have a voice in the debate to rally around these issues.

What happened is the Democratic Party changed the rules just as I qualified to be on that stage. But I would suggest that the man who won took the same set of slogans (“Drain the Swamp”) and ran as full-force as he could, targeting as his opponent a woman who was precisely “sold out” to these interests.

I think it is the fundamental issue.

MP: One of the core tenets of democracy seems like a shared understanding.

If you have 15 different targeted messages, does that corrode the shared understanding?

LL: The truth is, in the half of DB’s world focused on commerce, it’s the best of all possible times. In the half of the architecture of communication focused on giving people access to Netflix, it’s the best of all possible times. We have to recognize that the internet is the best and worst of all possible times at the same time.

So when you shift to democracy, the same technologies undermine our ability to do democracy the way we did before.
It used to be that the process of winning election was same as building coalition.
It was in front, in plain sight, and when you won, you knew why.

When you have technology like Cambridge Analytica has perfected, the process of winning election is totally separate from governing.

MP: So Darren are you destroying democracy?

DB: The act of democracy is allowing people to choose who their representatives are. That doesn’t imply that everyone has to have the same shared context. I think it’s possibly beneficial that people with disparate points of view and interests should have those interests addressed.

MP: But you work for a company that says they have a unique tech to do this better. What is it about the tech that makes it so much better without corroding shared understanding on the other side?

DB: The shared understanding out there is almost more cultural than anything. I think that having a conversation with you about the regulations that Germany might impose doesn’t prevent you from knowing about other aspects of foreign policy with Germany; it’s just a specific thing you care about. Now if the messages are contradictory, that’s when it becomes a problem. But as long as people are maintaining consistent points of view, it’s not wrong to communicate about issues that are important to a specific set of persons.

LL: I wouldn’t say that CA produced a diffuse culture where there is no shared understanding. But what we don’t recognize enough is how extraordinary the 1960s and ’70s were for democracy, when everybody was focused on three television shows every night. America basically understood the same stuff.

MP: The former chair of the FCC says that maybe this is actually the natural state; today, in the ’60s and ’70s, three companies controlled profitable technology and spent more time being neutral and elevating conversation. Is this time period what we should be striving for, or was that a reaction to fear of regulation?

LL: I agree this was an extraordinary period. It defined how we understand democracy, and that period is gone.

I don’t want to return to it, though. Those three shows were too narrow in a number of ways. My point is that we don’t yet have a good model for how to work a democracy where we all live in our own niche worlds of the basic facts.

The architecture of media today is just like the architecture of media in the 19th century.

Most journalism was partisan, all about rallying the troops to their own version of the truth.

The difference is that we have no way of knowing what the public thought then. We could only know what the politicians thought. We didn’t even have polling.

MP: But back then, you also had a particular understanding of what you were reading; today, FB has an algorithm, there is an editorial voice, and you don’t know what that is. There is some neutrality.

LL: Back then, media drove people to vote in a certain way or not. But today, the views of people about whether we should go to war in Iraq or whether immigrants deserve to be blocked, the views of the people matter directly.

We’re supposed to have a representative democracy, but we increasingly have a direct democracy composed of a public that doesn’t know anything about the issues, because we live in niche-market bubble worlds that don’t inform us the way our broader world has in the past.

DB: Data science is part of the solution. I can use a tool on FB to tell me what percentage of my wall is Democrat or Republican.

MP: So that’s the argument that we are only just getting used to tech. We will get better at being able to interpret these things and see through them.

DB: These tools also make it easier for smaller groups to get their points of view out there into the general market. It costs less to get their message out there. You couldn’t do that before, because all the power was in a small number of hands. So data science available to anybody through FB is actually quite powerful.

I for one think that if you are accurately representing what the populace is interested in, that is not a bad thing for democracy; that’s a good thing.

If the public is fractured, that’s what the public deserves.

LL: As a kid, as a Republican, I was celebrating the internet; I was saying exactly what DB was. But we didn’t think enough about the ways it would change the context in which we could have the conversation.

We have never had the ability of someone to speak to 30 million people without an editor standing between. This is new. But now a guy can tweet, and it is seen by 30 million people, and we don’t yet know how to run a democracy with that dynamic.

I hitchhiked across the Soviet Union when I was young, and was told: “In the Soviet Union we have a better system of free speech than you do in America. We wake up and realize that every newspaper is lying to us, so we have to read 7-8 different newspapers before we understand the truth. This develops a better culture of critical understanding than you have in the US.”

We have become the Soviet Union; our parents don’t yet know how to deal with a world in which everyone is lying to you.

But our kids know, and can figure it out based on 7 or 8 different feeds.

MP: So is the solution time? If Cambridge Analytica won this election, what is the next trick? Who will win the next one?

DB: I think personalization of information will allow individuals to better communicate with people they know. Rather than have one person broadcasting, you’ll have personal relationships. The dispersion of central control over the message out to individuals is very powerful. Now instead of Donald Trump talking at you, you have someone else…

MP: It’s a way to trick the kids, then, isn’t it? If your friends are telling you something, that’s how you get the cynics.

LL: It’s certainly a wonderful development, but the problem is if they’re doing that on the basis of a totally different understanding of the world. Some people think climate change is real; others think it’s false. If there’s no common ground of understanding, that may be good for winning elections, but not for actually governing.

DB: you’re building a virtual community in each “town,” and each community is discussing what is important to them.

MP: I was just talking to an engineer in China, who said that democracy is great but it always drops below its lowest common denominator. How do we fix that if that’s the case?

DB: Our original founders wrestled with that idea; we have to keep trying.


Q: Does Cambridge Analytica make problems like Willie Horton worse or better?

DB: I don’t think it plays that much of a role one way or another. Your context is the ads that played during the Bush campaign?

I think it just makes the message more amplified.

LL: Here we have a real disagreement. You have an assumption that people can’t be inconsistent in how they represent their world view. If we have a technology that perfects the ability to elect people, but not through public conversation, that encourages this dramatic…

DB: as long as the campaign is consistent and does not change its point of view…

LL: when have we ever seen that?

Q: Where do you draw the line on ethical microtargeting? Are you creating models to target people on the basis of racial messaging?

DB: I don’t think Cambridge pushed any racially charged messages–

MP: Do you identify people… do you have a category for racists?

DB: we had 15 models. It never even came up.

MP: How do we set a framework or a social contract so that Cambridge Analytica doesn’t have a racist profile?

LL: In the story that broke today about ProPublica and FB, FB basically had an anti-Semitic ad category to market to people who hate Jews, and advertisers had used algorithms inside of FB to target anti-Semites.

Mark Zuckerberg is interested in finding what people want and catering to it; and that’s fine. In 99% of what we care about, that’s what we want. But in democracy, that’s a terrifying possibility.

Q: People make decisions based on knowledge & information they consume. We are now talking about driving mass behavior, which is different from just giving people what they want.

How can data science be used responsibly? What regulations do we need when social networks are driving mass behavior? If it’s not regulation, what other structures do we need?

DB: If you look at the EU, they have the GDPR, and there’s a control over how much information is available. People being aware of how much information they have to give up is going to be somewhat helpful. If you know what information you are giving up, you know what you are able to be targeted on. There will also need to be some sort of code of ethics about what is right and what is wrong to do with data. I am inherently not a fan of regulation. When you have that, entrenched players will create regulatory capture which will stifle innovation.

There should be some sort of element there. “Algorithms will find the worst in us if you let them go nuts.” And this is not all happening on one side of the spectrum.

LL: It’s fun and hopeful to talk about codes of ethics stifling the worst, but if the worst is profitable, the code of ethics will be eaten by the profit.

In one of Steve Bannon’s last interviews, he said, “What we want is the Democrats to talk about identity politics every single day until the next election, and we’re going to talk about economic policy and we will kill them.” And you begin to realize that racism is just them playing the Democrats. In our world of 2-second attention spans, what do you do to resist that?


As Seen on TV

Chris Cantwell, Co-Creator and Show Runner, Halt & Catch Fire

Moderator: John Graham-Cumming, CTO, Cloudflare

Photo by Cloudflare Staff

CC: First off, we have very low ratings! The story came from my father, who worked in computers in the early ’80s in Dallas and later in California. The dynamic between those characters was influenced by my dad.

This was largely a story about reverse engineering. The underdog story was interesting: not Bill Gates, not Silicon Valley, but a different story about the computer world.

JGC: And you managed to do 4 seasons.

CC: In four seasons we go from ’83 to ’94; we cover everything from small networks to the building of the internet backbone and the rise of search and the web.

JGC: I watched it before I came; it gave me some bad memories because there were AOL disks

CC: We have an incredible prop team. Some of it comes from the Rhode Island Computer Museum; I’d have to ask our prop master, but he might have manufactured them from images online.

JGC: This is a show about tech but also about money; these people are trying to build companies. The same people trying again and again. Is that a metaphor for recycling something?

CC: Yes, I think so; a big theme is reinvention, on a personal level and in terms of what they’re working on. Reinvention as a theme championed by Silicon Valley is a really universal concept.

We learned from our research and tech advisor that there are ideas that float into the ether, diffuse and shared, and at some point one person catches something and the idea takes off. That idea “wins,” but really it’s a chaotic mess of people playing with possibilities.

JGC: What strikes me is how the characters are trying to build something and they don’t know what they’re doing. Then one of the characters talks about building an index for the web. In some ways that’s the nature of creation; you don’t know what direction you’re going. There’s a link-up with art there.

CC: Absolutely. In season 4 a character has been in the basement since 1990; we realized that it took a while for the web to take off. We portrayed a guy in a basement collecting Post-its with handwritten URLs. It went from a new website every few days to a website every second. So he has collected them, and we have a visual representation of his whiteboard. He gives them to his friends, who give them to someone else; she builds her own website that links to each site.

Organically people discover that site, and then they have a proto-viral site on their hands.
It didn’t start that way.
It was like the yellow pages, but they don’t really exist anymore.

JGC: It also struck me that what they're doing in the '83 clip is really quite technical. Tech has gotten much more complicated but also much simpler.

CC: Yes; with the rise of the computing industry we've also experienced the accessibility of tech.
You can go to Best Buy right now and buy a cinematic camera that you once had to rent, after going to film school for years and not knowing what your footage looked like until the film was developed.

Accessibility is a great power and virtue of the industry. We tracked this over the course of the season.

In the first season they feel like young upstarts. By season 4 they are struggling to keep up with what's going on.

It’s amazing to see this happening even today, given the democratization of so many things.

JGC: The characters are always optimistic—this isn’t Black Mirror or Westworld. What happened to that optimism?

CC: On a character level, we follow people who are always focused on the next thing.

We’re placing our happiness on what’s to come, and there is a kind of grasping that the characters are constantly engaged in, born of a real belief in what they are doing. And yet they are never satisfied with where they are. Over the course of the journey of the series, these five characters realize that about their lives and wonder whether they can actually step off the wheel.

On the tech side, when people started pulling apart and indexing web pages, it was done for fun. First experiences on the internet were just about exploring, and there was a joy in that.

What is unspoken on the show is a tremendous ambivalence that couldn’t happen now.

“We might be on a train that we are no longer piloting.”

Tech is moving so fast that we can’t adapt as quickly as the things we are building.

JGC: Is it also that we can’t imagine the consequences of what we are doing?

CC: I think so. There isn't much foresight; the characters on our show don't have the benefit of hindsight.

The characters in the show talk a lot about the future. Future is a heavy word. People sometimes say: “There’s no such thing as the future; it’s just people trying to sell you a crappy version of the present”. We can never predict it.

JGC: if you look at ‘83, they have a physical machine, and by ‘94, it’s all software. So a lot of what you’re trying to portray is really quite boring; how do you dramatize sitting in front of the computer?

CC: Again, low ratings! It's interesting; since the pilot, I've loved it when characters have something to hold.

Our pilot director was a filmmaker named Juan Campinelli [?]; we turned on an IBM for him, and he turned to us and said: "That's what it does?" It was so boring for him. Now we have screens that are blank, and actors typing and building websites that are inert pages; that's even less interesting.

JGC: Is this some sort of terrifying metaphor? The machine doesn’t know what we are typing?

CC: We tried to turn one machine on, and it actually caught on fire.

JGC: How do you research this show?

CC: Carl has been an incredible resource on our show; he’s a venture capitalist, has done everything under the sun; he does this because he loves it.

We liberal arts guys needed someone like Carl to help us understand what was going on.

Everything we have tried to put on screen we have tried to get right, out of respect for historical telling. But we had to go from perfectly right to defensible, because sometimes even our sources began to disagree with each other.

I just learned that we accidentally used the 2013 reissue of Doom; people got pissed off. At a certain point, we're doing the best we can. Hopefully the human drama is carrying you through.

JGC: How do you get inside characters’ heads?

CC: The actors do their homework to try to understand as much as possible, but we try to convey that these characters are masters of their field; the viewers have to trust that they know what they're talking about.

It’s really about character stories. “Technology can be a delicious metaphor for so many things.” E.g. Automated vs human touch.

You can pit the characters that have so much animus toward each other against each other.

If we get sidetracked in the writers' room talking about printer drivers, we've got to bring it back to the human drama.


Q: As a greybeard who has worked in Silicon Valley for 40 years, I noticed it was mainly engineers running things at first, then a transition to business types in the 1990s. Do you agree with that phenomenon, and will it affect your future storylines?

CC: Season 4 is our last; there is a push and pull between those who build and understand the tech, and those who sell it.

When you have someone who is just “the suit” / the ideas guy, there’s a really interesting struggle that we try to dramatize throughout.

As the tech gets more ephemeral and seems like magic, the business guys may have gained the upper hand. In the later episodes you see the venture capitalists holding all the chips, and the engineers fewer and farther between.

Q: I’m assuming you’ve seen the movie Hackers, with a visualization of traveling through the network. Have you thought about other ways of visualizing the activity of sitting down at the computer to do this work?

CC: We have. It's tricky. We once tried a sequence with two characters moving through a digital community they created online, but it looked bad.

Sometimes we can visualize, and sometimes we have to go with what’s real, and I think sometimes a viewer can respond to the latter more.

Q: What about the notion of origin story? Do you think there are 4 seasons of drama buried behind every million dollar company?

CC: The way we determine that is by meeting with the people themselves on the ground; that's where we've gotten the best stories. Carl has amazing stories. So I'm sure the same could be said of Cloudflare.

All our sessions will be streamed live! If you can’t make it to Summit, here’s the link:
Source: CloudFlare

Private Companies, Public Squares

Daphne Keller, Director, Stanford Center for Internet & Society, and Lee Rowland, Senior Staff Attorney, ACLU Speech, Privacy & Technology Project

Moderator: Matthew Prince, Co-Founder & CEO, Cloudflare

Photo by Cloudflare Staff

MP: Technology and law seem like they are colliding more and more. Tech companies are being asked to regulate content. For a largely non-lawyer audience, give us some foundations about basic rules when you have content on your network?

LR: Communications 2.0 makes the First Amendment almost quaint. The vast majority of the speech we exchange happens online. When it is hosted by private companies, the First Amendment doesn't constrain it. So this is a space governed by norms and the individual choices of people like Matthew. In the wake of Cloudflare's decision to take down the Daily Stormer, Matthew penned a piece saying, in effect, it's scary that we have this power, and I exercised it. We have a completely unaccountable private medium of communication.

MP: There are legal shields for companies here. What is intermediary liability, and why is it a position at Google/Stanford?

DK: No one knows what it means; it's a set of laws that tell platforms when they have to take down user speech because that speech is illegal. In the US, platforms don't have to take anything down; but outside of the US, the rule is that when platforms discover something illegal they have to take it down or face liability themselves. The problem is that anytime someone alleges that something is illegal, it can be taken down. So the rules about when platforms should do this are very consequential for the practical free speech rights of users on the internet.

LR: We can't overstate how much these rules have created today's online ecosystem: Yelp would not exist without intermediary liability protections. Any content platform exists because of these laws passed in the late '90s.

MP: In both the US and the EU, laws are coming under threat; we tend to focus on US, but Germany’s top priority in the last G7 meeting was limiting intermediary liability.

LR: There's an opportunity here for companies with ties to the US to make sure that we don't allow countries with less protective speech regimes to ratchet down to the lowest common denominator; multinational pressures risk going there. I think companies like Cloudflare have a duty to uphold the values that reflect our First Amendment landscape. Do we want a world where Nazis cannot have a website? It's not a comfortable thing to talk about, but I want the ability to see and find speech that reflects human beliefs, because that's how we know it is out there. Enforcing that kind of purity only hides beliefs; it does not change them. Companies that are part of web infrastructure have a fundamental responsibility to provide a neutral platform; it's other people's job to see that speech and counter it.

DK: There’s also an ugly dynamic between governments and major platforms; private companies are taking over government functions, which is weird because they are not subject to government constraints. This creates an opportunity where private companies can do things that government can’t but maybe want to do e.g. collecting user data.

In Europe, the Commission reached an agreement with the four big platforms on the EU hate speech code of conduct: they would voluntarily take down hate speech as described in the agreement, which is not the same as hate speech as defined in law. They are voluntarily agreeing with the government to take down hateful speech. Many Americans find this odd.

MP: Is this a fight that we can win? Views on free expression ideals have changed over the last four years; "don't be evil" doesn't translate well into German. What argument persuades the rest of the world that we should be a neutral platform?

LR: These borders have real impacts on speech; but for American consumers and companies providing internet access to American users, we do have the ability to help people avoid racing to moral panics. No one is out there picketing AT&T because Richard Spencer has a cell phone account with them.

MP: We have had a tradition of newspapers having editorial perspective, conservative or liberal.
Is Facebook like the modern newspaper? Or are they like the printing press? What is the analogy that makes sense?

DK: In Europe, people are inclined to say that Facebook needs to admit that it is a media company. The difference between Facebook and a media company is that the media company hand-selected everything it published, whereas Facebook is an open platform.

MP: But if you put up a link to the Daily Stormer on Facebook with support for the site, it was taken down; if you were critical of the organization, however, it was kept up. That sounds like a media company.

DK: They take down a lot. That’s not the same as saying they could be legally accountable for everything that is transmitted on their platform.

LR: I do think that people on a gut level hold newspapers accountable for their world view.
Facebook already exists as a content review company; they’re a platform but they’ve always had algorithms and curation. Each of these is a choice that affects what you hear/see.

MP: “it’s the algorithm it’s neutral”

LR: That has always struck me as horseshit…

MP: Does it surprise you there’s not a Fox News search engine?

LR: This has been a constant conversation in the net neutrality debate. Internet service providers have said: we don't discriminate, but we want the right not to take you to a certain website.

Can you have a bespoke ISP? The Disney ISP that makes damn sure you don't see porn? Maybe; no one has done it. People are willing to replicate their own bubble, and there seems to be enough demand for that.

DK: The fact that there isn’t a Fox News search engine is actually important.

People who are saying, Facebook should not be able to take over my political speech are also noting that there is no place else to go: friends, etc. are all on Facebook. It matters when there’s somewhere else to go. If there’s only one place to go, it’s easier to imagine there being government regulations on them.

MP: The question is: is there a scale at which that's the right way to think of your status? Steve Bannon is proposing that giant companies should be regulated as utilities. If you are Facebook and you are the only place to reach this audience, does that mean you have another set of obligations?

DK: I don't think that works. This may apply to your business, but the service Facebook offers creates a community that people want to come to because it is not full of hate speech and bullying. Without that kind of curation, they would no longer have a value proposition for their users.

MP: That suggests that there are different rules depending on where you are in the stack. What should a registrar do vs. DNS vs. browser provider? What is the framework you’d use to determine where internet is or is not neutral vs. curated?

LR: I want to admit that as a First Amendment advocate, there are interests on the other side. I may think it is a dangerous precedent, but you have the right to decide who to keep and kick off.

For us at the ACLU, we focus on two things: government subsidies, and the centrality and importance of the service.

Are you a neutral … or common carrier? Are you actively curating content?

Generally there isn't a model where you distinguish based on content; it isn't the most profitable path to success.

MP: The ACLU has been a force for free speech in the US; who is fighting for the free and open web outside of this country?

DK: There are organizations around the world that work on this. Some of the best efforts are in Brazil, Argentina, India; much smaller in EU. We’re paying attention to these differences.

It’s important for smaller companies, for journalistic interests to show up and let them know.

MP: What are the arguments you’ve found that are persuasive in these conversations about regulation? What works?

DK: I think people get it when you say you are sacrificing sovereignty by standing back and asking an American company to decide this for you. In some cases, the economic argument is also persuasive. Outside the US, American lawyers yelling about the First Amendment do not get much respect, but there are other important points you can make.

LR: Domestically, if we're talking about convincing legislators to think about roles, there's the Communications Decency Act. At the time it was passed, in the late '90s, it was overwhelmingly bipartisan because conservatives and Republicans knew Silicon Valley is liberal.

In the last 15 years, there has been a moral panic about human trafficking online. Some of the unholy alliances come when women's advocates on the left and libertarians on the right agree with each other. SESTA is the first time Congress has amended the law since the late '90s.

The only thing that’s ever effective besides a lawsuit is reminding people that they might be the goose or the gander next time. You might not always be on the right side.

Facebook agreed to the hate speech rules, and many human rights activists' voices have been silenced under that agreement; there was an Intercept article on human rights activists silenced by over-censoring.

MP: What are 1 or 2 things that you are worried about, that people aren’t thinking enough about right now?

DK: There is tremendous pressure to build technical filters to find and suppress content, and a widespread belief that this tech can be built to identify terrorist speech. Companies are under pressure and end up agreeing; the result is that videos documenting atrocities in Syria are being taken down. The push for mechanized content removal is very dangerous.

LR: I totally agree, and I also highlight the importance of due process. If someone censors our speech we can say, hey, wait a minute; but you don't have that option with Facebook.
Algorithmic ratcheting combined with a lack of due process is a problem.


Q: Besides basic issue about media making judgments about censorship, there are two additional dangers: 1) what makes companies like Cloudflare more or less susceptible to pressure from governments; 2) the danger of companies colluding on these things.

DK: On vulnerability: what makes you susceptible to pressure from a government is having people on the ground who can be arrested, assets that can be seized, wanting to enter a market in that country, or already having a market you're afraid to lose. In terms of collusion, I worry about a monoculture that systematically discriminates against the speech of particular people.

Companies that don’t want to be regulated decide to self-regulate.

Q: One of the challenges with the open internet is its openness; what about the dark web, which is encrypted? Is that potentially an answer, where regulating free speech becomes difficult because we don't know where it comes from?

LR: I think it addresses the free-speech-values problem; but for the average internet user, it will probably create a less attractive ecosystem. If you want anonymity, that's great, but is it an actually useful web? If you want a useful web that is free, effective, and accessible, the answer is probably no.

Source: CloudFlare

Betting on Blockchain

Juan Benet, Founder, Protocol Labs, and Jill Carlson, GM, Tezos Foundation

Moderator: Jen Taylor, Head of Product, Cloudflare

Photo by Cloudflare Staff

JT: Tell us what blockchain is.

JC: Going back to 2008, the advent of blockchain came with the Bitcoin white paper.

The word Blockchain wasn’t mentioned at that point, but that was the advent of this tech.

What it solved was a niche problem called the double-spend problem: the creation of digital cash.

What you see in a bank account isn't digital cash. The problem in cryptography was how to create digital cash that doesn't rely on a 3rd-party intermediary. This is what Bitcoin created.

JB: "Blockchain" packs in lots of stuff; it's useful as a brand. Like "internet" and "web" in the early '90s, the meaning is fuzzy.

Properties that all of these apps have in common:

Academic definition: A blockchain is an indelible chain of blocks; once you insert information into one of them it remains.

Marketing definition: many applications have been developed over the last few years, and all have to do with public verifiability: reliance on cryptographic methods to achieve goals like clearing payments, with the ability to check and verify.

Across the board, it's about removing 3rd parties from the equation and establishing a publicly verifiable state. The trust protocol removes the trust needed in individual parties.
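The "indelible chain of blocks" in the academic definition can be sketched in a few lines. This is a toy illustration under assumed names and structure, not any real chain's format: each block commits to the SHA-256 hash of its predecessor, so rewriting any earlier block breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    """Each block commits to its predecessor's hash."""
    return {"data": data, "prev": prev_hash}

# Build a three-block chain.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("alice pays bob 5", block_hash(genesis))
b2 = make_block("bob pays carol 2", block_hash(b1))
chain = [genesis, b1, b2]

def verify(chain):
    """Valid iff every block's 'prev' matches the hash of the block before it."""
    return all(chain[i + 1]["prev"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

print(verify(chain))                 # True: untampered
genesis["data"] = "genesis (edited)" # try to rewrite history...
print(verify(chain))                 # False: every later link now breaks
```

Real chains add timestamps, Merkle trees of transactions, and a consensus rule for choosing between competing chains; the hash-linking shown here is the part that makes the history indelible.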

It points to a return to what people called for in the early 2000s. Decentralization of the power structures that control the internet.

Removing power from entrenched places.

JT: you’re both doing great work with organizations looking at moving blockchains forward. What is currently happening with this tech?

JC: I work with Tezos, a blockchain protocol and platform used to build decentralized apps. It hearkens back to the concept of a hard fork of a blockchain.

A hard fork is how Bitcoin split into two different assets: Bitcoin and Bitcoin Cash.

It comes back to the idea of decentralization. Decentralization offers many things; one problem it raises is how you push upgrades to the tech. Generally there is one centralized party; with blockchain it's different. There is lots of politicized infighting among communities and users of the tech; Tezos seeks to solve this infighting and enable coordination.

If everyone here owned one Tezos token, everyone would have one vote as to how the roadmap proceeds.

We also seek to innovate on formal verification of the protocol's core codebase, to make applications easier and more accessible.

This comes back to the language we’ve chosen for the protocol and implementations on top of that. The tech will underpin trillions of dollars worth of industry, and it should be built with that in mind.

JB: We work on IPFS and …
IPFS is a decentralized hypermedia protocol.
Think of the web, if the web itself had no notion of locations or sites but was more decentralized than it is now: content would not be addressed by where it is and who owns it, but by what the information itself is. The same information would have the same address. That isn't how the web works today. We want to rethink the stack for how the web works: content addressing rather than location addressing, and a peer-to-peer structure.
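Content addressing can be sketched with a plain SHA-256 hash. This is a simplification for illustration (IPFS actually uses multihash-based CIDs, and the store below is a hypothetical stand-in): the address is derived from the bytes themselves, so any peer serving the content can be checked against the address.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address content by what it is (its hash), not where it lives."""
    return hashlib.sha256(data).hexdigest()

# The same bytes always map to the same address, no matter who serves them.
a = content_address(b"<html>hello</html>")
b = content_address(b"<html>hello</html>")
c = content_address(b"<html>goodbye</html>")

print(a == b)  # True: identical content, identical address
print(a == c)  # False: different content, different address

# A naive content-addressed store: fetch by hash, verify on receipt.
store = {a: b"<html>hello</html>"}

def fetch(addr: str) -> bytes:
    data = store[addr]
    assert content_address(data) == addr, "served bytes don't match the address"
    return data
```

Because the address commits to the content, it doesn't matter which peer serves the bytes; the receiver can always verify that what arrived is what was asked for.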

Think about how easy it is for content to become hypercentralized and censored.

Also efficiency: channels of low bandwidth and so on.

If we can move entire sections of the web to a remote location and serve them at the protocol level, we can take what we've learned from CDNs and build it into the protocols themselves.

Finally, it's a way of thinking: if you have a decentralized way of creating protocols that organize work in a public network, can you organize a system to store data for all of it? A utopian decentralized market where storage is a proper commodity, allowing ISPs to participate in cloud storage.

Today we have a hypercentralized storage system as well.

JT: The power of decentralization could really change the world. What are some of the other benefits or uses that we could apply this tech to?

You get to work with the community in such a rich way; what other use cases?

JC: My inspiration comes from investment banking; I started off as a bond trader.

The real innovation is not just decentralization, as with Bitcoin, but also reshaping entire market structures.

Reshaping market structures that today depend on rent-seeking middlemen; the logical conclusion of this is redefining what it means to own something in digital form.

Today we don't really own anything that is in digital form; a Bank of America database represents my ownership. So I get excited thinking about how completely different market structures will look in a couple of years.

JB: At its core, this has to do with establishing a decentralized computing platform where you can run programs and encode business logic, where participants can't overturn results, and where there is no litigation over the events that take place. What happens to law when you can express legal agreements in a digital context?

Transactions are easier if you don’t have to draft agreements and think about them in depth every time.

The major innovation with blockchain is that law and finance were right away ripe for changes, in terms of investments and ownership.

You have the first real wave of smart contracts, finance and law are immediately being changed.

The potential is massive: you can change pretty much everything about how we reason about markets and about providing services and utilities. These are the first public utilities that are completely international and self-governing.

You can run all kinds of services: cloud storage, cloud computing changed in fundamental ways.

That said, it will take a while. The UX is still atrocious, and the quality of the platform is bad relative to modern standards; considering migrating an application into this context is almost a non-starter. The tech has to catch up to enable developers to change how they maintain applications.

You'll have developers able to create something like Twitter, put it into the network, and never have to worry about maintaining it anymore, because participants will. It's a completely different way of approaching development.

Now is the right time to get involved to help develop.

When you have a 100-line contract in code, you want to leverage all you can to make sure you get the right answer.

JC: Precisely because there are no 3rd party intermediaries to call about reversing an action.


Q: I am an investment banker, but I don't understand what mining is.

JB: Think about a decentralized consensus protocol, where a bunch of parties propose values for the head of the chain and have to agree on what that value is.
Mining is a way of requiring that lots of work or resource expenditure be exerted in order to propose a vote on a value.
You have a whole bunch of people with computers hooked up, trying to give one value weight and declare a winner.
It's like a voting system where you use resource expenditure…
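That voting-by-resource-expenditure idea is proof of work. Here is a toy sketch, far simpler than Bitcoin's actual difficulty mechanism: search for a nonce that makes the block's hash start with a run of zeros. Finding one takes many hash attempts; checking one takes a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash has `difficulty` leading zero hex digits.

    The brute-force search is the "resource expenditure": there is no
    shortcut, you just try nonces until one works.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("alice pays bob 5")
digest = hashlib.sha256(f"alice pays bob 5{nonce}".encode()).hexdigest()
print(digest[:4])  # "0000": expensive to find, cheap for anyone to verify
```

The asymmetry is the point: a valid nonce is proof that the proposer burned real resources, which is what lends weight to their proposed value.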

JC: proof of stake algorithm vs. proof of work system

On any blockchain network you need a validator who verifies certain things about transactions and then broadcasts that batch or block to the network. Under proof of work, the validator gets elected based on how much computational power they put into the system.

The next generation of systems will use proof of stake, where the election process relies on how many tokens you hold; this creates a new incentive structure.
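A proof-of-stake election can be sketched as a weighted lottery. This is a toy model with made-up names; real protocols derive the randomness from the chain itself rather than a plain seed: validators are chosen with probability proportional to the tokens they hold.

```python
import random

def elect_validator(stakes: dict, seed: int) -> str:
    """Pick the next block's validator with probability proportional to stake."""
    rng = random.Random(seed)  # stand-in for protocol-derived randomness
    names = list(stakes)
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}

# Over many rounds, election frequency tracks stake share.
wins = {n: 0 for n in stakes}
for round_num in range(10_000):
    wins[elect_validator(stakes, seed=round_num)] += 1

print(wins)  # roughly proportional to 60 / 30 / 10
```

The incentive change is that influence costs locked-up tokens rather than burned electricity, which is what makes the election mechanism, and its failure modes, so different from proof of work.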

JB: We found a way to give resource expenditure a valuable side effect; otherwise mining is useless. It's useful insofar as it lends weight to your proposed value, but it has no value outside of that.

We found a way to use valuable file storage and computational work, where the resource expenditure actually proves to the network that you have stored files. We also think proof of stake is a valuable area of research for the future.

Governance of these systems will evolve dramatically over the next years.

Q: I'm curious how people are thinking about preventing recentralization, e.g. with smart contracts: everyone has to agree on what the price of wheat is on a given day, and certain nodes have more power. And secondly, how are you thinking about preventing recentralization as you go through these processes of decentralization?

JB: There are many things happening that might cause recentralization. The oracle solution is solid if you have verifiability and if you know they can't charge exorbitant fees.
Our approach is to decentralize in pieces: ship a good-enough solution for now, then go back and decentralize more along the way.

We think about it in terms of storage providers or distribution providers: how to carefully structure things to get as much value as possible.

JC: A running joke in the crypto space is that cryptocurrency has created far more 3rd parties than it has destroyed.

A new protocol has to be very specific about the problem it is trying to solve. If I am in Venezuela using cryptocurrency, the trust problem I'm solving is different from the trust problem in file storage. One protocol won't solve every trust problem.

There are different trust problems and there is no “one protocol to rule them all”

Source: CloudFlare