
Some further points on my network neutrality article in Wired

I published a piece in Wired today called “We’re About to Lose Net Neutrality — And the Internet as We Know It.” It is apparently #1 on Wired, front-paged on Reddit, and drawing many thousands of readers at once.

I’ll use this space to expand on some points. This probably won’t make sense unless you’ve read the Wired article or already know the issues.

1. The legal mumbo jumbo

The court will likely uphold the FCC’s authority to regulate the Internet in many ways (perhaps to do mischief or help out the telcos and cable companies) but likely strike down the FCC’s authority to do one thing: enforce network neutrality. I wrote the piece because I didn’t want people to be confused by language like this: “The FCC has authority under 706 of the Telecom Act to regulate the Internet, and to forbid blocking of websites, but lacks authority under 47 USC s. 153 (forbidding applying common carrier rules to a Title I service) to enforce an anti-discrimination principle.” Translation: net neutrality is dead. Don’t be confused.

2. How can anyone predict the court’s decision?

You can’t be sure. But the oral argument went on for two hours, an hour longer than scheduled, and the judges weren’t hiding their thoughts. The media in the room reported that the court would likely reach the decision I described above, regarding 706 and 153. Many wrongly read that as a “split the baby” decision, when it would in fact be a complete loss for all of us. I wrote the article partly to clear up that misimpression.

Also, many people are afraid to predict decisions, lest they end up with egg on their faces if wrong. I’m willing to take that chance here.

3. EFF’s worst nightmare.

Many great tech advocacy groups support adopting network neutrality rules by law–Free Press, Public Knowledge, and others. The EFF (which is also great) is a partial exception: it generally supports network neutrality as a concept but has been worried about giving the FCC the authority to do any meddling in Internet regulation, including net neutrality. That’s because the FCC is an agency often beholden to telecom and cable companies.

The court will likely decide that the FCC has authority to regulate the Internet broadly under one provision (706 of the Telecom Act) but not to impose network neutrality under another (found at 47 USC s. 153). So the FCC will be empowered to meddle, perhaps to do mischief that benefits the carriers, but forbidden to help Internet users through anything like net neutrality. (My friends at EFF should let me know if I’m wrong.)

 

Cybersecurity Panel


I spoke a few weeks ago in NYC at the Louise Blouin Foundation’s Creative Leadership Summit on a panel about cybersecurity. I emphasized the importance of privacy safeguards and trust.

 


Filtering the Internet is Still a Bad Idea: DCA, ABC, and Steroid Searches

A few days ago, ABC News ran an “investigative” piece called “Group Probes Ease and Danger of Buying Steroids Online.” ABC describes the “group” at issue as “an online watchdog,” the Digital Citizens Alliance. That group determined that some of the millions of available YouTube videos encourage steroid use and that YouTube (which is owned by Google) places ads next to steroid-related videos and search results. They argue that Google and YouTube should be held legally responsible for any illegal content linked or posted.

ABC News could have told the story differently: A Microsoft-backed group led by a public relations firm (but named for an “alliance” of “citizens”) is holding Google and YouTube to a standard that Microsoft itself fails to meet, while effectively arguing for filtering the Internet, all through appeals to the emotional issue of teenage steroid use.

Let’s begin with the big picture and move to the details of this group. 

Filtering the Internet is a terrible idea, even to stop illegal drug sales.

It is awful when teenagers turn to illegal drugs. But perspective is needed. We know some teenagers buy drugs at school; we don’t shut down schools, we don’t search every student, we don’t monitor everything they say, and we don’t require them to get permission from an adult before speaking with one another. We engage in education efforts and responsive actions. We also know that people will use the Internet to communicate about everything from coordinating a democratic revolution and reporting government corruption to idle chitchat to illegal activity.

The Digital Citizens Alliance is actually arguing for a filtered Internet. DCA claims that companies should be liable for any illegal content shared on a site. If Twitter, Google, Facebook, Yahoo, and others were guilty of the acts of all the slanderers, copyright infringers, fraudsters, conspirators, and drug pushers on their sites, they would have to filter all the content on their sites. With a billion users, even if only 0.1% of them are wrong-doers, a platform would be answering for one million wrong-doers. No company could take on the legal risk posed by that many potential bad actors, so these companies would have to filter content in advance. The Digital Citizens Alliance cannot mean that companies simply have to act quickly and take down illegal content once notified; these companies already take down content when it is reported or flagged for violating their terms of service, which forbid illegal activity.

The existing rules strike the right balance. For the past two decades we have had a set of rules that ensure freedom of expression online while limiting illegal activity. Those rules generally enable companies like Twitter, Facebook, Google, and the New York Times online to carry the speech of millions or billions of people, empowering all of us to publish and comment through tweets, posts, pages and videos, or comments on stories. They are able to carry the speech of so many people because they are not liable for all the illegal content posted by every single person. (The laws include the celebrated Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act.) Instead of these companies being liable, the actual wrong-doers are responsible: the slanderers, the sites that traffic in drugs, and so on. Recently, the authorities busted an online drug bazaar and a child prostitution ring without having to change the Internet’s Magna Carta and make tech platforms liable for all the content on their sites. If they were liable, these companies simply would not be able to act as platforms and networks for billions of people. They would have to filter all content in advance and become editors of their platforms, closing opportunities for average speakers.

Companies like Google make huge efforts to remove illegal content. Most platforms that carry the speech of billions of users have to rely in part on users flagging or reporting content. That approach is far more effective, and more respectful of free expression, than attempts to filter through computer algorithms. Go to Twitter: you can “report” every tweet. Check YouTube: every single video has a flag icon. Every piece of content on Facebook can be reported. Considering the number of users and the volume of content shared, this flagging is essential. I wrote about this in some detail here. More briefly: one hundred hours of video are uploaded to YouTube every single minute, and that much content can’t be filtered in advance without requiring YouTube to limit who can post. Google’s search engine indexes trillions of pages and reflects the web; Google can’t filter them all and shouldn’t have to. Google does respond to complaints, though: in a single month it has processed over 18 million requests to remove URLs from its search results based on copyright concerns, and from July 2011 to December 2011 it removed 97% of the requested URLs. Google also makes efforts to ensure ads are not placed alongside illegal content. (I provide the sources in the other post.)
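To make the scale concrete, here is the back-of-the-envelope arithmetic behind that upload figure (the 100-hours-per-minute number is the one cited above; the rest simply follows from it):

$$100\ \tfrac{\text{hours of video}}{\text{minute}} \times 1{,}440\ \tfrac{\text{minutes}}{\text{day}} = 144{,}000\ \tfrac{\text{hours}}{\text{day}} \approx 16\ \text{years of continuous viewing added every single day.}$$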

The Digital Citizens Alliance is a Microsoft-backed group, which is the only reason Microsoft is not their target.

This is an old story. The story is Microsoft’s ongoing strategy of attacking Google in slanted advertisements and through political PR efforts. It’s also the story, it seems, of the copyright industry, which has long argued, in various ways, for pre-filtering all content, including when it attempted to push an infamous censorship bill called SOPA.   

DCA is backed by Microsoft and is not an alliance of citizens.

The Digital Citizens Alliance is not an actual alliance of citizens; it is known to be backed by Microsoft. Techdirt called DCA an obvious “astroturf” group, not a real “grassroots” group. Two of DCA’s three staff members, Tom Galvin and Dan Palumbo, are employees of the DC public relations firm 463 Communications, and the third is also in PR. That is not the makeup of, say, the ACLU, EFF, Consumers Union, or any other legitimate consumer group. The alliance’s advisory board includes someone from the Association for Competitive Technology, an organization that receives over a million dollars from Microsoft every year. I live in DC and know folks at 463, ACT, and Microsoft; in fact, I like everyone I know at those organizations. It’s just obvious to me, and to anyone in DC, that an organization with this backing and structure is not an online watchdog or an advocacy group but a corporate PR vehicle.

This close connection with Microsoft explains why DCA has not attacked Microsoft for exactly the same things. In fact, if you do a Microsoft Bing search for “buy steroids,” you will see ads accompanying the results, but you will not see them for the same search on Google.

It’s understandable why something might fall through the cracks on Bing: the Internet is a big place with trillions of sites and billions of real human users who do things that are sometimes unsavory. It is impossible to police them all in advance, and requiring companies to do so would undermine free expression and change the nature of the Internet. The Digital Citizens Alliance should let Bing know about this issue. But that’s clearly not the intent of the alliance. It’s not around to actually make the Internet a safer place, just to be part of a PR attack on a specific company.

Disclosure: I advise several companies, including Google, on free expression law and public policy.


The Internet as Metaphor

I am old enough to remember when we had trouble explaining the Internet. It was 1994 or 1995, and everyone was talking about America Online. When telling someone a web address, people would say, “Go to h, t, t, p, colon, slash, slash, w, w, w, dot.” People didn’t even know how to pronounce @ or explain the Internet.

To explain the Internet, we’d use metaphors and analogies. It’s like TV, kind of. It’s like a printing press for everyone. A computer billboard. Electronic mail. It’s one to one and one to many and many to many, unlike TV which is one to many and phone which is one to one.

For the past few years, the Internet has been so embedded in our lives that we even use websites as metaphors for other websites: “it’s like Uber, for food,” “it’s like Airbnb, for dogs.” Even our understanding of the brain’s networks of neurons seems to borrow heavily from the metaphor of computer networks. Essentially, the Internet is so familiar that we understand it intuitively, and we now make sense of new things by reference to the old and familiar: the Internet.

Recurring Myths about the Legal Obligations of Online Platforms

In recent months, some copyright holders, pharmaceutical companies, and state attorneys general have made allegations against Internet companies that help users find and share information. In short, they claim that because some users engage in copyright infringement, sell counterfeit products, or otherwise encourage potentially criminal activity on the Internet, the users’ Internet platforms should be held responsible for these misdeeds. That is, Google should be punished for any user’s copyright infringement on YouTube, Facebook for any user’s harassing post, and Twitter for any user’s slanderous tweet. According to the critics, these companies should screen all users’ speech and take on the role of editors or publishers, rather than being open platforms for the speech of millions.

Many of these allegations focus exclusively on the biggest company in the space, Google, even though Google already invests considerable resources in reducing infringement, counterfeiting, and unlawful activity on its platforms. One state attorney general accused Google of “a failure to stop illegal sites from selling stolen intellectual property,” as though Google has the obligation or even the ability to stamp out copyright infringement on every “site” on the Internet.

For those who follow Internet policy, these types of arguments should sound familiar, stale, and still misguided. These arguments have failed repeatedly in federal courts, Congress, and the court of public opinion. One wonders why, like zombies in a classic horror movie, these arguments just keep coming back from the dead.

As recently as 2011, some in Congress supported a now-infamous bill called SOPA, designed to target Internet intermediaries for their users’ copyright misdeeds. SOPA’s co-sponsors also targeted Google, and they served on committees focused on intellectual property—committees that often show an unbalanced attentiveness to the copyright industry’s concerns over those of average users and over important principles of free speech more generally.

To ensure digital platforms for user expression, Congress has wisely provided that speech platforms should generally not be liable for their users’ misdeeds. Congress has done so through established and widely praised laws such as Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act. Courts have construed Section 230 of the CDA “broadly in all cases arising from the publication of user-generated content.”

Nonetheless, every few years, we see attempts to undermine intermediary immunity. While many such attempts might be well-intentioned, they are deeply flawed and would threaten the Internet’s role as an engine of free expression for hundreds of millions of Americans.

In this post, I respond to the recent allegations by rights-holders and state attorneys general. These critics mistakenly accuse companies of turning a blind eye to users’ potentially illegal behavior on search engines and video platforms. They also advance legal claims that a technology platform should be liable for any abuse on any of its services, despite a lack of support for such claims in the case law (and considerable support for the opposite position). As many of these arguments are specific to Google, I reply to those arguments and explain how my responses apply more broadly to other Internet companies.


WordPress Blogging With Google Glass–in Iceland

I am going on vacation in July this year and will be traveling around Iceland.

On this trip, I will be taking photos with Google Glass and automatically posting them from Glass to this WordPress site. I am sure you are wondering how that’s possible.

For my trip, my partners at Silica Labs decided to build the first WordPress integration for Google Glass. It will let me blog from Glass straight to WordPress, sharing photos and text directly from the device to a blog. (A Google Glass user can already share directly to Facebook, Tumblr, and Twitter, thanks to apps developed by those companies.)
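To give a sense of what the WordPress side of such an integration involves, here is a minimal sketch of uploading a photo and publishing a post over WordPress’s standard XML-RPC API. This is not Silica Labs’ actual implementation (which presumably receives photos from Glass via Google’s Mirror API); the endpoint, credentials, and filenames below are placeholders.

```python
# Minimal sketch (not Silica Labs' code): publish a captioned photo to a
# WordPress blog via the standard XML-RPC endpoint (xmlrpc.php).
# Assumes the image bytes and caption have already been pulled off Glass.
import xmlrpc.client

WP_ENDPOINT = "https://example-blog.com/xmlrpc.php"  # placeholder URL
USERNAME = "blog_user"                               # placeholder
PASSWORD = "application_password"                    # placeholder
BLOG_ID = 0  # ignored on single-site WordPress installs


def post_glass_photo(image_bytes: bytes, filename: str, caption: str) -> str:
    """Upload an image, then publish a post embedding it; returns the new post ID."""
    wp = xmlrpc.client.ServerProxy(WP_ENDPOINT)

    # Step 1: upload the image to the media library (wp.uploadFile).
    media = wp.wp.uploadFile(BLOG_ID, USERNAME, PASSWORD, {
        "name": filename,
        "type": "image/jpeg",
        "bits": xmlrpc.client.Binary(image_bytes),
        "overwrite": False,
    })

    # Step 2: create and publish a post whose body embeds the uploaded image.
    post = {
        "post_title": caption or "Photo from Glass",
        "post_content": '<img src="%s" alt="%s"/>\n<p>%s</p>'
                        % (media["url"], caption, caption),
        "post_status": "publish",
    }
    return wp.wp.newPost(BLOG_ID, USERNAME, PASSWORD, post)


if __name__ == "__main__":
    with open("glass_photo.jpg", "rb") as f:  # placeholder image file
        print("Published post", post_glass_photo(f.read(), "glass_photo.jpg",
                                                 "Iceland, through Glass"))
```

On Glass itself, the app only has to hand the captured photo and any dictated caption to a call like this; the blog post then appears without touching a laptop.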

I think the Glass-WordPress integration can help people tell the stories of their travels and journeys.

I am a huge fan of WordPress and of Google Glass, so I’m glad we have integrated the two. Silica Labs has been working with media companies like National Geographic, and Glass integration with WordPress should provide a useful tool.

I think the photos from Glass will be cool. I’ve been to Iceland before and took some of the best photos of my life there. It has volcanic ash beaches, interlocking bays, mountain ranges, and geothermal pools evoking an almost alien beauty. You feel like you’re in a movie about another planet or an ancient realm. Here, look:

[Photo: Iceland’s lunar-looking landscape]

Fred Wilson: The Economic Case For Privacy

In June, venture capitalist Fred Wilson of Union Square Ventures spoke with WSJ reporter Spencer Ante about mobile technology and other topics at SourceDigital2013.

http://www.youtube.com/watch?v=ksGPK6FCuMA

At around the 3:30 mark, the conversation turns to privacy in the wake of the NSA surveillance scandal. Wilson says:

I would hope that the people who worry about business interests in our government are fighting with the people who are worried about national security interests and having a debate about this specific issue, because if Western Europeans stop using Dropbox because they’re afraid the US government is able to see everything they put in their Dropbox, they may start using a service that’s based in Iceland or Sweden or Germany or France or something like that. That’s going to be really bad for Dropbox. It wouldn’t surprise me that some of that will happen. How much of it is a bigger question, right? So many people are upset about this at some level, but the question is whether people will change their behavior.

Back in March, Marvin and I made a similar case in Politico, arguing that privacy is not only a civil liberties issue but also an economic issue. We also warned that policymakers are helping foreign competitors win customers from American companies:

Other nations exaggerate the flaws in our privacy laws to edge out American tech companies for both enterprise and personal users. Our lawmakers are doing them a favor by casting a black cloud of legal uncertainty over our own industry and refusing to fix the most obvious problems. Now is the time to act.

An article in Politico this morning suggests that the NSA scandal may be waking Congress up to this issue, perhaps strongly enough to spur action. If policymakers want to ensure American tech companies remain competitive, they’ll need to pass privacy-protecting legislation swiftly.

Code.org Continues To Inspire

Last week, I received an update from Code.org, the group behind the well-known video encouraging kids to learn to code.

According to Hadi Partovi, Code.org’s founder, the organization has done a lot of good in the four months since its launch:

Thanks to your sharing & tweeting, 3.5 million kids tried learning to code online, 12,000 schools asked our help to teach computer science, and 25,000 software engineers volunteered too!  We’ve connected thousands of schools with opportunities and helped set up hundreds of classes.

In a piece in The Atlantic from last year, Marvin described why it is so important for younger generations to learn this skill:

Thinking like a programmer is not only helpful to succeed in any technical career, it will also become integral to simply navigating our increasingly digital world. Code consists of languages that can be taught just as we already teach the “language” of math, the language of music, and the language of Spanish vocabulary and grammar. Students could decide whether or not they want to pursue greater fluency and expertise in coding (or Spanish), and (if nothing else) students would benefit from the distinct problem-solving framework of a coding mentality — which may be a more entrepreneurial mentality than memorizing the dates of famous battles in the Thirty Years War. It would help students to think critically — to analyze and solve problems.

Marvin argues that teaching programming in and out of the classroom can pay dividends far down the road, and ensure that America remains competitive in the global economy. Code.org is one of the organizations leading the way, and over 718,000 people have signed its petition, arguing that “Every student in every school should have the opportunity to learn to code.”

No doubt that number will continue to grow, as will the number of students who take a crack at developing this valuable skill set. Check out the site if you want to learn more.

Ammori and Meinrath Publish in Emory Int’l Law Review

Marvin and Sascha Meinrath, Director of the New America Foundation’s Open Technology Initiative, published an article for the Emory International Law Review’s Symposium on “International Law and the Internet: Adapting Legal Framework In Response To Online Warfare and Revolutions Fueled By Social Media.” Their piece examines the recent conflicts over Internet freedom and how new technologies can foster a more open society, if we allow them to do so.

From the piece:

Which brings us to another conundrum in the battle between the Internet Freedom Fighters and the Cold Warriors—how do we empower free society while creating technological mechanisms to police malfeasance? The same “good” surveillance technologies are used regularly for such evil purposes, yet have been so impotent in actually stopping piracy in the first place.

Like any powerful tool, technology offers both tremendous boons for adept users and dramatic new pitfalls for an unsuspecting public. But the battles over SOPA and PIPA give us hope.

You can read their article here.

True Threats, Free Speech, and Government Trust: the case of Cameron D’Ambrosio

The Center for Rights, a nonprofit civil liberties organization, is running a campaign to raise awareness of a legal case in Boston that seems to disregard basic First Amendment protections. (The Center for Rights is the 501(c)3 sister of Fight for the Future.)

As usual, the defendant in a free speech case happens to be unsympathetic, and the speech is in poor taste, but the First Amendment doesn’t let you throw unsympathetic kids in prison for poor taste.

From the facts recounted by the organization, a teenager named Cameron D’Ambrosio posted a rap song on Facebook bragging that he would be more famous than the Boston Marathon bombers and apparently criticizing the White House. As a result of those lyrics, the kid has been in jail for a month and is facing 20 years in prison for “communicating a terrorist threat.”

First Amendment Would Protect Rapping this Song Lyric

I haven’t had a chance to review all the material out there, but everything I’ve read suggests that the First Amendment protects this kind of speech and the kid shouldn’t be sitting in a jail cell.

According to Fight for the Future, the rap line was fairly harmless: “Fuck a Boston bominb wait till u see the shit I do, I’ma be famous rapping.” Some media reports apparently left off the word “rapping,” suggesting that he wanted to become “famous” through his own bombings.

If he said he’d be famous rapping, there is no way that’s a threat, even if it’s a tasteless comparison.

If the rap song suggested he’d become famous bombing stuff, I am not totally sure how that is a threat rather than the standard stuff of rap songs. I know that gun violence doesn’t trigger the same angst as bomb violence (even if there’s more gun violence), but rappers say tasteless, violent things all the time: we all know the expression “bust a cap in his ass” for a reason, and that reason is excellent rap music. We all know Eminem raps about killing an ex-wife named Kim, and has an ex-wife named Kim… Rap music is full of references to violence and slaying rival sucker MCs, usually with guns and only sometimes, apparently, with bombs.

In light of the obvious context, the legal analysis would probably begin and end with the Supreme Court case of Watts v. United States. That case stands for the principle that political speech is protected but a “true threat” is not.  In Watts, a Vietnam protestor said that if he were drafted to fight in the war, and made to carry a rifle, “the first man I want to get in my sights is L. B. J.” That was ruled protected political speech because it wasn’t a “true threat.” The analysis turns largely on the notion that a reasonable listener would not consider it a true threat of violence.

We do not believe that the kind of political hyperbole indulged in by petitioner fits within that statutory term. For we must interpret the language Congress chose “against the background of a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.” The language of the political arena, like the language used in labor disputes, is often vituperative, abusive, and inexact. We agree with petitioner that his only offense here was “a kind of very crude offensive method of stating a political opposition to the President.” Taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.

The rap song here appears to also be general political (or cultural or counter-cultural) speech and not a true threat of violence.

Government Trust At Stake

I emailed a few First Amendment scholars to see what they thought of the facts of the case. One told me she was troubled and asked how she could help. The other said, effectively, that “the government must know something we do not,” and is otherwise acting “insane,” though perhaps not for the first time. Around the same time I received the email from the second scholar (the one who assumed the government “knew something”), I read an email from a friend explaining that he had been on a grand jury, and that the jury would often indict based on the idea that the prosecution “must know” something it couldn’t tell the jury. These two bits of information reminded me of what several people told me in the run-up to the Iraq War, when there was some doubt, based on the available evidence, that we would find weapons of mass destruction. People would say, “the government must know something they’re not sharing with us.” In short: just trust the government.

I am not so sure American citizens are going to continue “just trusting” their government without evidence in light of recent history, but “government distrust” is generally considered a core principle of the First Amendment in cases such as this one.

If the prosecution has evidence of a planned bombing, beyond an apparently misinterpreted lyric, the prosecution should make that as public as possible.

Despite the alarm after the terrible bombing, this completely unrelated kid has been in jail for over a month and faces 20 years in prison, apparently for what the Supreme Court would call a very “crude” and “offensive” expression rather than anything resembling a true threat.

Note: I am on the Board of Directors of the Center for Rights.