The Singularity … and Government in the Future

The sixth Singularity Summit is this weekend. The Summit is a TED style conference of 700 scientists, engineers, businesspeople, and technologists discussing issues pertaining to the Singularity. The Singularity is that point in time when computer intelligence exceeds human intelligence. The concept was set out in a 1993 article by Vernor Vinge, and popularized by Ray Kurzweil and others. In the words of Time Magazine, the Singularity isn’t science fiction: “no more than a weather forecast is science fiction. … it’s a serious hypothesis about the future of life on Earth.”

The Summit this weekend promises to be amazing. The list of speakers ranges from Peter Thiel and Ken Jennings to Stephen Wolfram. Several speakers will discuss “Watson,” the computer that took on Jeopardy! champ Ken Jennings and defeated him.

Watson also defeated every member of Congress it faced other than New Jersey Democrat Rush Holt.

And the good Congressman brings me to the point of this post: what happens to government after the singularity, when computers can beat even Rush Holt?

By definition, the singularity means that machines would be smarter than us and, in their wisdom, could innovate new technologies. Those innovations would come so quickly, and at an ever-accelerating pace, that they would make Moore’s Law seem as antiquated as Hammurabi’s Code.

In the face of all this innovation, we can ask: should government have innovation policies after the singularity?

The US does a lot today to promote innovation. In theory, antitrust law takes innovation into account, as do telecom and Internet policy, energy policy, and so on. The government has small business benefits and programs to promote entrepreneurship. The tax code provides deductions for research, and the government funds basic science research in health and technology through universities, direct grants, and federal entities from DARPA to In-Q-Tel.

But today’s innovators, subject to such regulations and subsidies, are mere mortals. Even with private-sector mortals, there is a question whether government officials, and the institutions in which they must operate, are competent to judge and encourage private-sector innovation rather than stifle it.

That question becomes more acute when mere mortals try to regulate and encourage innovation engineered not by other mortals but by superior computer intellects. What laws could a John Kerry Jr., future head of the Internet subcommittee, propose in the post-Singularity world? He would need one of two things to legislate: a supercomputer lobbyist helping his staff draft sound legislation or a supercomputer staffer working for him.

Both are problematic; both could outsmart the good Senator and advance their own interests over those of the Senator or his constituents. The supercomputer lobbyist would do so for obvious reasons–to advance its own client’s interests. The supercomputer staffer has other reasons, but similar ones. Many staffers today are looking out for their next job while staffing Congress, and that next job is as a lobbyist, so staffers may curry favor with existing lobbyists. There’s no reason to think supercomputer staffers would be any different in their motivations (they’d just be smarter about hiding it). All the public choice literature and theory on regulatory capture points in this direction–and will point even more strongly once supercomputers devise even stronger economic models supporting them.

At this point, we could simply abandon innovation policy or try another plan. Abandoning innovation policy is dangerous: what if one supercomputer group obtains a monopoly and engages in anticompetitive behavior to keep that monopoly? That would slow down innovation. What if some supercomputers start using the patent system to stifle the innovations of other supercomputers, refusing to license broad patents granted by simpleton humans and suing everyone with similar technology? Clearly that would stifle innovation, and then some patent reform would be necessary as a matter of innovation policy.

So, rather than abandoning an innovation policy, we could adopt another plan. We could simply put the supercomputers in charge of the government too.

The Singularity Government may (not) be a brilliant idea. The Singularity Summit definitely is. It couldn’t come soon enough.

One thought on “The Singularity … and Government in the Future”

  1. PeterKinnon says:

    There is good reason to suppose that supercomputers and/or direct human agency will not of themselves produce the next generation of “intelligent” life.

    There is good evidence to support a model that views the development of technology as an autonomous evolutionary process and the transition to this next phase (corresponding to the inappropriately labeled “singularity”) to be derived from what we at present know as the Internet.

    By a process of self-assembly which we observe to be already well under way.

    For an informal expansion on this evolutionary model see “The Goldilocks Effect: What Has Serendipity Ever Done For Us?” (free download in e-book formats from the “Unusual Perspectives” website)
