Samir Chopra
BA in Mathematical Statistics, Delhi University
MS in Computer Science, NJ Institute of Technology
PhD in Philosophy, CUNY Graduate Center
Two decades ago, Samir Chopra was a recent college graduate working at AT&T’s Bell Labs, where he wrote computer programs for electronic voice and data switches. Then Chopra made a different kind of switch – leaving the corporate world for the CUNY Graduate Center, where he studied the philosophy of science. Now an associate professor of philosophy at Brooklyn College and the Graduate Center, Chopra is coauthor of three books, including Decoding Liberation: The Promise of Free and Open Source Software (Routledge, 2007) and A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011), which explores how the legal status of robots could evolve in the 21st century.
Why did you switch from working at Bell Labs to academia?
It was good work, but I was finding the 9-to-5 grind a bit dispiriting. I thought, “Maybe I don’t have to be stuck in this job for the rest of my life.” I had always been interested in the philosophy of science, and I decided if I was going to make a career change I might as well make it a big one. So, I chucked my job, moved to Manhattan and started at the Grad Center.
What was the significance of Decoding Liberation: The Promise of Free and Open Source Software, the book you coauthored with Scott Dexter?
We unpacked the philosophical significance of free and open-source software. It’s not just about software being free in price but free in the sense of not restricted – in how one has access to the software, how it is controlled, how it is distributed, how it could be modified. This has interesting implications for the political economy, for intellectual property and for the nature of our society in the coming century.
How has the open-source software movement affected our larger culture?
It has prompted a very broad-ranging discussion about these legal doctrines that go by the name of intellectual property. If economics is concerned with the allocation of scarce resources, why import the same old legal regimes and economic principles to regulate digital products when they aren’t scarce? Open-source software licenses have given us a very strong, important ethical message: that sharing trumps enclosure.
How do you think creators should be compensated?
People are talking about modes of direct payment to artists that don’t require intermediaries like record companies. This will require accurate tracking, micropayments perhaps, payments for tangible, live performances, movements away from private collections of music to cloud services; a whole bunch of different things will fall into place. The whole infrastructure will have to change.
Do you see any connection between the free software movement and 21st century social movements like Occupy Wall Street?
The software giants like Oracle and Microsoft have sewn up the technical and economic landscape of software with a very clever deployment of intellectual property regimes. There is a kind of 99% to 1% balance that the free and open-source software phenomenon aims to redress. If there’s a broad historical narrative of computer science, I think it would revolve around the tension between the economic significance of computing and the compulsion to play with it, to do more things with it, to fully unleash its potential, to share it with as many people as you want.
What do you see as the trajectory for robotics and artificial intelligence in the coming decades?
More and more things will become automated, and automation will become more mundane. Digital personal assistants will organize our work for us in ways that would require human thought today. As machines replicate more of our capacities, we might lose some of our sense of uniqueness. It might help us think about exactly what we believe distinguishes us from machines or animals.
What do you think distinguishes us?
The flexibility and richness of our relationships with each other and with the environment, our use of language, our rich use of symbols.
Why do you propose to recast robots as legal entities in your latest book?
When you go to Amazon.com and buy a book, you don’t interact with a human clerk, you interact with a program. But these kinds of programs are not like vending machines. They are more like quasi-autonomous or quasi-intelligent machines that are capable of making up the terms of a contract. They can arrive at conclusions like, “Oh, it turns out that you’re a 35-year-old man who lives in Kings County and I’ve noticed something about your buying patterns and can now offer you special discounts.”
Rather than being thought of as a mere tool like a hammer, these programs should be considered legal agents of the principals. They needn’t be fully independent legal persons, but once you understand them as legal agents it would resolve some doctrinal puzzles in the law.
In what ways?
Think about something like Gmail’s e-mail scanning program, which reads our e-mail and shows us advertisements based on it. Google says, “Don’t worry, people aren’t reading your e-mail. It’s only programs that are reading it.” But it’s not relevant whether humans are reading my e-mail. What matters are the abilities of the thing that is reading my e-mail. If you recognize these kinds of entities as the legal agents of Google, then the knowledge they have becomes the knowledge Google has – which is in fact the case. Recognizing this, by the way, would put Gmail in violation of the US Wiretap Act.
What are some other examples?
Google has developed a self-driving car. And while there are some things it can’t yet do, it can safely drive in traffic, get on or off a highway, park and so on. So in assessing liability for robotic vehicles, what else can we compare this to in the law? We have pets, which are in many ways autonomous, but for which we are legally responsible. Should we compare robotic vehicles more to animals, to children, to a bulldozer parked overnight by the side of the road? Such choices give different answers in fixing how much liability you have, and what kind of duty of care you have with respect to that situation.
It’s not a question of whether there is human liability, but how it is shaped. For example, if you are using a robot car, is it ever reasonable to take your eyes off the road? Defining the car as a legal agent, in given situations, helps answer such questions.
Ultimately it’s about how much control we have over them, and how much control they have over themselves.