    Benjamin Franklin Institute
    Thursday, April 30
    Opinions

    Opinion | Why Are We Still Driving?

By Team_Benjamin Franklin Institute · April 30, 2026 · 45 Mins Read


How soon are self-driving cars coming to your city? “Self-driving vehicles are here in Minneapolis.” “Phoenix and San Francisco. Los Angeles and Miami.” Will the world be safer with fewer people behind the wheel? “I got a flight to catch.” “Why is this thing going in a circle?” “I’m getting dizzy.” “A Tesla, believed to be on autopilot, started braking, causing an eight-car pileup.” Or will it be less human and less free? “Never get fatigued.” “Never get bored.” Is driving the next great culture-war battlefield? “I don’t want a human anymore.” “I want a Waymo.” “The cars should be driven by a person.” “Like it or not, they’re here.”

My guest this week is a transportation expert who thinks that self-driving cars absolutely are the future. But the choices we make right now will determine whether that future is a driving utopia or a traffic nightmare. Andrew Miller, welcome to Interesting Times.

Thank you. It’s a pleasure to be here.

I want you to start by giving me a sales pitch for self-driving cars. Explain why people might welcome them. What would be good about a self-driving future?

We can approach this from the micro or the macro level. At the macro level, 40,000 Americans die every year in road incidents, and that counts only those who die; it excludes those who suffer life-altering injuries. None of those need to happen, and the vast majority are caused by driver error. So at scale, the more automated driving there is, the safer the roads are, the safer Americans are, the safer anyone who uses the roads is. At the micro level, beyond safety, driving is an immense consumer of people’s attention. They have to give, or they should give, their full attention to the road.

In theory, yes. That’s the goal. That’s the ambition.

If they don’t, we get more of those road incidents I was describing. But what automation allows you to do is unlock vast reservoirs of attention.
Hundreds of millions of hours every year that Americans would get back for other things. And as a good liberal, I don’t prescribe a vision of the good life, whether they want to play Candy Crush or read The New York Times. There’s any number of things they could do but can’t right now, because they must pay attention to the road. It will be a huge liberation of time and attention, which can lead to so many good things.

When would you expect, on the current trajectory, self-driving cars, automated driving, to become a normal part of life in lots and lots of North American cities?

It’s not so much a joke as a wry observation: around this time last year, I could name every city that Waymo was operating in from memory, because there were so few. Sometime late last summer, that stopped being true. I believe they have announced plans to be in more than 15 cities. Their footprint in each of those cities is small, but they’re going to grow quickly. So it really depends on how fast Waymo can scale, and how fast their two big competitors, Zoox and Tesla, can scale. I’m always wary of making predictions, because this field is so rife with hucksters and charlatans who make predictions. But if I had to pick one...

It’s an occupational hazard of podcasting, though, so give me a general prediction.

Ten years is a good anchoring point: 2035.

2035. Then the normal North American city will have a large fleet of self-driving taxis? Most likely, they’ll be mostly taxis in this scenario?

Yes.

O.K. Why is this accelerating and taking off now? We’ve been hearing about self-driving cars for as long as I’ve been an adult. Is it connected directly to the A.I. revolution? What’s the big push at this moment?

It’s partially connected to the A.I. revolution, which is making some of the problems associated with iterating the technology easier to solve.
But Google has been working on this since the first decade of the century. And the reason Google and others have been working on it, the reason Elon Musk thinks self-driving is the future, is that, rather like generative A.I., teaching a car how to drive is very expensive initially, but once you know how to do it, it is very, very cheap to copy. And because a robotaxi is a shared vehicle rather than a privately owned one, it can be used as many hours a day as you can keep it clean and charged. So it can just spit out money for you endlessly: every hour, every day, every week. From a business point of view, it’s a wonderful business to be in, if you can spend enough money to get to the point where you have a safe and reliable product.

How much of an obstacle is serious bad weather to this kind of technology right now?

One way to look at it is that if humans can drive in bad weather, a machine can. How they do it depends on which technology stack you are thinking of. The Waymo approach relies on the consensus of the field that for a self-driving car to “know” where it is, it has to rely on a variety of senses. You, Ross, can see, but you can also smell; you can also taste. The Waymo view is that a self-driving car should be able to see with its cameras, see with its radar, and see with its lidar. Lidar: think of it like radar, but with light. It shoots out lasers and measures how long each pulse takes to come back, so it can know with great fidelity where everything is in space around the vehicle, out to tens of meters. So if you have a car with all of these modes, then rain might occlude one sensor and snow might confuse the lidar, but the radar still works.
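The lidar description above, measuring how long a laser pulse takes to return, boils down to simple time-of-flight arithmetic. Here is a minimal sketch with illustrative numbers, not any vendor’s actual specifications:

```python
# Time-of-flight distance: a lidar unit fires a laser pulse and times the
# reflection. The light travels out and back, so the distance to the object
# is half the round trip multiplied by the speed of light.

C = 299_792_458.0  # speed of light, meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance in meters to whatever reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after about 200 nanoseconds implies an object
# roughly 30 meters away.
d = distance_from_round_trip(200e-9)
print(f"{d:.1f} m")
```

The key point is the divide-by-two: only half the measured time corresponds to the outbound trip, which is why even nanosecond-scale timing yields meter-scale precision.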
The more sensing modes you have, the more expensive your car is and the harder it is to scale up your operations, because every car costs so much; but the more reliable it is in a variety of conditions. Tesla is making a big bet that you don’t need any of that. Tesla thinks you can do it all with cameras, and if they’re right, that gives them a huge advantage, because cameras are very, very cheap. So once Tesla starts rolling out its Cybercab, it will be able to produce vehicles in vast numbers and reach scale very quickly. But it’s not clear that approach is as safe, because it doesn’t have the same sensors, and it’s not clear that they have the same skill of programming behind them that Waymo does. So it’s very much an open contest as to which of these two is going to win.

So the limiting factor on Tesla right now is potentially safety, and the limiting factor on Waymo is cost. And then the presumption is that, essentially in the same way Uber lost tons and tons of money for an extended period but that was O.K. because everyone assumed they would make money eventually, this is the same kind of arc, right?

Yeah. It took Waymo a big investment to get this far, but they are so far ahead, and they’ve got such a great record, that they’re going to be very difficult to catch. So I wish Tesla all the best in this contest. I think they’re going to need it.

So you’ve got the mid-2030s as a zone where it’s as normal to hail a self-driving car in an American city as it is to hail an Uber right now. At what point does this become part of people’s transportation reality outside cities, whether as a kind of suburban phenomenon, the way Uber is right now? Or is there a self-driving future in the near term for rural America?

The rural case is easy to answer: no, just as Uber isn’t a big thing in rural America now. My take is that the American suburb is actually a good bet for robotaxis.
If you can get robotaxis cheap enough, there’s enough demand in the suburbs to make it work, particularly because of the way we’ve designed North American suburbs since Levittown: it is really hard to retrofit them for public transit, whereas it is entirely possible that the suburbs get robotaxis. Perhaps your local suburb pays a stipend to a robotaxi company to offset the cost of doing business there, and that makes the economics profitable. So I can absolutely see this working in the American suburbs, but it may require us to put aside 20th-century ideas of what a public transit agency is.

And in that scenario, people in the suburbs are using them for commuting? Is there carpooling? What does the culture of self-driving car use look like in that scenario?

Now you get into an interesting question, because there are two schools of thought: the transport-planning professionals’ school, and then everybody else’s, the average American’s school. The transport-planning professional says: look, roads are fixed, finite space. Only so many cars can fit on them. This is an asset we have to use efficiently, so we should have shared vehicles. Just as we get 20 people on a bus, we should have multiple people in every robotaxi or shuttle bus. You’ll get more use out of that road, and everyone will have more efficient trips. And then the average American says: go pound sand. I like being alone. I like my privacy. I don’t want to share my space with strangers. I’m going to be in a robotaxi alone, and if you won’t let me do that, I’ll just buy my own car and it can drive me around. So the question is how we thread the needle between the planners’ vision of efficient use and that overwhelming revealed preference.

Again, in this extremely hypothetical and contingent timeline, when is it normal for people to have their own self-driving car available for purchase?
One that’s not part of a taxi fleet. You’re just like: I’m going to buy a car, and of course it’s going to be a self-driving car, because why wouldn’t I want that capacity?

The trick there is liability. You can imagine a world where Tesla goes all in on complete self-driving, but the conventional automakers, your VWs and your Fords and particularly your GMs, would love a future where, every year, the driving assist gets more and more sophisticated. The steering wheel never goes away, but the car can handle more and more of your daily driving, until, in 10 or 12 years, if we solve the liability issue, it can be doing your driving almost all the time. There’s no reason a privately owned vehicle, if you’re willing to pay for it, can’t have all of these sensor systems. And if Waymo leads the charge and makes lidar rigs incredibly cheap, everyone’s going to pile on that.

What level of self-driving is available in Teslas right now? I drive a Tesla personally. You hear a lot about these levels: Level 3, Level 4, Level 5.

I think that language is misleading. All you need to understand about a self-driving system is: does it require a human to be actively monitoring the situation, or does it not, so you can get in the back seat and it just goes?

If I turn on Autopilot in my privately owned Tesla, I need to keep my foot ready on the brake, my hands on the wheel and my eyes on the road at all times. The car can handle most situations, but some it can’t, and it’s my responsibility to intervene in those cases.

Still, a Tesla at its most sophisticated level can do a lot. You can plug in your destination and it will take you there on the road. It’ll go at the speed limit, or more than the speed limit if you tell it to. It’ll keep you in the center of the lane, it’ll make turns, it’ll stop, it’ll even change lanes for you.
And what are you doing while you keep your hands and feet active through all of this? Are you just hovering over the brake and the steering wheel until a large bison stampedes across the road?

Exactly. You don’t have to do anything, but you have to, as they said on The Simpsons once, maintain yourself in a state of catlike readiness in case something happens. There was a time I was using Autopilot while traveling in a part of my town I didn’t know very well, and it wanted to take me down a private road that was sealed off by a chain hung between two posts. It took me at it at full speed. I was curious, so I was willing to wait and see how close it would get. I braked before it did. I had to slam on the brakes before we hit the chain, and it was a near-run thing.

So we don’t know, basically, how good Tesla’s self-driving is going to be. You can’t generalize from what the cars can do right now. We are essentially waiting to see what their emergent taxi fleet looks like.

Well, they are operating in Austin, and have been for more than half a year now, so we have some safety data. How you feel about what Tesla has been reporting will depend on what standards you hold it to. Most of the time, it works just fine. But Waymos have no safety operators in them; there is no human in the vehicle controlling it. Tesla does, in Austin, and those safety drivers have to intervene an awful lot. So far, Tesla’s safety record is not nearly what Waymo’s was at this stage of its journey. It’s always tough in the early days. Will they be able to get better? I hope so, but they’ve got to do it quickly.

How autonomous are the cars, really? You already mentioned that Tesla has these interventions, and that you assess a car’s safety or reliability partly by how often a human sitting in it has to intervene.
Waymo doesn’t have humans sitting in them. But there are still interventions for Waymos?

There are.

What does that look like?

We learned about this because Waymo was called to the Senate to testify. “Are all of these human operators located in the United States? Are they all here?” “No, we have some in the U.S. and some abroad.” “So how does that break down? What percent are abroad?” “Senator, I don’t have that number for you. We can get back to you.” “Is it a majority abroad?” “I just don’t have that number.” “Well, that’s very curious, that someone who’s running the program has no idea.” So we got an inside look at this.

Waymo says that what they have is remote assistance. What that means is that it is not like someone playing a video game, with a fake steering wheel in front of them, who jacks into the car, drives it, then jacks out while the car’s computer takes over. It’s more like laying digital breadcrumbs. The car isn’t sure what to do. It encounters a situation that confuses it, because, say, there’s a bunch of traffic cones and a few of them are knocked over, and that’s sufficiently unusual that the car is uncertain. So it calls a human remote assistant, who looks at the scene and says: oh, it’s safe to proceed, just don’t knock over that cone. Or the assistant may even go so far as to say: I can see on your map, go to point A, then point B, then point C, and at point C you will no longer be confused. That’s what they call remote assistance.

So is that driving? People have differences of opinion on this. I say it’s not. I say the remote assistance is what it says it is: a human providing additional input to the computer to inform its decisions. But yes, there are cases where the computer cannot figure it out on its own, and it does need help.

And the human in that situation, just to make the case that this is something more like driving, has the capacity to direct the car.

Yes, it’s giving an instruction to the computer.
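The “digital breadcrumbs” flow described above can be sketched as a toy exchange between car and assistant. Every name and structure here is hypothetical, invented purely for illustration; Waymo’s actual remote-assistance interface is not public:

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    x: float
    y: float

@dataclass
class Car:
    """Toy model of the assistance flow: the car never hands over direct
    control. It asks for hints, then keeps making its own driving decisions."""
    confused: bool = False
    breadcrumbs: list = field(default_factory=list)

    def encounter(self, scene_is_unusual: bool) -> None:
        # e.g. knocked-over traffic cones the planner can't classify
        self.confused = scene_is_unusual

    def request_assistance(self, operator: "RemoteAssistant") -> None:
        if self.confused:
            self.breadcrumbs = operator.suggest_route(self)
            self.confused = False  # hints received; resume autonomous driving

class RemoteAssistant:
    def suggest_route(self, car: Car) -> list:
        # A human looks at the scene and lays breadcrumbs: go to A, then B,
        # then C. Beyond C the car is expected to be un-confused.
        return [Waypoint(0, 10), Waypoint(5, 20), Waypoint(5, 40)]

car = Car()
car.encounter(scene_is_unusual=True)
car.request_assistance(RemoteAssistant())
print(len(car.breadcrumbs))
```

The design point the sketch captures is that the assistant returns advisory waypoints, not steering or throttle commands; the driving loop stays in the car.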
What is the passenger’s capacity to affect what the self-driving car does? Once you’ve bought your fare and it’s taking you to Fisherman’s Wharf or something, and you think it’s doing something wrong, is there anything you, as the passenger, can do? Can you stop the car?

What you can do is press a button and speak, not to one of those remote operators, but to a concierge, if I can use that term, and explain the situation: that there’s an emergency, or something of concern. And then the remote operator is able to send messages to the car. The typical thing we want a self-driving car to do, if it’s genuinely uncertain or there’s a problem, is to reach a safe position, which normally means pull over to the side of the road, come to a full and complete stop, and wait for further directions. There are situations where you can imagine that would be a bad thing.

If there’s an earthquake.

Yep. But under normal circumstances, that’s what it does.

So you’ve got limited ability: you can’t override, but you can talk to a human who has some capacity to override. But presumably the human-owned self-driving car of 2035 would be sold with, essentially, a human override. It would be unlikely that people would buy self-driving cars that didn’t promise you could take control of the thing.

You would think so. I would assume so; I’m just trying to envision how this plays out. But Mr. Musk has said there is an absolute market for a car that is entirely self-driving and has no human interface. So is he right? If what he says comes to pass, we’ll be able to test your hypothesis within months.

Interesting. O.K., that definitely cuts against my own intuitions. Let’s talk about liability, which you’ve already mentioned as a bigger issue than cost in terms of making personal sales commercially viable. Would you say that?
I would say it is the single issue most in need of clarity, because if we don’t solve it, it’s going to hold back this sector.

So why is it such a hard issue, if, as you suggested at the outset of the conversation, these cars will be so much safer?

From my point of view, it shouldn’t be. We should take manufacturers at their word and say to them, in classic American fashion: put up or shut up. If you think this is so safe, you assume 100 percent of the liability. If there is an incident while what we call the A.D.S., the automated driving system, is in control, and it is later shown that the A.D.S. is at fault, then you’ve got to take on the liability. I think that is a clear, bright line. It’s very easy to argue for, it would be easy to implement, and if we had it, we would be able to move forward very clearly. The problem is there is reluctance among the carmakers to live up to that standard, and that’s a problem.

What is Waymo’s liability right now? If you get hit by a Waymo taxi in L.A., who is liable?

Waymo is.

So they’ve accepted it for their current fleet.

Waymo has. Tesla, I think to their discredit, has suggested that they might not. Certainly with regard to their driver-assist systems, they’ve been reluctant to assert that responsibility, because I think the potential for lawsuits is so vast. They are trying to protect themselves. What I think regulators need to do is say: you need to have the courage of your convictions, so we’re going to hold you to that standard. We’re going to insist upon it.

But this is a pretty radically different setup from the entire liability setup we have right now.

Yeah, liability is tricky. The American liability system is based on the idea that no consumer can hope to stand up to a big company, so we put all of the weight in legal proceedings on the customer’s side.
And that’s led to a jurisprudential culture, if I can use that word, where the cost of getting anything wrong, from a manufacturer’s side, is vast. It’s existentially vast. I told you earlier that there were three big companies in this space: Waymo, Zoox and Tesla. There used to be a fourth, called Cruise, an arm of General Motors. Cruise was involved in an accident a few years ago where a human driver hit a jaywalker and threw them into the path of a Cruise vehicle, which ran them over. And then the Cruise vehicle, because it didn’t know what to do, moved to the safe position: it pulled over to stop, dragging that poor unfortunate soul with it. They weren’t killed, but they were severely injured, and their injury was much worse because of the car.

Because the car did the extra thing.

Yeah.

A human driver would never have made that mistake. A human driver might have hit the person, but wouldn’t have dragged them.

Yeah. A responsible human driver, I think, would absolutely still have hit them, but would have known there was a human under the car and would have stayed put. The car didn’t have a sensor underneath, and by dragging that person it exacerbated their injuries. That incident ended up killing the company. It was not just the lawsuit; they were also a bit squirrelly with the regulators, who removed their license to operate. And General Motors said, we can’t fund this anymore, so it all got shut down. One incident. So I understand why the firms are being very gun-shy about assuming liability here, but we need to insist upon it.

But does that mean you essentially have to achieve not just a higher level of safety than a human driver, but some extraordinarily higher level, because you will be liable in a way that a normal auto manufacturer wouldn’t be?
Because this is a new technology, regulators are absolutely holding a self-driving car to a much higher standard than a human-operated car. Some people find that obnoxious: as soon as it’s better than the average driver, let it rip, because you’d be saving lives on net. That’s not how lawmakers think. They don’t think about how to get the best outcomes on net; they want a situation where no one can be blamed. So they insist that it’s got to be as safe as reasonably possible, what an engineer calls six nines: 99.9999 percent. I don’t think that’s an unreasonable standard. Sure, it’s going to slow down reaching scale with these things, but there is so much distrust of big tech and of self-driving cars generally that the strategy of going slow, being safe, and showing that you’re not harmful and not cavalier is so important, if we’re going to get the good outcomes I think this technology can give us.

So in practice, how many people could a self-driving fleet kill and remain viable, would you say? Is it like one?

Well, it’s important to note that there was one Cruise incident, and that was a severe injury, not a death.

But there are very few self-driving cars on the road. I mean, they’re in many cities, they’re coming, et cetera, but we’re not talking about millions of cars, or hundreds of thousands. We’re talking about a small number.

Yeah, but what we have, courtesy of the state of California, and I hope the federal government adopts it, as it’s being encouraged to, is a set of very strong transparency requirements. So we know about every incident Waymo has been involved in. We’ve combed through them, and we know that Waymo is safer than human drivers already.

You could argue the denominator isn’t there, compared to the trillions of miles that humans drive in the United States every year, versus the relatively small fleet.
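The “six nines” figure mentioned above is just arithmetic on a reliability percentage. A minimal sketch, using a hypothetical fleet size to make the rate concrete:

```python
# "Six nines" is engineering shorthand for 99.9999 percent reliability,
# i.e. at most one failure per million opportunities. The fleet numbers
# below are hypothetical, purely to illustrate the arithmetic.
reliability = 0.999999
failure_rate = 1.0 - reliability                  # one in a million

failures_per_million = failure_rate * 1_000_000   # ~1

# A hypothetical fleet doing 100,000 trips a day would expect roughly
# 0.1 failures per day, i.e. about one every ten days.
trips_per_day = 100_000
expected_failures_per_day = failure_rate * trips_per_day

print(failures_per_million, expected_failures_per_day)
```

The point of working it out is that even a one-in-a-million standard still implies regular failures once a fleet is operating at real scale, which is why the denominator argument in the interview matters.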
So we can’t know?

But look at where that data is coming from. San Francisco is not an easy city to drive in; it is a complex environment. If Waymo is achieving safety there, I find it hard to believe it would find Topeka a much more difficult place to work.

I want to stay with the weirdness factor for a minute, because I think that’s an important hurdle here for people. Again, in the example you gave of the Cruise disaster, it was the car doing a weird, inhuman thing after it hit someone. And there have been other examples, where Teslas in Autopilot mode were involved in similar accidents in Florida: they collided with the sides of white tractor-trailers crossing highways because their cameras, as I understand it, just couldn’t see the white against the sky. Again, it’s not the kind of accident human beings are used to getting into. And I wonder, isn’t that part of the hurdle people will have to get over to accept these cars? It’s not just the number of accidents; it’s that when they do happen, they will feel weirder and more random than just a guy running a red light and hitting someone.

Well, I was writing about this in my newsletter. Waymo had an incident a few months back where one of its cars killed a bodega cat in San Francisco. You’re right. Would a human have made that mistake? I’m not sure. But every time one of these vehicles makes a mistake, we notice it. And because it’s an inhuman thing, where we’re used to only human activity, it does weird us out. It does make us nervous. Regulators, I think, are responding to that. And to Waymo’s credit and Zoox’s credit, they’re moving slowly and carefully to avoid sparking the concern that we’ve unleashed unaccountable robots on our streets. They don’t want us to think about it that way.

And there was a case in Santa Monica where a child was hit, not killed.
And in that case, I think Waymo said a human driver would have been much more likely to hit her at a higher speed, right? The Waymo car successfully braked in a way a human driver wouldn’t have. I’m not sure it applies to this case, but you could imagine a scenario where a Waymo enters a crowded area and drives faster than a normal human would, because it isn’t picking up on weirder things going on in that area. Maybe there’s a fire in a building and everyone is slowing down to rubberneck, and the Waymo doesn’t see it, but then it successfully slams on the brakes. It’s a different way of seeing the road.

The thing to say about that is that, just like other kinds of sophisticated A.I. systems, what it needs is data. I can only speculate that the Santa Monica incident happened because the car was insufficiently aware that at this particular time of day, near a school, it should be behaving even more cautiously than normal. Well, it knows that now. So we’ll have fewer incidents like this. Every month that passes, the data sets of all these companies get richer, and these sorts of incidents should get fewer, which is another reason I approve of the strategy of going slow and being humble and being safe. That’s how we win. That’s how we thread this needle.

Is there a self-driving car equivalent of a ChatGPT hallucination? Are there scenarios where the car just does something and you don’t know why it did it?

Oh, absolutely. You can find videos on YouTube, if you’ve got the stomach for it, of Teslas (they’ve got the most sophisticated driver-assist systems) just moving along in the lane, then doing a hard left through opposing traffic, right off the road, and you struggle in vain to know what could possibly have encouraged that. So it does happen, just like hallucinations with ChatGPT. They’re getting better all the time, but it’s not perfect.
So again, if I were a regulator, I would say: given this scenario, if you’re going to operate in public spaces, you had certainly better stand 100 percent behind your vehicles, because anything less would be irresponsible.

And what are the political obstacles to universal Waymo?

It’s interesting, because it scrambles traditional Democrat-Republican, right-left lines. On one side, you’ve got labor interests, and you’ve got Democratic lawmakers who are sensitive to labor concerns, wanting to go slow. You’ve also got Democratic lawmakers who are sensitive to the plight of the most vulnerable, and they identify Uber drivers as one of those classes worthy of protection. On the other hand, you also have people who are concerned about spying. The nature of a modern vehicle, and certainly a self-driving car, as we’ve already discussed, is that it’s got sensors going all the time. It’s collecting data on everywhere it goes, all the time. Who has access to that data? Certainly the operators of the vehicles, the Waymos or the Teslas or the Zooxes of this world, do. And that means a sufficiently motivated bad actor could get it as well. General Motors, just with conventional vehicles, was selling the data of everyone driving a GM car to third parties, arguing: well, we collected this data, it’s ours now, we can sell it. With Waymo or another self-driving car, the data is so much richer. There’s so much more potential for data capture. And so civil libertarians and people with national security concerns have, well, they’ve got questions.

And in terms of security, how much do fears of terrorism, for instance, figure in? Someone who used superintelligent A.I. to hack into Waymo’s system would presumably have the capacity to take over hundreds or thousands of cars at once. That’s just one of the scenarios people are reasonably afraid of.

In that scenario, yeah, certainly the advent of L.L.M.s means we’ve unleashed super-hacking.
There are two points to make, though. One: you couldn’t control every car at once; you’d have to hack into every one. And, as previously mentioned, the car is driving itself, so you’d need to find a very sophisticated way to confuse the car about its environment. I’m no technical expert; I think it could be done, but I think it would be really hard to do. Which brings me to the second point: in the language of security, Waymo is a hard target. They’ve got all this cybersecurity behind them. If I were a bad actor, America’s power grids, America’s utilities, there are so many softer targets out there where you can do more havoc with less effort. I’m not going to say more.

That’s true, and we don’t want to sketch out terrorist plans on this show. But I do think there’s a connection to these psychological elements I’m interested in, where the idea of having the automobile you’re in be taken over, because it’s unfamiliar and novel and tied to personal privacy and personal control, just seems like a more terrorizing act than a blackout. And people have lived through blackouts before. The opening of the new Naked Gun movie features a murder committed with a self-driving car as the weapon.

There’s a long history of this in our popular culture. This is an obvious place for our fears to go. So you’re on to something: this is weird and strange, but in a way that triggers us to be afraid.

So then how does the sale happen? When we started this conversation, you made a very strong case that there are these huge benefits in terms of much, much safer roads.

Yeah.

But that accumulates slowly and in patchwork, and you don’t have the data for a long time. Most people don’t get into car accidents as a regular thing. As many car accidents as there are in the U.S., it’s a rare thing; most people go through a year, or five years, without getting into one.
So how do you, as an advocate for this technology, or some version of this technology, see it getting over the hump of different forms of public resistance? Well, in the first season of Mad Men, there's an elevator operator who takes Don Draper up from the lobby to the Sterling Cooper offices. By the end of the series, there's no elevator operator, because elevator operators were on their way out in the mid-60s. I am sure that the first time someone rode in an automatic elevator, where they just pressed a button and it whisked them to their floor without a human there to intervene, it felt strange. But I imagine the fifth time it happened, it didn't feel strange at all. That's certainly everyone's reported experience with Waymos and similar self-driving cars. The first time you do it, it's either eerie or magical. The second time you do it, you don't notice. You pull out your phone and you're doing whatever it is you're doing on that, and it's just like someone is driving; you pay no more attention to it than you pay to your Uber. So one of the advantages of Waymo introducing very small fleets, but into many cities, is to inoculate us against this idea that it is strange. The more people who get to ride even once, the more the spell will be broken, and we'll see: oh, of course, driving is something a machine should be good at. Why shouldn't I have a machine do it? And that's a world, as you've alluded to, which will be safer, but it requires us to be comfortable with it. So I hope that everyone listening to this podcast, the next time they are traveling for business or pleasure in a city where Waymo or Zoox or Tesla is operating, tries it out. And I think they will see that this is, like they say about other A.I., just another technology, a normal, boring technology. Normal and boring. Then go forward from that. 
Give me the good timeline, because you're an optimist about this tech, but you have a couple of different scenarios for the future, one of which is better than the other. So give me the good scenario for 2035 and beyond: the way this technology gets adopted and how the world changes. The good scenario would be that Waymo and Zoox and Tesla, despite their different approaches, have all reached scale. There's healthy competition in the robotaxi market in every major metro. Everyone is using them. It's 40 to 50 percent less costly, which means that you travel more, or you've got more discretionary income to spend on other things. People are giving up their cars. Every household that used to own two cars in an urban environment now owns one. Every household that owned one car now owns none. They use robotaxis to fill the space of one of those cars. Consequently, we've got less need for parking. All the parking infrastructure and parking space can be returned to other uses, higher and better uses than just vehicle storage. And people are safer. Fewer people are dying in road incidents. And they get a certain number of hours back every week that they can put to whatever purposes they want. So they are richer, but they're also freer, in the sense that they can more fully exercise those different parts of themselves. And there's less pollution, or lower energy costs. We haven't talked about energy and climate change much, but that's part of the story too, right? Every automated vehicle in development that I'm aware of is electric. So to the extent that you want to see a transition away from internal-combustion-engine cars, which I do, then that's a better world too. Yes, there's going to be more demand for electricity, but it seems that that's going to happen because of A.I. no matter what happens in this sector, so we'll have to solve that problem anyway. And in your good scenario, people own fewer cars. Everything is more efficient. 
People are more accustomed, maybe, to sharing cars and so on. So there might even be less electricity used. It could be. I think the Jevons paradox suggests that we'll just use more of it. We'll just use more. Yes, that's true. If the car is cheaper, we'll use more of it. O.K., well, that's a good bridge to: What's the bad scenario? Again, one where self-driving cars spread and become ubiquitous, but the outcome isn't as happy for society. Congestion is much worse. Trip times get longer. If you're sitting there playing Candy Crush, maybe you don't notice, but pity the poor soul who doesn't have access to this and has to drive. And their driving gets worse all the time. It's easy to imagine a world where we have enough Waymos to really increase congestion, but not enough to really put a dent in private car ownership. So it isn't rational on the margin to get rid of a lot of parking. We have more congestion, but we don't get to reclaim space. But worse than that, public transit goes into a death spiral, because in a world where robotaxis make ride-hail half the cost that it is now, you get so many people defecting to robotaxis that public transit gets worse, at the same time that it costs more money to operate, and more and more cities can't afford it. So they pull back, leading to greater defection to robotaxis. So people who cannot afford even cheaper robotaxi fares now have a worse transit experience, or no transit experience, so they experience less mobility. That's a bad world. In many ways, it's worse than the one we live in now. So what is the fundamental place where the fork happens? I would say there are two inflection points, and they're related to one another. The good scenario depends on Waymo being available quickly and cheaply to everyone. If there's a hard cap on the number of Waymos, you don't get there. So regulators need to be willing to say no. 
A future where every other car is a robotaxi is a good thing, and they shouldn't try to prevent that outcome. And I say the two are related because the other side of it is: What do public transit agencies do? Do they see robotaxis as the enemy that has to be kept out, or do they go with what might be called the soft embrace and say, we're going to bring these in? We don't run long feeder buses anymore that come twice an hour and take 35 minutes to get to the nearest hub. Instead, we replace that: we own some robotaxis, or we license some robotaxis, and anyone can get a robotaxi trip that takes them to or from the nearest higher-order station. So we begin to bring automated driving into our transit. Our buses would be robo-buses, right? Yeah. That's a really hard row to hoe, because public transit agencies are some of the most unionized environments in this country. They're going to see this as a threat to their livelihoods, which it is. So what I hope we can do, then, is not just throw them out en masse. I'm a transit advocate. I want there to be good transit systems, but I also want transit to have the benefit of the best technology available. If that means doing a big one-time buyout package, we should do that. We should take that deal. But it might be a hard sell in an era of limited budgets. I don't know. I think there's going to be so much money to be made on the robotaxi side that there's got to be some deal that can be made to make some of the people who are going to lose out whole. So those obstacles to the better future that you've just sketched are kind of left-coded. They are obstacles associated with regulatory environments in big cities, with how mass transit works, things like that. I'm also interested, though, in obstacles to your happy future that might be right-coded. 
And above all, the willingness of people in a country like the United States to actually own substantially fewer cars, because it seems like your good future depends on that, too. It's not just that people are willing to take robotaxis, Waymos and so on. It's also that as they get willing to do that, they just decide they don't need to have their own car available. And that does, I think, pretty clearly cut against cultural and behavioral norms in a place like the United States. Now, we've seen in urban spaces, because owning a car in a place like Manhattan is such a pain in the neck, more and more younger people choosing to forgo a car. They're not even getting driver's licenses. There are always going to be people who want to own their own car. I think young parents will always want their own car to move their kids around. Workers who need to carry tools to the job are going to want their own vehicle to do that. The objective is not a world where no one owns a car. It's just one where you don't need to own as many as you do now. How is it sustainable, though, to have that kind of persistent private car ownership if self-driving is so much safer than regular driving? We talked earlier about the challenge of liability and how figuring out liability is how you figure this out. But isn't there a certain point where that issue flips, and everyone looks around and says, my God, a Waymo is 1,000 times safer than Ross Douthat behind the wheel of a Toyota Sienna, terror of greater New Haven? And therefore my insurance premiums for owning a Toyota Sienna that I need to fill with gear for my oversized family go up and up and up, and effectively non-self-driving starts getting priced out. Isn't that a plausible corollary of your optimistic vision for a self-driving future? I think it is a plausible corollary. 
I don't think it happens in the near or even the medium term, but this century, assuming we don't have some catastrophe, could that happen? Absolutely it could. But I think it would be very gradual, because, Tesla's ambitions aside, I think private cars are going to have steering wheels for decades to come. They're just going to have sophisticated driver-assist systems, or even self-driving, but only on the highway. I think what will happen is that you will be expected to use such systems when you can. And if you choose not to and you get in an accident, your insurance might say, well, our policy says that you have to rely on these systems in situations where it's appropriate. So it's not going to go away overnight. It'll be incremental. And I still think that's to the good, as those systems get better and better. Once it reaches the point where it can drive better than us in all scenarios, why wouldn't we want that? Let's talk about that. Do you like to drive? I cannot say that I do. I like to drive. I'm not a car person. I've never bought an old car and tinkered with it, and I'm not any kind of car-brand fanatic. I drive, as I said, minivans right now, but I have always enjoyed driving. It was a pretty big deal to me, learning to drive in the middle of my teens, as both an assertion of independence and separation from parents, and also just as a kind of way of understanding and mastering the world, a step into adulthood. And it is distinctively American in certain ways, but it's American in a way that fits our geography. We're a big country where there are lots of places where mass transit doesn't work, and driving has always made sense. It makes sense that we have this kind of culture. And isn't this form of adult being in the world something lost, if that is all given up? Well, some of what is lost is what you've just described. It is a very American thing: the romance of the road, freedom, independence, the ability to go where you want and be in control of it. 
There's another angle to it. We don't have, in contemporary liberal America, many rites of passage for young people anymore. One of them used to be learning to drive. It was a sign that you are an adult: We trust you with this very dangerous piece of machinery, and when you can do it, we know that you've arrived. And it's also what I suppose a philosopher would call embodied knowledge. You aren't just a brain; you're also moving this thing, and you have to pay attention. You've got to have good reflexes. These are valuable things. And yeah, we are on track to see them, probably not in our lifetimes, but sometime in this century, disappear or become very minor. And I should state that the driver's license as a rite of passage has already weakened in parts of the United States. It's a famous part of the larger story of American teenagers being more risk-averse and going around less in the age of the iPhone: teens are more likely to postpone getting their license. That's already diminished to some degree. So you can fold this story into the larger story of the safety-focused screenification of American youth. And bigger than that, the death of embodied knowledge, where it's not just screenification. I'm a writer, which means I spend most of my time looking at a screen and writing. I'm not working with my hands. But that's the trend, not just of youth; that's the trend of American life. So we need to solve this somehow, but it shouldn't be regarded as a special burden of our cars to solve it for us. We need rites of passage. We need more opportunities to live in our bodies and learn embodied skills. But let's not say that we're going to draw the line at driving cars. That seems the wrong place to draw it when they can offer us so many offsetting benefits. But what is the right place to draw it? 
It just seems like people are going to say that about every step along the road to disembodied existence, because at every stage you're going to say, well, this new situation is much more efficient, it's much safer, you don't want your kid to die in a car accident. Obviously, I don't want my kid to die in a car accident, but that sales pitch is going to be true for any form of embodied knowledge. Doesn't embodied knowledge, by its nature, contain risk and peril? Isn't that what embodiment is all about? It absolutely is. And all I can say is, if we want driving to make us have full and healthy relationships to the world and to ourselves, I think we're asking too much of driving. You asked me where we should draw the line. I have to say, I'm not a minister and I'm not a philosopher, so I can't tell you that. All I can tell you is that if we have a tool that can save lives while also giving people their time back, I think we would be fools not to pick it up, and then use the time and money we save to invest in solving this problem. But I'm not the one to solve it for you. Let me play political prophet, then, just for a minute. If the scenario you're describing comes to pass, wouldn't you expect this to be potentially just a vast culture war issue too, where you have blue states in the United States, liberal states, having one set of insurance rules for driving your own car, and red states having another set? And you cross over into the free state of Montana, and it's much easier to get a driver's license or much easier to own a car. It seems like what you're describing is a potential political and cultural fault line that could actually define American politics in an interesting way. Oh, yes. There's nothing Americans can't turn into a culture war battle if they try. Well, that's because we care. We care so much, Andrew. But the interesting thing about it is that right now it goes the other way. 
Right now, Texas and Tennessee are much more open to self-driving than blue states. California is a big exception because it's the home of the industry, but in Washington and Massachusetts and right here in New York State, there's much more friction for the arrival of self-driving cars. So it seems like... No, that's the fascinating thing: the libertarian states are building the gallows on which human agency and independence will eventually be hanged. That seems like a total possibility. Yeah, that's history. It'll surprise you. The ironies run deep. Yes, no, that's a really good point. You live in Toronto. Have you ever driven to Vancouver? Oh, no. No, no, never. I've driven to Montreal several times. I've driven as far out as Halifax. It's several days' drive. Several days' drive. O.K., O.K. I drove across the country with my family a few years ago. And whenever you do things in life that you come to with a set of philosophical priors, the experience obviously tends to confirm them, but I left that experience feeling very grateful that I have the right and the freedom to get behind the wheel of a car and steer it over giant, vast mountain ranges and so on. So really, my takeaway from the end of this conversation is that I want to get The New York Times to pay you to rent a large American automobile and drive it from Toronto to Vancouver, and see if it makes you any more inclined to defend one's God-given right to drive a car. I'd be happy to. I'd be happy to run that experiment. All right, we'll talk about it off camera. Andrew Miller, thank you so much for joining me. Well, thank you very much. It's been a pleasure to be here.