BOSTON — Imagine you're behind the wheel when your brakes fail. As you speed toward a crowded crosswalk, you're confronted with an impossible choice: veer right and mow down a large group of elderly people or veer left into a woman pushing a stroller.

Now imagine you're riding in the back of a self-driving car. How would it decide?

Researchers at the Massachusetts Institute of Technology are asking people worldwide how they think a robot car should handle such life-or-death decisions. Their findings so far show people prefer a self-driving car to act for the greater good, sacrificing its passenger if it can save a crowd of pedestrians. They just don't want to get into that car.

The findings present a dilemma for automakers and governments eager to introduce self-driving vehicles on the promise that they'll be safer than human-controlled cars.

“There is a real risk that if we don't understand those psychological barriers and address them through regulation and public outreach, we may undermine the entire enterprise,” said Iyad Rahwan, an associate professor at the MIT Media Lab. “People will say they're not comfortable with this. It would stifle what I think will be a very good thing for humanity.”

Rahwan worries that progress could be stalled without a new social compact that addresses moral trade-offs. Current traffic laws and human behavioral norms have created “trust that this entire system functions in a way that works in our interests, which is why we're willing to fit into large pieces of metal moving at high speeds,” Rahwan said.

“The problem with the new system is it has a very distinctive feature: algorithms are making decisions that have very important consequences on human life.”