
By Molly Doherty (CAS’29)

Boston University computer science alum Dr. Gonca Gürsun (GRS’13) has been named Inventor of the Year by the Bosch Center for Artificial Intelligence (BCAI), a highly selective honor recognizing exceptional innovation among Bosch’s more than 85,000 researchers and engineers worldwide.

Gürsun, who earned her PhD in computer science from BU’s Graduate School of Arts & Sciences, specializes in behavior learning for complex, real‑world systems, developing models that enable intelligent decision‑making in dynamic environments. Her research has played a key role in advancing autonomous driving technologies and has been deployed by major automotive manufacturers across 16 European countries and the United States—supporting safer, more adaptive, and more natural driving behavior. In recognition of this impact, she was selected as Bosch’s 2025 Inventor of the Year from a global pool of researchers and developers.

Arts × Sciences sat down with Dr. Gürsun to discuss her experience at BU, what it’s like to be an AI researcher amid rapidly developing technology, and her outlook on the future of AI.


What experiences at BU do you feel best prepared you for your work at Bosch Center for Artificial Intelligence (BCAI)?

Looking back, what prepared me most at Boston University was the research culture in the computer science department. I spent much of my time as a research assistant, and BU created an environment where research wasn’t about following a predefined track. It was about exploring ideas, challenging assumptions, and gradually taking ownership of your own direction. 

The department had a very open and welcoming intellectual culture. Ideas were discussed freely, and questioning assumptions was encouraged. Through that environment and my research, I learned to think in terms of hypotheses, experimentation, and iteration, rather than perfect answers. That experience helped me become comfortable with ambiguity, which is essential when you’re innovating new technologies and making decisions under uncertainty.

I believe BU gave me both the intellectual confidence and the freedom to grow into an independent thinker. That foundation continues to shape how I approach leadership and innovation in the field of AI.

How has the recent acceleration in AI development (as well as the public’s understanding of it) affected your current work, if at all?

The biggest impact of the recent acceleration in AI has really been on two things: the pace of the work and the expectations around it. Everything moves much faster, and the distance between research and application is much shorter. Because feedback comes so quickly, you’re constantly forced to ask whether what you’re building is truly adding value.

At the same time, expectations have shifted. There’s a stronger expectation that AI systems actually work in real situations, not just in demos. The acceleration has made the work more intense, but also more intentional. It’s pushed judgment and decision-making to the center of what I do.

Can you explain the basic mechanics behind autonomous driving and how you train AI models to analyze these complex traffic patterns?

At a high level, autonomous driving works a lot like how humans drive: by seeing the world, understanding what’s happening, deciding what to do next, and then acting. 

The vehicle uses sensors such as cameras, radar, and lidar to perceive its surroundings. AI models analyze this sensor data to identify things like cars, pedestrians, lanes, traffic lights, and obstacles.

Next comes understanding and prediction. The system tries to understand how objects might move. For example, is a pedestrian about to cross the street? Is the car ahead slowing down? AI models are trained on large amounts of driving data to recognize these patterns and anticipate likely future behaviors.

Then comes decision-making. Based on what the car sees and predicts, another set of models decides how the vehicle should respond: whether to slow down, change lanes, or stop. This is similar to how a human driver weighs different options before acting. Those decisions are translated into precise control commands, such as steering, braking, and acceleration, so the vehicle can safely execute the chosen action.
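The perceive → predict → decide loop described above can be sketched in very simplified form. Everything here is illustrative, not Bosch’s actual system: the names (`TrackedObject`, `decide_action`), the constant-velocity prediction, and the thresholds are all assumptions chosen to make the three stages concrete.

```python
from dataclasses import dataclass


@dataclass
class TrackedObject:
    """A road user as produced by the perception stage: position and speed."""
    x: float  # position along the lane, metres
    v: float  # speed, metres per second


def predict_position(obj: TrackedObject, horizon_s: float) -> float:
    """Prediction stage (toy version): a constant-velocity guess of where
    the object will be `horizon_s` seconds from now."""
    return obj.x + obj.v * horizon_s


def decide_action(ego: TrackedObject, lead: TrackedObject,
                  horizon_s: float = 2.0, safe_gap_m: float = 10.0) -> str:
    """Decision stage: brake if the predicted gap to the lead vehicle
    falls below a safety margin, otherwise keep the current speed."""
    gap = predict_position(lead, horizon_s) - predict_position(ego, horizon_s)
    return "brake" if gap < safe_gap_m else "keep_speed"
```

For example, an ego vehicle at 15 m/s closing on a slower car 15 m ahead would trigger `"brake"`, while the same car 40 m ahead would not. Real systems replace each of these toy pieces with learned models and far richer scene representations, but the overall loop has the same shape.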

To train these AI models, we use vast amounts of real and simulated driving data, and test them extensively on rare or difficult scenarios to ensure robustness. The models learn by seeing many examples of traffic situations and gradually improving their ability to recognize patterns and make safe decisions.
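To make the idea of “learning from many examples” concrete, here is a deliberately tiny sketch: fitting a one-parameter motion model to synthetic trajectory data by gradient descent. The data, the model, and the hyperparameters are all invented for illustration; production systems train large neural networks on real and simulated driving logs.

```python
import random

random.seed(0)

# Synthetic "driving data": (position, speed) -> position one second later.
# The true dynamics are next_x = x + 1.0 * v (constant velocity, dt = 1 s).
data = []
for _ in range(200):
    x = random.uniform(0, 100)   # metres
    v = random.uniform(0, 30)    # metres per second
    data.append((x, v, x + v))

# Model: next_x = x + w * v, where w is learned from the examples.
w = 0.0
lr = 1e-4
for _ in range(500):             # epochs over the dataset
    for x, v, target in data:
        pred = x + w * v
        # Gradient of the squared error (pred - target)**2 w.r.t. w
        # is 2 * (pred - target) * v; step against it.
        w -= lr * 2 * (pred - target) * v
```

After training, `w` converges to the true value of 1.0: the model has recovered the underlying motion pattern purely from examples, which is the same principle, at vastly greater scale, behind the behavior models described above.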

Do you have a favorite project you’ve worked on?

My favorite project at BCAI was leading the development of AI-based behavior prediction models for automated driving. We were trying to answer a very human question: can a car anticipate what others are about to do a few seconds before it actually happens?

That ability to look a few seconds ahead makes a huge difference in real traffic. Those extra seconds give the vehicle more time to respond smoothly instead of reacting at the last moment. That means a larger safety buffer, fewer abrupt maneuvers, and driving behavior that feels more natural and predictable. This is a very challenging problem because traffic is dynamic. Vehicles constantly influence each other, so the system has to continuously analyze the whole scene and update its predictions in real time.

My role focused on translating research ideas into systems that could work under real-world conditions, constantly weighing trade-offs and shaping how research could move beyond prototypes.

What makes this project especially meaningful is that it moved far beyond the lab. It was adopted by major car manufacturers and is now running in vehicles on the road. I was honored that this work was recognized with Bosch’s Inventor of the Year award, but the real reward is knowing that it’s contributing to safer and more human-like driving every day.

What are you most looking forward to in the future of AI use?

What I’m most looking forward to is AI helping people become better decision-makers. Even with all the technology we have today, it’s still surprisingly hard to find the right information and turn it into the right action quickly. AI has the potential to change that.

As these systems get better at understanding context and individual needs, they can help surface the most relevant information, highlight trade-offs, and support people in making more informed decisions, especially when time or complexity makes things overwhelming. The exciting future of AI isn’t about replacing human judgment, but strengthening it.

What warnings do you have on the use of AI?

AI is moving into everyday life much faster than our ability to fully understand its long-term effects. Once systems scale and become part of daily routines, it becomes very slow and difficult to correct problems that show up later. That’s why it’s so important to ask early on: Who is affected when these systems fail, and who remains responsible?

As AI becomes more capable and more natural to interact with, people may start to rely on it too much. AI should support human judgment, not quietly replace it. As we integrate AI more deeply into society, we have to be intentional about keeping humans in the loop. We have to think carefully about real-world consequences, and make sure these systems are designed to serve people, not the other way around.