Jeffrey Considine


Jeffrey Considine is an Adjunct Associate Professor in the Faculty of Computing & Data Sciences (CDS) at Boston University. Professor Considine completed his Ph.D. at Boston University, specializing in distributed randomized algorithms and data structures. His research on approximate aggregation techniques has won multiple awards. He spent 19 years in industry, including a 15-year stint at Cogo Labs, where he incubated and sold several data-driven startups and rose to the position of Chief Scientist. Prof. Considine returned to Boston University to join the faculty in 2024 and is also affiliated with the Computer Science department.


Can you share a little about your academic background and what led you to specialize in your current field?

My academic background is in computer science, and my thesis focused on distributed systems that used randomized algorithms and data structures to coordinate efficiently. At the time, we were trying to coordinate algorithms while sending as little data as possible, which was the opposite of how we operated after I settled in industry, where we collected all the data in one place and threw as much compute power as we could manage at it. While in industry, I mostly worked at or around startups, so we all took on many roles. Specializing in data science was not a conscious decision; I don't think the job title “data scientist” had even been coined then. I ended up taking over big chunks of our performance marketing for a while.

That work needed a lot of data from different sources, including our Google search campaigns, website logs, and partner monetization data, so I got in the habit of collecting all the useful data in one place and making it easy to query and analyze. Eventually, I ended up building what I called a data ops team, which was essentially a mix of data scientists and ML ops folks. One thing I think we did well there was tracking every little data-driven prediction we made and checking whether it panned out. We learned a lot from tracking our own predictions and seeing which ones worked and which did not.

What is your involvement with the OMDS program?

I am responsible for both academic modules in the first semester of the program, Mathematical Foundations of Data Science and Programming Toolkit for Data Science. These two modules are challenging since we want this program to be open to people with a wide range of backgrounds. We are not assuming or requiring an undergraduate background in data science, computer science, or STEM, so we have to start from scratch and build a strong foundation for everyone. This inevitably means tradeoffs in what we have time to cover; we will rarely cover proofs, but we will give the students broad exposure to all the necessary concepts, and there will be plenty of hands-on practice.

At the end of the first semester, I expect students to be able to build a variety of basic models on their own and to assess whether those models are working to solve the problem at hand. To be clear, these won't be toy models, but the kinds of models that I've seen generate a lot of value in industry when deployed at the right time and place. The students will not be full-fledged data scientists after just a semester; there will still be a lot of what, when, and where to learn in the following modules. But I think they will learn quite a bit in the first semester that we've prepared for them, and I'm proud of that. That said, I will also teach one of these modules each semester, and I look forward to hearing from the students how they are actually doing and what we can do to help them learn more.

What are the main focuses of your current research, and what impact do you hope to achieve in your field?

My most recent research has leaned into theoretical computer science with a technique that I'm calling compressed parallel execution. I've been using it for game solving and for empirical testing of certain mathematical hypotheses. Next, I'm looking to collaborate with biochemists and linguists to work on identifying bioactive compounds and phrases that are poorly classified; those sound pretty different, but I think some of the techniques that I've been working on will be useful for extracting information from their existing models. Broadly, I think we are just scratching the surface of what kinds of models we can build and how we can make good use of them, and there is a lot of impact in going deeper there.

Which courses do you teach at BU Computing & Data Sciences, and what do you enjoy most about teaching these subjects?

I'm looking forward to teaching our initial math courses – there are magical moments when intuition starts coalescing and you start to really get it. I'll also be teaching graduate algorithms courses in the fall for the Computer Science department, and you will see that leaking into my teaching of other subjects. In my head, the algorithmic computation and the qualitative description of its result are often intertwined, and the more you can simplify the descriptions of both, the better you understand them.

Which recent developments in your field do you find particularly exciting? How have they influenced your work and teaching?

This might seem niche, but I think neural fields, also called neural implicit representations, are quite exciting. The most popular use of these was NeRF, which lets you take a couple of pictures of a scene and render it in 3D from any angle. I've done some research trying to build on top of neural fields to get a more structured, controllable version of generative AI, but nothing publishable yet. They have shaped my understanding of how to tilt nature versus nurture in neural network design, so I expect some of that will come through when I teach a deep learning course.

Learn More About the Online MS in Data Science