Advancing Machine Learning Research at Northeastern

Max Torop, PhD’27, electrical engineering, pursued his PhD to deepen his knowledge of machine learning. Since 2020, he has been conducting research in the Machine Learning Lab, working to improve large language models and machine learning methods and to shape the future of these technologies.


Max Torop began his academic career as a data science major at the University of Rochester. While pursuing a master’s degree in computer science, Torop developed an algorithm that efficiently transforms raw MRI data into an R2* map. His thesis on MRI processing was published in Magnetic Resonance in Medicine upon completion of his master’s.

This was all preamble to his current pursuit of a PhD in electrical engineering at Northeastern University. Since 2020, Torop has been conducting research with COE Distinguished Professor Jennifer Dy in the Signal Processing, Imaging, Reasoning, and Learning (SPIRAL) group. SPIRAL operates multiple labs and studies a wide range of topics, from signal processing and machine learning to distributed computing and optimization.

Torop’s decision to pursue a PhD came from his zeal for research and his desire to delve more deeply into subjects that fascinate him. He was drawn to Professor Dy’s lab because of her research in machine learning, a subject to which Torop has dedicated the bulk of his career. He works specifically in the Machine Learning Lab within SPIRAL.

Torop says one of the unique things about Professor Dy’s lab is its broad scope: many different research projects are underway at any given time, and the lab collaborates with a wide range of industry partners. Thanks to this emphasis on multidisciplinary work and openness to collaboration, Torop says, “You have the flexibility to really find what you’re interested in and kind of work on that and learn about that.”

Research at the Machine Learning Lab

At Professor Dy’s lab, Torop researches machine learning. More specifically, he works in what is called “representation engineering.” Representation engineering is a technique for steering AI model behavior by directly manipulating the internal numerical representations that models use to process information. When a large language model (LLM) processes text, it converts words and concepts into high-dimensional vectors (lists of numbers) that capture semantic meaning. Representation engineering works by identifying and modifying these internal representations to promote desired behaviors or responses, rather than relying solely on prompting or fine-tuning.
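To make the idea concrete, here is a minimal sketch (not Torop’s method, and with made-up toy data) of one common way such a concept direction can be found: take hidden-state vectors from inputs that do and do not exhibit a target behavior, and use the difference of their means as a direction in representation space.

```python
import numpy as np

# Toy illustration of finding a "concept direction" in representation space.
# All values here are synthetic; real LLM hidden states have thousands of
# dimensions and come from the model's internal layers.
rng = np.random.default_rng(0)
dim = 8  # toy dimensionality

# Hidden states from inputs that do / do not show the target behavior.
positive_states = rng.normal(loc=0.5, scale=1.0, size=(16, dim))
negative_states = rng.normal(loc=-0.5, scale=1.0, size=(16, dim))

# Difference-of-means recipe: the concept direction is the gap between
# the average representations of the two groups.
direction = positive_states.mean(axis=0) - negative_states.mean(axis=0)
direction /= np.linalg.norm(direction)  # normalize to unit length

print(direction.shape)  # (8,)
```

The unit-length vector that results is a candidate direction along which the model’s internal representation of the concept varies.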

This work in managing LLMs has had a significant impact on Torop. He and his fellow researchers wrote a paper on the project, which has been accepted to NeurIPS 2025, the annual Conference on Neural Information Processing Systems, covering research in machine learning, artificial intelligence, and computational neuroscience.

Torop’s paper, titled “DISCO: Disentangled Communication Steering for Large Language Models,” builds on an existing method for guiding and controlling LLMs; Torop explores the effects of steering specific representation spaces that had not yet been studied within that paradigm. The method works by injecting “steering vectors” directly into the model’s internal processing layers. These steering vectors are mathematical directions in the model’s representation space that correspond to specific concepts or behaviors. When added to the model’s internal representations during processing, they can amplify or diminish the influence of particular concepts, thereby promoting or suppressing corresponding behaviors in the model’s responses.
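The injection step described above can be sketched in a few lines. This is a generic illustration of steering-vector addition with toy arrays, not the DISCO method itself: a scaled direction is added to a layer’s activations, and the sign of the scale amplifies or suppresses the concept.

```python
import numpy as np

def steer(hidden_states: np.ndarray, direction: np.ndarray,
          alpha: float) -> np.ndarray:
    """Add alpha * direction to every token's hidden state.

    hidden_states: (num_tokens, dim) activations at one layer
    direction:     (dim,) unit-norm steering vector
    alpha > 0 amplifies the concept; alpha < 0 suppresses it.
    """
    return hidden_states + alpha * direction

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 8))   # toy activations for 4 tokens
d = np.zeros(8)
d[0] = 1.0                    # toy unit-length steering direction

steered = steer(h, d, alpha=2.0)

# Only the component of each token's state along `d` has changed.
print(np.allclose(steered - h, 2.0 * d))  # True
```

In a real model the same addition would be applied inside a forward pass (e.g. via a layer hook) rather than to a standalone array.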

Torop is excited for this research to be shared because he believes it can serve as a foundation for future work on the topic. The paper provides context and details that can help other researchers control LLMs to fit their needs and preferences at lower cost, since training LLMs is expensive. This research could make managing LLMs more accessible and efficient.

Reflections

Torop said that during his PhD program he has “had a really nice time exploring different subtopics and finding ways to be creative. In general, seeing the creativity you can do as a researcher has been really fun.” After completing his PhD, Torop plans to continue his research in machine learning and large language models.

Related Faculty: Jennifer Dy

Related Departments: Electrical & Computer Engineering