BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical & Computer Engineering - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ece.northeastern.edu
X-WR-CALDESC:Events for Department of Electrical & Computer Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Phoenix
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260331T153000
DTEND;TZID=America/New_York:20260331T170000
DTSTAMP:20260405T051933Z
CREATED:20260323T174858Z
LAST-MODIFIED:20260323T174858Z
UID:8349-1774971000-1774976400@ece.northeastern.edu
SUMMARY:ECE Distinguished Lecture: Ubiquitous Active Surfaces
DESCRIPTION:ECE DISTINGUISHED LECTURE: Ubiquitous Active Surfaces \nProf. Vladimir Bulović\nProfessor of Emerging Technologies\, MIT\nTuesday\, March 31\n3:30-5:00 PM (ET)\n102 ISEC Auditorium or Teams \nWhat if any surface could generate light\, harvest solar energy\, sense motion\, or emit sound? Paper-thin devices are making this possible — turning walls\, windows\, and everyday objects into active technology. Prof. Bulović will showcase newly invented MIT technologies and the startups bringing them to market. \nAbout the speaker: Founding Director of MIT.nano\, holder of 120+ U.S. patents\, and author of 300+ research articles (cited 70\,000+ times). His lab’s spinouts — including QD Vision\, Ubiquitous Energy\, and Swift Solar — have brought thin-film technology to millions of users worldwide.
URL:https://ece.northeastern.edu/event/ece-distinguished-lecture-ubiquitous-active-surfaces/
LOCATION:102 ISEC\, 360 Huntington Ave\, Boston\, MA\, 02115\, United States
GEO:42.3377335;-71.0869121
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=102 ISEC 360 Huntington Ave Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 102 ISEC:geo:42.3377335,-71.0869121
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Phoenix:20260224T133000
DTEND;TZID=America/Phoenix:20260224T163000
DTSTAMP:20260405T051933Z
CREATED:20260110T020817Z
LAST-MODIFIED:20260110T020817Z
UID:8222-1771939800-1771950600@ece.northeastern.edu
SUMMARY:Digital Twins for Printed Electronics: How Can AI Learn FHE Printing
DESCRIPTION:Benyamin Davaji\, Assistant Professor in the College of Engineering\, alongside Haiyang Yun\, Senior PhD Student\, will teach a professional course titled “Digital Twins for Printed Electronics: How Can AI Learn FHE Printing” on February 24\, 2026\, from 1:30–4:30 p.m. MT at FLEX 2026\, the premier international conference for Flexible and Hybrid Electronics (FHE)\, taking place in Phoenix\, Arizona. \nA Digital Twin is a virtual representation of the structure\, context\, and behavior of a physical system or process\, with a live link to the physical system serving as a key enabler for predictive and data-driven optimization. In printed and Flexible Hybrid Electronics (FHE)\, manufacturing involves multiple interdependent variables—different printing technologies\, inks\, substrates\, and process conditions—each introducing its own complexity. In practice\, additional challenges such as equipment drift\, batch-to-batch variation\, and environmental fluctuations further impact process consistency and yield. Changing a process or transferring it between tools is often difficult\, as each setup is highly customized and sensitive to local conditions. To address these challenges\, Digital Twin frameworks connect data from design\, fabrication\, and metrology into continuously learning digital models. They enable early detection of process drifts\, virtual experimentation for process development\, and data-driven optimization that reduces time\, cost\, and waste. \nThis course introduces Digital Twin frameworks for FHE\, focusing on Deep Neural Network (DNN)-based predictive models. Participants will learn how to integrate design\, fabrication\, and metrology data into continuously learning virtual twins that detect process drifts\, enable virtual experimentation\, and optimize manufacturing. The program covers the full workflow—from image processing and virtual metrology to AI model training\, validation\, and hyperparameter tuning—using real datasets. A hands-on “Build Your Own Digital Twin” module in Google Colab will provide practical experience in training and refining models for printed electronics applications\, equipping attendees with both theoretical insight and applied skills for process optimization and performance prediction. \nFor more information\, visit the FLEX 2026 course page.
URL:https://ece.northeastern.edu/event/digital-twins-for-printed-electronics-how-can-ai-learn-fhe-printing/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T100000
DTEND;TZID=America/New_York:20251105T120000
DTSTAMP:20260405T051933Z
CREATED:20251016T175047Z
LAST-MODIFIED:20251016T175047Z
UID:8077-1762336800-1762344000@ece.northeastern.edu
SUMMARY:Digital Twin Models for Semiconductor Manufacturing Unit Process
DESCRIPTION:Join ECE Assistant Professor Benyamin Davaji\, head of the AIMS Lab\, as he explores how digital twin models are redefining semiconductor manufacturing. \n\nSEMI Master Class #27 will feature two thought leaders redefining the future of semiconductor process innovation — Dr. Benyamin Davaji of Northeastern University and Dr. Peter Doerschuk of Cornell University. \nDr. Davaji\, Assistant Professor of Electrical and Computer Engineering at Northeastern and head of the Autonomous Integrated Microsystems (AIMS) Lab\, merges data science\, physics\, and nanofabrication to revolutionize semiconductor manufacturing. His work on digital twin models is reshaping how we simulate\, optimize\, and scale unit processes—from lab-scale experimentation to high-volume production. \nDr. Doerschuk\, Professor of Electrical and Computer Engineering at Cornell University\, brings decades of experience in computational modeling and systems analysis. With degrees in electrical engineering from MIT and an M.D. from Harvard Medical School\, his research bridges computation and biology\, developing advanced models that drive new approaches to sensor signal processing\, pattern recognition\, and data-driven system design. \nTogether\, these speakers will explore how digital twins and computational models are transforming semiconductor process development and enabling smarter\, faster\, and more efficient manufacturing. \nIn this session\, you’ll learn:\n• How digital twins are transforming semiconductor process development\n• Real-world applications of AI and virtual metrology in printed electronics\n• Strategies for integrating computational modeling with physical systems to boost reliability and throughput \nWhether you’re an engineer\, executive\, or technologist\, this Master Class will deliver actionable insights on bridging data\, modeling\, and manufacturing. \nRegister
URL:https://ece.northeastern.edu/event/digital-twin-models-for-semiconductor-manufacturing-unit-process/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251008T110000
DTEND;TZID=America/New_York:20251008T120000
DTSTAMP:20260405T051933Z
CREATED:20250917T000745Z
LAST-MODIFIED:20250917T000745Z
UID:8027-1759921200-1759924800@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Programs Overview
DESCRIPTION:During Wonder Week\, you’ll have the chance to learn how the top-ranked Graduate School of Engineering at Northeastern University combines rigorous academics with experiential learning and convergent research. Register for a variety of program-specific webinars throughout the week tailored to your career aspirations and get direct insights from faculty members and current students. Each session includes a 30-minute presentation followed by a Q&A session\, allowing you to directly connect with panelists and presenters. \nThis session will highlight our Electrical and Computer Engineering Programs Overview \nWednesday October 8th\, 2025 at 11AM ET
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-programs-overview/
LOCATION:Virtual
ORGANIZER;CN="Graduate School of Engineering":MAILTO:coe-gradadmissions@northeastern.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251007T080000
DTEND;TZID=America/New_York:20251007T090000
DTSTAMP:20260405T051933Z
CREATED:20250917T000657Z
LAST-MODIFIED:20250917T000657Z
UID:8023-1759824000-1759827600@ece.northeastern.edu
SUMMARY:Disciplinary Engineering Programs Co-op Overview
DESCRIPTION:During Wonder Week\, you’ll have the chance to learn how the top-ranked Graduate School of Engineering at Northeastern University combines rigorous academics with experiential learning and convergent research. Register for a variety of program-specific webinars throughout the week tailored to your career aspirations and get direct insights from faculty members and current students. Each session includes a 30-minute presentation followed by a Q&A session\, allowing you to directly connect with panelists and presenters. \nThis session will highlight our Disciplinary Engineering Programs Co-op Overview \nTuesday October 7th\, 2025 at 8:00AM ET
URL:https://ece.northeastern.edu/event/disciplinary-engineering-programs-co-op-overview/
LOCATION:Virtual
ORGANIZER;CN="Graduate School of Engineering":MAILTO:coe-gradadmissions@northeastern.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241010T170000
DTEND;TZID=America/New_York:20241010T190000
DTSTAMP:20260405T051933Z
CREATED:20240806T175016Z
LAST-MODIFIED:20240806T175016Z
UID:7150-1728579600-1728586800@ece.northeastern.edu
SUMMARY:SOURCE\, the Showcase of Opportunities for Undergraduate Research and Creative Endeavor
DESCRIPTION:Learn more about what cutting-edge research and creative endeavors look like at Northeastern. Talk one-on-one with faculty from across the colleges about their work – and learn how you can get involved in projects during your time at Northeastern. \nSOURCE is a collaboration between Bouvé College of Health Sciences; College of Arts\, Media and Design; College of Engineering; College of Science; College of Social Sciences and Humanities; D’Amore-McKim School of Business; and Khoury College of Computer Science. It is coordinated by Undergraduate Research and Fellowships on behalf of the Office of the Chancellor. \nPlease write to URF@Northeastern.edu with any questions.
URL:https://ece.northeastern.edu/event/source-the-showcase-of-opportunities-for-undergraduate-research-and-creative-endeavor-2/
LOCATION:Curry Student Center\, 360 Huntington Ave.\, Boston\, MA\, 02115\, United States
GEO:42.3394629;-71.0885286
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=Curry Student Center 360 Huntington Ave. Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave.:geo:42.3394629,-71.0885286
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241007T103000
DTEND;TZID=America/New_York:20241007T130000
DTSTAMP:20260405T051933Z
CREATED:20240903T234950Z
LAST-MODIFIED:20240910T010233Z
UID:7261-1728297000-1728306000@ece.northeastern.edu
SUMMARY:Women in Engineering at Northeastern
DESCRIPTION:Women in Engineering (WIE) at Northeastern College of Engineering (CoE)\, in collaboration with the Electrical and Computer Engineering (ECE) and Mechanical and Industrial Engineering (MIE) Departments\, ECE PhD Student Association (EPSA) and MIE Diversity\, Equity\, and Inclusion (DEI) student groups\, is hosting a special gathering for female-identifying PhD students\, postdocs\, and faculty from the ECE and MIE departments. The purpose of this event is to check in with our students\, hear their thoughts and concerns\, and ensure they feel supported—not only to strengthen their sense of belonging but also to equip them with the necessary skills for their future careers. \nThe gathering will take place on Monday\, October 7th\, from 10:30 am to 12:30 pm at the Curry Student Center Ballroom\, with lunch provided. \nTo better understand the experiences\, motivations\, and challenges faced by female-identifying PhD students and Postdocs at Northeastern University in the MIE and ECE departments\, we’ve prepared the following questions\, which will be answered anonymously: https://docs.google.com/forms/d/e/1FAIpQLSd64OWZK3yxEQmzmn3yCXNYJyBi5GaiarxGJjUnTKWKtBFUTg/viewform?usp=sf_link \nYour feedback will play a crucial role in shaping college programming\, departmental support\, and potential admissions changes to better support female-identifying and non-binary individuals in the College of Engineering. While the survey is focused on driving meaningful changes within the CoE\, some areas\, like university-wide policies\, may be beyond its direct influence. Nonetheless\, your input will contribute to broader discussions and advocacy efforts across the university. We will report the cumulative results of this survey to your departments\, COE\, as well as Northeastern University.
URL:https://ece.northeastern.edu/event/women-in-engineering-at-northeastern/
LOCATION:Curry Student Center\, 360 Huntington Ave.\, Boston\, MA\, 02115\, United States
GEO:42.3394629;-71.0885286
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=Curry Student Center 360 Huntington Ave. Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave.:geo:42.3394629,-71.0885286
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T100000
DTEND;TZID=America/New_York:20240920T170000
DTSTAMP:20260405T051933Z
CREATED:20240819T185214Z
LAST-MODIFIED:20240819T185214Z
UID:7169-1726826400-1726851600@ece.northeastern.edu
SUMMARY:Visual AI Hackathon
DESCRIPTION:ECE Associate Professor Sarah Ostadabbas\, in collaboration with Voxel51\, is hosting an exciting hackathon on September 20\, 2024\, from 10 AM to 5 PM Eastern at the Raytheon Amphitheater – 240 Egan Building. This event offers an immersive experience for machine learning enthusiasts and college students\, featuring cash prizes\, a collaborative environment\, refreshments\, and swag for participants. Whether you’re a beginner or looking to sharpen your skills\, there’s something for everyone. \nLink: https://voxel51.com/computer-vision-events/visual-ai-hackathon-sept-20-2024/ \nThe hackathon coincides with Prof. Ostadabbas’s new course\, “Machine Learning with Small Data\,” which is being offered for the first time on the Boston campus this Fall. Industry experts will judge submissions\, with prizes awarded to the most innovative solutions. Don’t miss this opportunity to collaborate\, learn\, and make your mark in the AI community!
URL:https://ece.northeastern.edu/event/visual-ai-hackathon/
LOCATION:Raytheon Amphitheater (240 Egan)\, 360 Huntington Ave\, Boston\, MA\, 02115\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240909T090000
DTEND;TZID=America/New_York:20240909T100000
DTSTAMP:20260405T051933Z
CREATED:20240820T181643Z
LAST-MODIFIED:20240822T181919Z
UID:7174-1725872400-1725876000@ece.northeastern.edu
SUMMARY:Rajiv Singh PhD Dissertation Defense
DESCRIPTION:Name:\nRajiv Singh \nTitle:\nInterpolation and Convexification Methods for Tractable Learning of Dynamic Systems \nDate:\n9/9/2024 \nTime:\n9:00:00 AM \nLocation: https://northeastern.zoom.us/j/97729968899?pwd=WfExrC0k60ocNpzCrkJXK3HyJDptMK.1 \nCommittee Members:\nProf. Mario Sznaier (Advisor) \nProf. Lennart Ljung \nProf. Octavia Camps\nProf. Stratis Ioannidis \nAbstract:\nIn this thesis\, we present interpolation- and convexification-based system identification techniques that are geared towards producing practical\, engineering-friendly models. The models are either linear or close to linear – they are weakly nonlinear\, are described by a switching among linear models\, or are linear models whose parameters are allowed to depend upon certain states or inputs of the system. A common objective in all the proposed approaches is to determine the lowest-order models that are consistent with the information available in the form of data and available priors. We leverage ideas from the rational interpolation community in order to create tractable algorithms that are efficient and often scale well with the amount of data. In addition\, we present control-oriented learning methods that extend the basic approaches by directly incorporating closed-loop objectives. The resulting models are self-certified in that they produce certificates of guaranteed closed-loop behavior. \nA summary of the essential ideas presented in this thesis follows.\n1. Using the rank-revealing properties of Loewner and Hankel matrices\, we develop a convex algorithm for identification of low-order stable transfer functions using time- and frequency-domain data. The results are guaranteed to meet prescribed worst-case bounds. \n2. We propose a set of techniques geared towards control-oriented identification of potentially unstable linear models using open-loop data. These models come with a certification of robust stabilizability\, which greatly aids the control design procedure. The first technique leverages the concept of coprime factors of a linear system\, while the second technique uses robust identification of a system’s predictor as a vehicle towards identification of the plant model. The latter technique also directly incorporates the closed-loop objective of ν-gap minimization into the identification procedure. \n3. We present convex approaches to identification of nonlinear polynomial models with time-varying coefficients. The evolution of the model coefficients is governed by scheduling maps that are described by low-order linear differential equations. A first approach uses a Hankel matrix rank minimization technique for joint identification of the model’s parameters and the scheduling map. A second approach leverages the atomic norm minimization framework to extend the first approach to bilinear systems\, and also supports easy incorporation of scheduling priors. \n4. We present results on sparse identification of nonlinear ARX models incorporating bounded nonlinear maps. We present approaches to achieve sparsity with respect to the number of regressors used\, and with respect to the maximum lag employed by any of the contributing regressors. The proposed algorithm leverages ideas from sparse learning and ensemble learning for sparse NARX models. \n5. We present a new framework for identification of switched linear and parameter-varying systems based on rational interpolation. We develop multivariate interpolation procedures based on the recent “block-AAA” algorithms. We demonstrate that this modeling framework leads to fast\, accurate\, and scalable algorithms that can be used in various settings where the data domain is described by correlations\, frequency\, or scheduling variables.
URL:https://ece.northeastern.edu/event/rajiv-singh-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240903T100000
DTEND;TZID=America/New_York:20240903T110000
DTSTAMP:20260405T051933Z
CREATED:20240820T215639Z
LAST-MODIFIED:20240820T215639Z
UID:7178-1725357600-1725361200@ece.northeastern.edu
SUMMARY:Yuhui Bao PhD Dissertation Defense
DESCRIPTION:Name:\nYuhui Bao \nTitle:\nA Design Methodology for Producing Highly-Adaptable and High-Performance Simulation Frameworks \nDate:\n9/3/2024 \nTime:\n10:00:00 AM\nCommittee Members:\nProf. David Kaeli (Advisor)\nProf. Ningfang Mi\nProf. Yifan Sun (William and Mary) \nAbstract:\nComputer architecture simulators play an essential role in the development and optimization of computer hardware. A variety of simulators have been developed to explore the design space of CPUs\, GPUs\, and custom accelerators. As GPUs continue to grow in popularity for accelerating demanding applications\, such as high-performance computing and machine learning\, GPU architects have been pushing the envelope of GPU performance in every new GPU generation. GPU vendors (e.g.\, NVIDIA and AMD) have been introducing subsequent generations of GPU architectures and products with updated instruction set architectures (ISAs) and new microarchitectural features every 2-3 years. Modeling the state-of-the-art architecture is a crucial feature of GPU simulators\, which are used to characterize and accelerate challenging workloads\, facilitating performance evaluation and design exploration. However\, the effort required to design and construct an accurate and performant simulator is huge. Due to the rapid rate of innovation in GPU technology\, any simulator that is over-customized to capture the design of a specific architecture will quickly become outdated. Thus\, we need a design methodology for simulators that guards against this trend and embraces future architectures. \nIn this dissertation\, we propose a design methodology for producing highly-adaptable and high-performance simulation frameworks. We aim to design simulators featuring high adaptability (the ability to accommodate future alterations or extensions)\, high performance\, and high fidelity. We leverage the Akita simulator framework to enable the modular and extensible design of various GPU components. To fulfill the goal of high fidelity\, we design a set of microbenchmarks to evaluate individual GPU subsystems. We demonstrate how we follow our design methodology to achieve a highly-adaptable and accurate simulator — NaviSim\, which provides the flexibility to support simulation of three different ISAs. To demonstrate the full utility of the NaviSim simulator\, we conduct a performance study of the impact of individual architecture features\, revealing the high flexibility and configurability of NaviSim. In addition\, we showcase how NaviSim’s high adaptability contributes to design space exploration\, offering solutions to enhance the performance of real-world demanding applications. \nFast simulation speed is one of the key requirements of any simulator. NaviSim is designed to support multi-threaded execution\, leveraging the parallel capabilities offered by today’s multi-core CPUs to enable parallel simulation. In this thesis\, we identify key performance bottlenecks in both serial and parallel simulation execution modes and optimize simulation speed. We also present lessons learned about efficient simulator design and provide guidance for future simulator developers.
URL:https://ece.northeastern.edu/event/yuhui-bao-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240822T130000
DTEND;TZID=America/New_York:20240822T140000
DTSTAMP:20260405T051933Z
CREATED:20240820T215132Z
LAST-MODIFIED:20240820T215132Z
UID:7176-1724331600-1724335200@ece.northeastern.edu
SUMMARY:Zohreh Azizi PhD Proposal Review
DESCRIPTION:Name:\nZohreh Azizi \nTitle:\nExploring SIM for monomer orientation \nDate:\n8/22/2024 \nTime:\n1:00:00 PM \nLocation: https://northeastern.zoom.us/j/7318775019\nMeeting ID: 731 877 5019 \nCommittee Members:\n1. Prof. Charles DiMarzio (Advisor)\n2. Prof. Carey Rappaport\n3. Dr. Sangyeon (Fred) Cho \nAbstract:\nCollagen fibrils\, the most abundant protein polymers in animals\, protect cells from mechanical forces such as stress\, tension\, compression\, and shear. Each collagen molecule\, approximately 300 nm long and 1.5 nm in diameter\, consists of three polypeptide chains forming a supercoiled triple helix. These fibrils self-assemble\, with diameters ranging from 20 nm to several hundred nanometers. Large collagen fibrils are visible with scanning electron microscopy (SEM) and optical microscopy. Electron microscopy damages samples during preparation\, limiting observations to static\, non-living conditions. The natural self-assembly behavior\, spatial arrangement\, and dimensions of collagen fibrils play a vital role in shaping the structure\, strength\, and function of tissues. These factors are essential in determining how tissues are organized\, how they withstand physical forces\, and how effectively they perform their biological functions. \nDetecting the orientation and location of collagen monomers is essential for understanding their role in spontaneous collagen formation and their interactions with fibril surfaces. To study collagen monomers in a dynamic\, living state\, high-resolution optical microscopy is preferred\, as it allows for detailed imaging beyond the diffraction limit of light. We introduce a new technique that uses structured illumination to determine the spatial separation of punctate objects to super-resolution limits and is amenable to both scattering and fluorescent objects. We call the technique Structured-Illumination Point-Separation (SIPS) Microscopy. We apply it to determine the orientation of a collagen monomer by imaging two fluorescent tags at different locations on the monomer. Experimentally\, we show that our approach effectively resolves the orientation of collagen monomers with a resolution surpassing the diffraction limit. \nIn this illumination technique\, we employ time-multiplexed binary patterns with a DMD-based SIM while the camera shutter is open\, mitigating undesired diffractions from the DMD. Three different phases of sinusoidal patterns generated by the DLP 3000 series DMD are projected onto the fluorescent sample in a 4f system. After acquiring images of the sample under the structured patterns at the three phases\, the images are multiplied by phase factors (1\, e^(−i2π/3)\, e^(−i4π/3)) and combined. By taking the Radon transform of the Fourier transform of the phase of the complex image and evaluating it at the center x′=0\, the direction of each pair is obtained. In this way we enhance image reconstruction and analysis\, utilizing these characteristics for detailed imaging and data extraction in Structured-Illumination Point-Separation microscopy. This strategy provides a non-destructive and effective technique that allows researchers to measure the orientation of collagen monomers in a way that preserves the sample and offers high-resolution imaging\, which is crucial for studying the structure and properties of collagen.
URL:https://ece.northeastern.edu/event/zohreh-azizi-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240819T100000
DTEND;TZID=America/New_York:20240819T110000
DTSTAMP:20260405T051933Z
CREATED:20240820T181408Z
LAST-MODIFIED:20240820T181408Z
UID:7172-1724061600-1724065200@ece.northeastern.edu
SUMMARY:Yuexi Zhang PhD Dissertation Defense
DESCRIPTION:Name:\nYuexi Zhang \nTitle:\nHuman Action and Event Detection by Leveraging Multi-modality Techniques \nDate:\n8/19/2024 \nTime:\n10:00:00 AM \nCommittee Members:\nProf. Octavia Camps (Advisor) \nProf. Mario Sznaier \nProf. Sarah Ostadabbas \nAbstract:\nHuman action and event analysis with multiple modalities has emerged as a critical area of research in computer vision and machine learning\, driven by the need to understand complex human behaviors in diverse environments. \nA significant advantage of multi-modal analysis is its application in cross-view action recognition\, where activities are observed from different viewpoints. To tackle this problem\, we propose a flexible framework that is able to integrate diverse modalities (RGB pixels\, 2D/3D key points\, etc.) to overcome the limitations of single-modal approaches. It consists of two branches: a Dynamic Invariant Representation (DIR) branch that concentrates on identifying view-invariant properties through key-point trajectories\, and a Context Invariant Representation (CIR) branch that captures pixel-level view-invariant features. In the meantime\, our approach leverages contrastive learning techniques to enhance recognition accuracy\, enabling the model to learn more discriminative and view-invariant features by contrasting positive pairs against negative pairs. The fusion of multi-modal data\, coupled with contrastive learning\, leads to improved accuracy in recognizing actions across various views and environments. Extensive experiments demonstrate the effectiveness of our approach on diverse modalities. Furthermore\, another promising application of multi-modal techniques is zero-shot action detection\, which aims to recognize actions that the model has not been explicitly trained on. Recently\, as language models have developed rapidly\, leveraging LLMs in this context has shown significant potential\, as these models can bridge the gap between seen and unseen actions by understanding and generalizing from textual descriptions. To further explore the problem\, we propose a transformer encoder-decoder architecture with global and local text prompts\, allowing the model to infer the characteristics of unseen actions based on different textual attributes. We evaluate our approach on different benchmarks to demonstrate its advantages.
URL:https://ece.northeastern.edu/event/yuexi-zhang-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240816T140000
DTEND;TZID=America/New_York:20240816T160000
DTSTAMP:20260405T051933Z
CREATED:20240820T222221Z
LAST-MODIFIED:20240820T222221Z
UID:7207-1723816800-1723824000@ece.northeastern.edu
SUMMARY:Shuo Jiang PhD Dissertation Defense
DESCRIPTION:Name:\nShuo Jiang \nTitle:\nTactile Intelligence in Robotics \nDate:\n8/16/2024 \nTime:\n2:00:00 PM \nLocation:\nEXP-701A \nCommittee Members:\nProf. Lawson Wong (Advisor)\nProf. Robert Platt\nProf. Alireza Ramezani\nProf. Taskin Padir \nAbstract:\nIn recent years\, the evolution of robot electronic skin technology has introduced a novel avenue for robots to perceive their external environment and internal state. In contrast to conventional visual perception methods\, tactile perception enables the discernment of additional physical properties of objects\, such as friction and mass distribution\, and even the observation of contact at higher resolution. Importantly\, tactile perception is resilient to challenges posed by inadequate illumination or environmental occlusion. However\, it presents inherent challenges\, including a limited sensing range\, compulsory physical interaction with the environment\, and intricate coupling with robot control\, rendering data collection and utilization challenging. Addressing these challenges and devising effective\, efficient\, and interpretable methods for processing tactile signals have emerged as pivotal issues in robot tactile perception. \nWith the development of artificial intelligence technology\, we are now able to interpret tactile information from a new perspective beyond traditional sensor technology and signal processing methods\, thereby enabling a wider range of robotic applications. Through our continuous efforts over the past few years\, we have comprehensively addressed the following challenges in enhancing robot tactile perception through the application of advanced artificial intelligence and control methods: enabling robots to explore object shapes through tactile feedback; developing tactile-based safety mechanisms for human-robot collaboration; enhancing the locomotion adaptability of snake robots on irregular terrains through tactile perception; utilizing whole-body exteroceptors and proprioceptors for accurate body schema estimation; and implementing tactile gesture recognition in human-robot interactions. At the same time\, we developed a modular full-body electronic skin system for robots and its accompanying software\, which can accurately detect forces applied to the robot’s entire body and perform high-speed tracking of the real-time kinematics of the robot’s sensor array. \nIn conclusion\, this dissertation explores how robot tactile perception can accomplish complex tasks in various scenarios or achieve performance improvements in traditional tasks through the integration of sensor technology\, machine learning\, control theory\, and robotics. Through extensive theoretical and experimental analysis\, we have demonstrated the critical role of tactile perception in embodied intelligence for robots and established a fundamental knowledge framework for future academic research in this field.
URL:https://ece.northeastern.edu/event/shuo-jiang-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240816T110000
DTEND;TZID=America/New_York:20240816T130000
DTSTAMP:20260405T051933
CREATED:20240820T215800Z
LAST-MODIFIED:20240820T215800Z
UID:7180-1723806000-1723813200@ece.northeastern.edu
SUMMARY:Yanyu Li PhD Dissertation Defense
DESCRIPTION:Name:\nYanyu Li \nTitle:\nAccelerating Large Scale Generative AI: a Comprehensive Study \nDate:\n8/16/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Yanzhi Wang (Advisor)\nProf. David Kaeli\nProf. Kaushik Chowdhury \nAbstract:\nWe have witnessed the great success of deep learning in various domains\, such as the emerging large language models (LLMs) and Artificial General Intelligence (AGI)\, diffusion models for image and video generation\, and classic vision tasks including classification\, segmentation\, and detection. Built with linear\, convolution\, and attention blocks\, Deep Neural Networks (DNNs) play a vital role in this performance revolution. However\, powerful DNNs often call for tremendous computation and storage\, which hinders their wide adoption. For instance\, LLMs and diffusion models generally have billions of parameters and hundreds of GMACs\, which is prohibitive for edge deployment. As a result\, Efficient AI has become a hot research area. In this work\, through algorithm optimizations and co-design with hardware platforms\, we pursue the appealing features of edge or user-end AI: we cut down energy consumption\, shorten response latency\, shrink model storage size\, eliminate the need for cloud server access\, and protect user privacy. Firstly\, we systematically investigate quantization\, pruning\, and architecture search techniques for efficient vision backbones. We conduct a comprehensive study of quantization number systems and precisions\, and propose a novel mixed-scheme\, mixed-precision quantization technique to maximize hardware utilization and minimize performance loss. Regarding network pruning\, we propose a novel indicator-based approach\, named Pruning-as-Search\, that is fully differentiable and automatically decides pruning policies\, outperforming manual tuning methods in terms of performance and efficiency.
Further\, we address the long-standing issue of rigid network width design\, proposing a family of flexible-width pruned networks with minimal per-layer redundancy. As for architecture search\, we formulate a joint optimization objective over both size and latency\, releasing a series of efficient Vision Transformers\, named EfficientFormer (V1 and V2)\, that serve as strong vision backbones with MobileNet-level size and millisecond-level latency on mobile phones. \nSecondly\, we make dedicated optimizations for large-scale generative tasks\, i.e.\, Stable Diffusion (SD) for text-to-image generation\, which serves as pioneering work enabling their mobile deployment. With the proposed efficient architecture design and novel step distillation\, we shrink the generation latency of SD by an order of magnitude\, from more than 1 minute to 1-2 seconds to generate a 512×512 image\, while preserving the stunning generative quality. We extend our work to the even more challenging video generation task\, enabling 2-bit inference and single-step adversarial distillation to speed up video diffusion models by an order of magnitude.
URL:https://ece.northeastern.edu/event/yanyu-li-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240813T140000
DTEND;TZID=America/New_York:20240813T150000
DTSTAMP:20260405T051933
CREATED:20240820T215923Z
LAST-MODIFIED:20240820T215923Z
UID:7182-1723557600-1723561200@ece.northeastern.edu
SUMMARY:Yufei Feng MS Thesis Defense
DESCRIPTION:Name:\nYufei Feng \nTitle:\nBeam Management in Operational 5G mmWave Networks \nDate:\n8/13/2024 \nTime:\n2:00:00 PM \nCommittee Members:\nProf. Dimitrios Koutsonikolas (Advisor)\nProf. Josep Jornet\nProf. Mallesham Dasari \nAbstract:\nDue to the directional nature of mmWave signal propagation\, beam management plays a critical role in the performance of 5G mmWave deployments. However\, the details of beam management in commercial deployments and its performance in real-world scenarios remain largely unknown. In this thesis\, we fill this gap by performing a comparative measurement study of the beam management procedures of two major US operators in Boston\, MA. We study a number of beamforming parameters\, including beamwidth\, number of beams\, and beam switching delay\, and their impact on performance\, and we explore the interplay between beam management and rate adaptation. We also investigate\, for the first time\, Rx beam management on the UE side. Finally\, we study the beam tracking performance and the quality of the selected beams for the two operators.
URL:https://ece.northeastern.edu/event/yufei-feng-ms-thesis-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240812T100000
DTEND;TZID=America/New_York:20240812T110000
DTSTAMP:20260405T051933
CREATED:20240820T220016Z
LAST-MODIFIED:20240820T220016Z
UID:7184-1723456800-1723460400@ece.northeastern.edu
SUMMARY:Gözde Özcan PhD Dissertation Defense
DESCRIPTION:Name:\nGözde Özcan \nTitle:\nLearning and Optimizing Set Functions \nDate:\n8/12/2024 \nTime:\n10:00:00 AM \nLocation:\nEXP 601\nCommittee Members:\nProf. Stratis Ioannidis (Advisor)\nProf. Jennifer Dy\nProf. Evimaria Terzi \nAbstract:\nLearning and optimizing set functions play a crucial role in artificial intelligence research\, as various problems of interest can be characterized with set inputs and/or outputs. Submodular functions\, i.e.\, set functions with a diminishing returns property\, are an important subcategory of such functions. They naturally present themselves in applications such as sensor placement\, data summarization\, feature selection\, influence maximization\, hyper-parameter optimization\, and facility location\, to name a few. In many of these compelling problems\, the objective is to maximize a submodular function subject to matroid constraints\, which is known to be NP-hard. For problems of this nature\, the continuous greedy algorithm provides a (1 − 1/e)-approximation guarantee in polynomial time. It does so by estimating the gradient of the so-called multilinear relaxation of the objective function via sampling. However\, for the general class of submodular functions\, the number of samples required to achieve this theoretical guarantee can be computationally prohibitive. \nIn this dissertation\, we address deterministic submodular maximization problems with matroid constraints\, specifically those with objectives expressed through compositions of analytic and multilinear functions. We introduce a novel polynomial series estimator to approximate the multilinear relaxation of such functions and demonstrate that the sub-optimality introduced by our polynomial expansion can be minimized by increasing the polynomial order. By utilizing this estimator\, a variant of the continuous greedy algorithm achieves an approximation ratio close to (1 − 1/e) ≈ 0.63 through deterministic gradient estimation.
In numerical experiments\, our polynomial estimator outperforms the sampling estimator\, offering reduced errors in less time. \nWe extend our study to the stochastic submodular maximization setting with general matroid constraints\, where objectives are defined as expectations over submodular functions with an unknown distribution. Adapting polynomial estimators to this context reduces the variance of the gradient estimation while introducing a controlled bias term. For several notable stochastic submodular maximization problems\, we demonstrate that this bias decays exponentially with the degree of our polynomial approximators. Furthermore\, for monotone functions\, a stochastic variant of the continuous greedy algorithm attains an approximation ratio (in expectation) close to (1 − 1/e) ≈ 0.63 using these polynomial estimators. Our experimental results validate the advantages of our approach across synthetic and real-life datasets. \nFinally\, we turn our attention to learning set functions under a so-called optimal subset oracle setting. A recent approach approximates the underlying utility function with an energy-based model\, which yields iterations of fixed-point update steps during mean-field variational inference. However\, these fixed-point iterations are not guaranteed to converge\, and as the number of iterations increases\, automatic differentiation quickly becomes computationally prohibitive due to the size of the Jacobians stacked during backpropagation. We address these challenges by examining the convergence conditions of the fixed-point iterations and by using implicit differentiation instead of automatic differentiation. We empirically demonstrate the efficiency of our method on synthetic and real-world subset selection applications.
URL:https://ece.northeastern.edu/event/gozde-ozcan-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240809T110000
DTEND;TZID=America/New_York:20240809T120000
DTSTAMP:20260405T051933
CREATED:20240820T221215Z
LAST-MODIFIED:20240820T221215Z
UID:7192-1723201200-1723204800@ece.northeastern.edu
SUMMARY:Yifan Gong PhD Dissertation Defense
DESCRIPTION:Name:\nYifan Gong \nTitle:\nTowards Energy-Efficient Deep Learning for Sustainable AI \nDate:\n8/9/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Yanzhi Wang (Advisor) \nProf. David R. Kaeli \nProf. Xue Lin \nProf. Huaizu Jiang\nProf. Stratis Ioannidis \nAbstract:\nThe rapid advancements in deep learning (DL) and artificial intelligence (AI) have led to transformative applications across various domains\, such as community virtual reality experiences\, autonomous systems\, and climate change prediction. Edge devices\, including mobile and embedded systems\, play a vital role in carrying these applications\, facilitating the widespread adoption of machine intelligence. Alongside the great success of DL and AI\, however\, comes huge energy consumption for both training and inference. With the breakthrough of large-scale models for AI-generated content (AIGC)\, such as large language models and diffusion models\, the energy consumption issue intensifies\, creating an urgent need for sustainable AI solutions. In this talk\, I will discuss how to facilitate deep learning on various edge devices in an energy-efficient manner toward the goal of sustainable AI. Specifically\, I will start by introducing my two system-level approaches to tackling the challenge. The first is a bottom-up approach that conducts AI-algorithm-aware efficient system design. The second is a top-down approach that achieves hardware-driven efficient AI algorithm design. Then\, I will share my recent work addressing the efficiency issues of large-scale models. Finally\, I will show applications of my methods and point to future directions.
URL:https://ece.northeastern.edu/event/yifan-gong-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240808T150000
DTEND;TZID=America/New_York:20240808T160000
DTSTAMP:20260405T051933
CREATED:20240820T221121Z
LAST-MODIFIED:20240820T221121Z
UID:7190-1723129200-1723132800@ece.northeastern.edu
SUMMARY:Peiyan Dong PhD Dissertation Defense
DESCRIPTION:Name:\nPeiyan Dong \nTitle:\nSoftware-Hardware Co-Design: Towards Ultimate Efficiency in Deep Learning Acceleration \nDate:\n8/8/2024 \nTime:\n3:00:00 PM \nCommittee Members:\nProf. Yanzhi Wang (Advisor) \nProf. David R. Kaeli \nProf. Devesh Tiwari\nProf. Cheng Tan \nAbstract:\nAs AI techniques continue to advance\, the efficient deployment of deep neural networks on resource-constrained devices becomes increasingly appealing yet challenging. Simultaneously\, the proliferation of powerful AI technologies has raised significant concerns about sustainability and fairness\, demanding increased attention from the community. This talk presents two novel software-hardware co-designs for improving the efficiency and sustainability of deep learning models. The first part introduces HeatViT\, a hardware-efficient adaptive token pruning framework for Vision Transformers (ViTs) on embedded FPGAs\, which achieves significant speedup at similar model accuracy compared to the state-of-the-art. HeatViT is the first end-to-end ViT accelerator on embedded FPGAs and the first to achieve practical speedup through data-level compression. The second part presents PackQViT and Agile-Quant\, a paradigm for the efficient implementation of transformer-based models via sub-8-bit packed quantization and SIMD-based optimization of computing kernels. Our framework achieves better task performance than state-of-the-art ViTs and LLMs\, with significant acceleration on edge processors such as mobile CPUs\, Raspberry Pi\, and RISC-V. This work not only marks the first successful implementation of an LLM on the edge but also addresses the previous limitation where edge processors struggled to handle sub-8-bit computations efficiently. At the conclusion of the presentation\, the speaker will discuss today’s challenges related to AI sustainability and fairness and outline her research plans aimed at addressing these issues.
URL:https://ece.northeastern.edu/event/peiyan-dong-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240807T110000
DTEND;TZID=America/New_York:20240807T120000
DTSTAMP:20260405T051933
CREATED:20240820T220213Z
LAST-MODIFIED:20240820T220213Z
UID:7186-1723028400-1723032000@ece.northeastern.edu
SUMMARY:Kubra Alemdar PhD Dissertation Defense
DESCRIPTION:Name:\nKubra Alemdar \nTitle:\nOvercoming and Engineering Wireless Signals for Communication and Computation \nDate:\n8/7/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Kaushik Chowdhury (Advisor)\nProf. Josep Jornet\nProf. Marvin Onabajo \nAbstract:\nThe phenomenal growth of connected devices\, especially the rapid expansion of IoT networks\, and the increasing demand for wireless services are the main driving forces behind the evolution of wireless technologies. However\, the realization of such technologies requires a radical transformation of existing infrastructures to satisfy the needs of changing wireless environments. The main limitation in delivering these systems stems from the vast diversity in their demands and constraints. To address this limitation\, this dissertation shows how wireless signals\, and their interaction with and within the wireless propagation domain\, can be used as communication or computational tools that enable us to achieve certain novel tasks. Specifically\, we i) build cross-functionality architectures to engineer the wireless channel to a) enable the operation of emerging technologies and b) demonstrate a new paradigm for computing with wireless signals\, and ii) intelligently shape the wireless channel to create reliable communication links. This dissertation presents experimentally validated software-hardware systems with thorough analysis\, delivering the following key advancements with distinct contributions: \nFirst\, we present an innovative physical layer solution for distributed networks that provides over-the-air (OTA) clock synchronization\, known as RFClock\, to overcome the hurdle of implementing fine-grained synchronization for emerging technologies. We first develop the theory for such precision synchronization and then implement it in a custom design compatible with commercial-off-the-shelf (COTS) software-defined radios (SDRs).
We compare the performance of RFClock with popular wired and GPS-based hardware solutions\, both in terms of clock performance and impact on distributed beamforming. \nNext\, we propose two novel approaches utilizing reconfigurable intelligent surfaces (RISs) to ensure reliable connectivity in wireless networks by controlling the propagation environment: i) we present a RIS-based spatio-temporal approach to enhance link reliability for IoT devices whose sensors are small form-factor\, single-antenna designs in a rich multipath environment. We demonstrate the design of the RIS and how it can effectively perturb the environment\, generating multiple wireless propagation channels and achieving the performance of a multi-antenna receiver over a Single-Input Single-Output (SISO) link. We compare the performance of the system with a multi-antenna receiver in terms of channel hardening and outage probability. ii) We introduce REMARKABLE\, an online-learning-based adaptive beam selection strategy for robot connectivity that trains a kernelized multi-armed bandit (MAB) model directly in the real-world setting of a factory floor. We show how RISs with passive reflective elements can beamform towards target robots\, providing a solution to the problem of adaptive beam selection under dynamic channel conditions. We experimentally demonstrate that REMARKABLE achieves a significant reduction in beam selection time compared to classical approaches\, and enables adaptive beam selection in mobility settings. \nFinally\, we introduce AirFC\, a system harnessing the capability of OTA computation to run inference on a neural network (NN) consisting of a set of fully connected (FC) layers by leveraging multi-antenna systems. We experimentally demonstrate and validate that such computation is sufficiently accurate compared to its digital counterpart.
URL:https://ece.northeastern.edu/event/kubra-alemdar-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240806T100000
DTEND;TZID=America/New_York:20240806T110000
DTSTAMP:20260405T051933
CREATED:20240820T221311Z
LAST-MODIFIED:20240820T221324Z
UID:7194-1722938400-1722942000@ece.northeastern.edu
SUMMARY:Malith Jayaweera PhD Dissertation Defense
DESCRIPTION:Name:\nMalith Jayaweera \nTitle:\nEnergy-Aware Transformations for Affine Programs on GPUs \nDate:\n8/6/2024 \nTime:\n10:00:00 AM\nCommittee Members:\nProf. David Kaeli (Co-advisor)\nProf. Yanzhi Wang (Co-advisor)\nDr. Norman Rubin\nProf. Martin Kong (Ohio State University) \nAbstract:\nGraphics Processing Units (GPUs) have been increasingly used to accelerate workloads ranging from high performance computing to machine learning. Development of high-level programming languages\, improved compilers\, and runtime drivers have helped to accelerate the widespread adoption of GPUs. Given the wider adoption and ever-increasing computing capabilities\, the power consumption of GPUs is quickly becoming a critical factor. Furthermore\, the GPU micro-architecture differs from vendor to vendor\, and even between hardware generations of the same vendor. Also\, program variants with similar performance could differ in energy consumption due to the difference in utilization of GPU resources such as Streaming Multiprocessors (SMs) or memory. Despite performance improvements in compilation techniques\, energy-aware code generation for heterogeneous GPUs has not been aggressively explored. \nIn this dissertation\, we first identify the potential for energy-aware compilation techniques for GPUs. Next\, we use these insights to study loop tiling\, which is a popular loop transformation that has been successfully applied to computational domains such as linear algebra\, deep neural networks and iterative stencils. We then propose an energy-aware tile size selection for affine programs to generate energy-efficient code targeting GPUs. \nWe also investigate the challenging problem of optimizing the scheduling of complex sparse tensor algebra and expressions on GPUs\, with a focus on maximizing parallelism utilization to unlock optimal performance. 
We perform a comprehensive examination of the search space for sparse tensor expression scheduling\, seeking to characterize the intricate inter-relationships between kernel characteristics\, GPU architecture\, and hardware constraints such as memory bandwidth limitations\, to inform optimal scheduling decisions.
URL:https://ece.northeastern.edu/event/malith-jayaweera-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240805T140000
DTEND;TZID=America/New_York:20240805T150000
DTSTAMP:20260405T051933
CREATED:20240820T221432Z
LAST-MODIFIED:20240820T221432Z
UID:7197-1722866400-1722870000@ece.northeastern.edu
SUMMARY:Joshua Groen PhD Proposal Review
DESCRIPTION:Name:\nJoshua Groen \nTitle:\nOptimizing and Securing Open RAN with Experimental System Validation \nDate:\n8/5/2024 \nTime:\n2:00:00 PM \nLocation:\nISEC 232 \nCommittee Members:\nProf. Kaushik Chowdhury (Advisor)\nProf. Stratis Ioannidis\nProf. Engin Kirda\nDr. Christopher Morrell \nAbstract:\n5G and beyond cellular networks promise remarkable advancements in bandwidth\, latency\, and connectivity\, with the emergence of Open Radio Access Network (Open RAN) representing a pivotal direction. O-RAN inherently supports machine learning (ML) for network operation control\, with RAN Intelligent Controllers (RICs) utilizing ML models developed by third-party vendors based on key performance indicators (KPIs) from geographically dispersed base stations or user equipment (UE). Realistic and robust datasets are crucial for developing these ML models. We collect a comprehensive 5G dataset using real-world cell phones across diverse scenarios and replicate this traffic within a full-stack srsRAN-based O-RAN framework on Colosseum\, the world’s largest radio frequency (RF) emulator. This process produces a robust\, O-RAN compliant KPI dataset reflecting real-world conditions\, enabling the training of ML models for traffic slice classification with high accuracy. \nThe O-RAN paradigm introduces cloud-based\, multi-vendor\, open\, and intelligent architectures\, enhancing network observability and reconfigurability. However\, this also expands the threat surface\, exposing components and ML infrastructure to cyberattacks. We examine O-RAN security\, focusing on the specifications\, architectures\, and intelligence proposed by the O-RAN Alliance. We identify threats\, propose solutions\, and experimentally demonstrate their effectiveness in defending O-RAN systems against cyberattacks\, offering a holistic and practical perspective on O-RAN security.
\nWe investigate the impact of encryption on two key O-RAN interfaces\, the E2 interface and the Open Fronthaul\, using a full-stack O-RAN Alliance-compliant implementation within the Colosseum network emulator and a production-ready\, Open RAN and 5G-compliant private cellular network. Our findings provide quantitative insights into the latency and throughput impacts of encryption protocols\, and we propose four fundamental principles for security by design in Open RAN systems. \nFinally\, we address the security of Time-Sensitive Networking (TSN) in O-RAN. The O-RAN framework encourages multi-vendor solutions but increases the exposure of the open fronthaul (FH) to security risks\, especially when deployed over third-party networks. Synchronization is crucial for reliable 5G links\, and attacks on synchronization mechanisms pose significant threats. We demonstrate the impact of spoofing and replay attacks on Precision Time Protocol (PTP) synchronization\, causing catastrophic failures in a production-ready O-RAN and 5G-compliant private cellular network. To counter these threats\, we design an ML-based monitoring solution that detects various malicious attacks with over 97.5% accuracy\, and we outline additional security measures for the O-RAN environment.
URL:https://ece.northeastern.edu/event/joshua-groen-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240730T133000
DTEND;TZID=America/New_York:20240730T143000
DTSTAMP:20260405T051933
CREATED:20240820T221611Z
LAST-MODIFIED:20240820T221611Z
UID:7199-1722346200-1722349800@ece.northeastern.edu
SUMMARY:Kyle Lockwood PhD Dissertation Defense
DESCRIPTION:Name:\nKyle Lockwood \nTitle:\nLeveraging Submovements for Prediction and Trajectory Planning in Human-Robot Handover \nDate:\n7/30/2024 \nTime:\n1:30:00 PM \nLocation:\nISEC 532 \nCommittee Members:\nProf. Deniz Erdogmus (Advisor)\nProf. Eugene Tunik (Co-Advisor)\nProf. Mathew Yarossi\nProf. Tales Imbiriba \nAbstract:\nCollaborative physical interactions between humans and robots pose difficult modeling challenges. To create natural interactions\, engineers must consider human inference of intent\, anticipation of action\, and coordination of movement. Humans handle these challenges effortlessly when interacting with one another\, but they are very difficult to overcome in robot implementations. Although human-human handover is a seemingly simple task\, it requires a complex perception-action coupling to determine when and where the handover will happen\, as well as to choose an appropriate trajectory to receive the object. Critically\, modeling human-robot handover requires incorporating knowledge about human inference and trajectory planning to obtain seamless interactions. Despite recent advancements in sensing and control\, human-robot handovers are far from approaching the fluidity and flexibility of human-human collaboration. Existing predictive models applied to human-robot handover often utilize classification methods and other approaches that suffer in accuracy when encountering noisy human trajectories not captured during training. To address these challenges\, this work presents two models that act as robotic surrogates for human inference and trajectory planning in a handover task. This approach delivers promising results while remaining grounded in a physiologically meaningful feature of human motion: Gaussian-shaped submovements in velocity profiles.
This thesis analyzes human-human handover kinematics to establish a baseline for model evaluation and investigate the influence of handover role\, it presents models for human inference and trajectory planning\, and it applies the inference model in human-robot handover experiments. \n 
URL:https://ece.northeastern.edu/event/kyle-lockwood-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T150000
DTEND;TZID=America/New_York:20240729T170000
DTSTAMP:20260405T051933
CREATED:20240820T222030Z
LAST-MODIFIED:20240820T222030Z
UID:7203-1722265200-1722272400@ece.northeastern.edu
SUMMARY:Yunus Bicer PhD Dissertation Defense
DESCRIPTION:Name:\nYunus Bicer \nTitle:\nNovel Methods for Electromyographic Hand Gesture Recognition: Expressive Gesture Sets with Minimal Calibration \nDate:\n7/29/2024 \nTime:\n3:00:00 PM \nLocation:\nISEC 632\nCommittee Members:\nProf. Deniz Erdogmus (Advisor)\nProf. Mathew Yarossi (Co-Advisor)\nProf. Eugene Tunik\nProf. Tales Imbiriba \nAbstract:\nGesture recognition\, the process of interpreting hand gestures through computational algorithms and devices\, is essential for enhancing human-computer interaction (HCI). This thesis focuses on surface electromyography (sEMG)-based gesture recognition\, where the signals generated by muscles are analyzed to identify hand gestures. sEMG systems provide more natural and intuitive interactions compared to traditional input methods and hold significant potential in assistive technology\, prosthetics\, and immersive environments such as virtual and augmented reality. Despite these advantages\, sEMG-based methods face challenges including user-specific variability in signals\, limited gesture expressivity\, and the need for extensive calibration time. This research aims to address these issues by proposing novel methods for minimizing calibration time and expanding the expressivity of gesture recognition capabilities. Key innovations include a real-time probability feedback mechanism to facilitate user adaptation and techniques to recognize a wider range of gestures with minimal training data. This work seeks to enhance the usability and versatility of sEMG-based systems\, making them more accessible and effective for various applications.
URL:https://ece.northeastern.edu/event/yunus-bicer-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T120000
DTEND;TZID=America/New_York:20240729T133000
DTSTAMP:20260405T051933
CREATED:20240820T221948Z
LAST-MODIFIED:20240820T221948Z
UID:7201-1722254400-1722259800@ece.northeastern.edu
SUMMARY:Shijie Yan PhD Proposal Review
DESCRIPTION:Name:\nShijie Yan \nTitle:\nEfficient Monte Carlo light transport algorithms in complex scattering media \nDate:\n7/29/2024 \nTime:\n12:00:00 PM \nCommittee Members:\nProf. Qianqian Fang (Advisor)\nProf. Steven Jacques\nProf. David Kaeli\nProf. Edwin Marengo \nAbstract:\nModeling light-tissue interactions is crucial for many optical imaging modalities\, for which the Monte Carlo (MC) method has been widely recognized as the gold standard. Despite dramatic speed improvements gained via the use of graphics processing units (GPUs)\, MC simulations remain computationally intensive. Efficient and accurate MC algorithms are needed to further consider physiologically realistic tissue models\, especially for emerging optical imaging techniques. Voxel-based MC (VMC) and mesh-based MC (MMC) are two major MC methods for modeling complex tissues\, with their respective strengths and weaknesses. While VMC offers higher computational efficiency due to its simple data structure\, its accuracy suffers from terraced boundary shapes\, especially in low-scattering media; in contrast\, MMC offers improved boundary fidelity but can be slow and memory-intensive\, particularly at high mesh density. Furthermore\, emerging wide-field diffuse optical imaging systems using structured light require more efficient modeling to handle numerous illumination patterns. Additionally\, niche applications such as polarized light imaging could also benefit from many of the recent advances in modern MC simulations\, such as GPU acceleration and the handling of complex heterogeneous media. \nThis proposal aims to push the frontiers of modern MC simulation algorithms to fundamentally enhance their utility in diverse applications. 
To reduce the staircase effect in VMC\, we have developed a hybrid MC algorithm\, named split-voxel MC (SVMC)\, where sub-voxel oblique surfaces are extracted using a marching-cubes algorithm and incorporated into a memory-efficient voxelated data structure. SVMC allows VMC to handle curved surfaces while remaining computationally efficient. A GPU-accelerated marching-cubes algorithm was also developed to further accelerate SVMC domain preprocessing. On the other hand\, to further improve MMC computational efficiency\, a dual-grid MMC (DMMC) algorithm was developed to perform fast ray-tracing inside a coarse tetrahedral mesh while saving fluence data over a dense voxelated grid\, simultaneously achieving improved speed and output accuracy. To accommodate the increasing need to model wide-field pattern-based sources\, we have developed a “photon sharing” MC algorithm that performs simulations of all illumination and detection patterns in parallel\, improving computational speed by an order of magnitude. Additionally\, we have developed a GPU-accelerated massively-parallel algorithm capable of modeling Mie scattering from spherical particles in three-dimensional media for polarized light imaging\, achieving a nearly 1000-fold speedup compared to a sequential implementation. \nLastly\, we have also investigated a hardware-accelerated MMC algorithm using the NVIDIA OptiX ray-tracing framework\, leveraging modern GPU ray-tracing (RT) cores extensively optimized for graphics rendering. Preliminary results demonstrate comparable accuracy and significantly improved simulation speed compared to conventional tetrahedral MMC. \n 
URL:https://ece.northeastern.edu/event/shijie-yan-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T090000
DTEND;TZID=America/New_York:20240729T103000
DTSTAMP:20260405T051933
CREATED:20240820T222120Z
LAST-MODIFIED:20240820T222120Z
UID:7205-1722243600-1722249000@ece.northeastern.edu
SUMMARY:Ruyi Ding PhD Proposal Review
DESCRIPTION:Name:\nRuyi Ding \nTitle:\nTowards Robust and Secure Deep Learning: From Training through Deployment to Inference \nDate:\n7/29/2024 \nTime:\n9:00:00 AM \nCommittee Members:\nProf. Yunsi Fei (Advisor)\nProf. Aidong Ding\nProf. Lili Su \nAbstract:\nIn recent years\, deep learning has experienced rapid advancement\, leading to the development of numerous commercial deep neural network (DNN) models across diverse fields such as autonomous driving\, healthcare\, and recommendation systems. However\, this wide adoption has intensified concerns about AI security throughout a neural network’s lifecycle\, from training through deployment to inference. Various vulnerabilities have emerged\, threatening confidentiality\, privacy\, and intellectual property (IP) rights: poisoned training datasets facilitate privacy leakage and backdoor injection; after deployment\, models may be misused through unauthorized transfer learning\, a new form of IP infringement\, and weights and parameters are subject to side-channel-assisted model extraction attacks; during inference\, adversarial attacks may compromise DNN functionality\, causing misclassifications.\nThis dissertation addresses new security challenges across the neural network lifecycle through several novel contributions. We identify a new poisoning vulnerability in graph neural networks\, where injecting poisoned nodes exacerbates link privacy leakage\, allowing attackers to steal adjacency information from private training data and highlighting the necessity of robust AI training. To prevent model misuse after deployment\, we introduce EncoderLock and Non-transferable Pruning\, employing innovative training schemes and pruning methods to restrict the malicious use of pre-trained models through transfer learning\, effectively implementing applicability authorization. Towards secure deep learning implementations\, we adopt a software-hardware co-design approach to address DNN vulnerabilities. 
Specifically\, we leverage the electromagnetic emanations from DNN accelerators in a new approach called EMShepherd\, which detects adversarial examples (AE) on edge devices in a ‘black-box’ manner. To protect deployed DNNs against side-channel-based weight-stealing attacks\, we develop PixelMask\, which leverages the characteristics of DNNs for side-channel defense by masking out unimportant inputs and dropping related operations to obfuscate side-channel signals. Lastly\, we explore the use of Trusted Execution Environments (TEE) to safeguard model weights and data privacy against model stealing and membership inference attacks.\nThis proposal identifies key challenges of robust and secure deep learning\, tackles vulnerabilities at various stages of the AI lifecycle\, and provides comprehensive protection mechanisms\, from securing the training process to safeguarding deployed models\, paving the way for more resilient and reliable AI technologies in real-world applications.
URL:https://ece.northeastern.edu/event/ruyi-ding-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240725T140000
DTEND;TZID=America/New_York:20240725T153000
DTSTAMP:20260405T051933
CREATED:20240820T222301Z
LAST-MODIFIED:20240820T222301Z
UID:7209-1721916000-1721921400@ece.northeastern.edu
SUMMARY:Rui Luo PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nRui Luo \nTitle:\nShared Assistance Methods for Human-in-the-loop Robot Systems \nDate:\n7/25/2024 \nTime:\n2:00:00 PM \nLocation:\nEXP 701A \nCommittee Members:\nProf. Taskin Padir (Advisor)\nProf. John Peter Whitney\nProf. Yanzhi Wang\nDr. Mark Zolotas \nAbstract:\nFully autonomous robot systems\, though highly desired\, face substantial theoretical and practical challenges when deployed in dynamic environments where humans coexist. To tackle this challenge\, this thesis investigates the concept of human-in-the-loop (HITL) systems\, which incorporate human input to enhance robot functionality. HITL systems offer a pragmatic alternative\, combining human versatility with robotic precision. \nThis research aims to address critical questions in one specific type of HITL system that prioritizes the dominant role of the human within the system\, positioning the robot primarily in an assistive capacity that adheres to human commands to facilitate the achievement of a shared goal. It explores two primary paradigms of shared assistance methods\, Shared Control (SC) and Shared Autonomy (SA)\, and discusses the system designs as well as specific algorithms to implement the three critical components of a HITL system: human intention estimation\, modulation of human inputs and robot autonomy\, and the human-robot communication channel. \nDue to the variety of use cases and their specific challenges\, four distinct HITL systems are developed and analyzed to exemplify how shared assistance methods can be incorporated to assist human operators: an assistive wheelchair for indoor navigation\, a human-centered robot system design for industrial tasks\, a mobile bi-manual robot for tele-manipulation\, and a VR-based customizable shared control system for fine teleoperation. Although each system represents a comprehensive robotic solution\, the research contributions of each work vary. 
\nIn the assistive wheelchair navigation system\, the focus was on human intent estimation via a low-throughput interface utilizing a recursive Bayesian filter\, with significant efforts dedicated to developing a real-time user interface serving as the communication channel. In the human-robot collaboration system for industrial settings\, the emphasis was on human state estimation through camera-based posture tracking and exploring the interplay between robot behavior and human ergonomics. For the two teleoperation systems\, the primary focus was on the real-time modulation of human inputs and robot autonomy to aid in achieving dexterous manipulation tasks. A novel VR-based user interface was developed to enable users to customize the level of robotic autonomous assistance. Each system was validated through a pilot study involving 10-20 human subjects\, accompanied by extensive data analysis to provide insights into designing HITL systems for various applications. \nIn conclusion\, this thesis contributes to a deeper understanding of HITL systems\, highlighting their potential to enhance human productivity\, ergonomics\, and quality of life in various applications through concrete examples. The integration of human intent estimation and real-time shared control methods into robotic systems demonstrates the feasibility and benefits of HITL approaches. Our extensive experimental analysis underscores the critical role of human feedback in designing practical HITL systems that can be deployed in real-world scenarios.
URL:https://ece.northeastern.edu/event/rui-lou-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240723T210000
DTEND;TZID=America/New_York:20240723T220000
DTSTAMP:20260405T051933
CREATED:20240820T222749Z
LAST-MODIFIED:20240820T222749Z
UID:7217-1721768400-1721772000@ece.northeastern.edu
SUMMARY:Zhenglun Kong PhD Dissertation Defense
DESCRIPTION:Name:\nZhenglun Kong \nTitle:\nTowards Efficient Deep Learning for Vision and Language Applications \nDate:\n7/23/2024 \nTime:\n9:00:00 PM \nCommittee Members:\nProf. Yanzhi Wang (Advisor)\nProf. David Kaeli\nProf. Dakuo Wang\nProf. Weiyan Shi \nAbstract:\nMachine learning and AI have been advancing rapidly in recent years\, leading to numerous applications across diverse fields such as autonomous vehicles\, entertainment\, science\, healthcare\, and assistive technologies—significantly enhancing daily life. However\, this advancement has been accompanied by a significant increase in the size of deep neural network (DNN) models\, which poses considerable economic challenges. The substantial costs associated with the training\, inference\, and deployment of large vision and language models require extensive computational resources and time\, proving especially taxing for smaller entities and individuals. This also complicates deployment on resource-constrained devices and in areas with limited infrastructure. \nA major challenge is deploying AI models on devices with limited capacity\, such as wearables\, sensors\, and mobile phones. These edge devices\, often operating offline and requiring real-time processing\, are critical for many applications but struggle to support large models. My dissertation research addresses these pressing issues with the aim of enabling the practical implementation of AI. We ensure the effectiveness of AI models while adapting them for use in constrained environments by tackling fundamental AI challenges from four angles: \n1. Managing Massive Computation: We introduce a novel token pruning framework that reduces the latency of Vision Transformers (ViT) by up to 41% compared to existing works on mobile devices. Additionally\, we propose a quantization framework for large language models (LLMs)\, achieving an on-device speedup of up to 2.55x compared to FP16 counterparts across multiple edge devices. \n2. 
Mitigating Training Costs: We develop fast\, accurate\, and memory-efficient training methods by utilizing a hierarchical data redundancy reduction scheme\, which achieves up to a 40% speedup in ViT pre-training with minimal accuracy loss. \n3. Merging Multiple Models: We propose an efficient way to merge multiple LLMs\, yielding a more advanced and robust LLM while maintaining the model size and reducing knowledge interference. \n4. Co-designing Speed-aware Deep Neural Networks: We consider memory access cost\, the degree of parallelism\, and practical latency in the design of 2D and 3D object detection models for practical deployment. By addressing these areas\, my research aims to enable the effective and efficient use of AI models in constrained environments\, ensuring their practical implementation across various applications. \n 
URL:https://ece.northeastern.edu/event/zhenglun-kong-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240723T113000
DTEND;TZID=America/New_York:20240723T123000
DTSTAMP:20260405T051933
CREATED:20240820T222406Z
LAST-MODIFIED:20240820T222406Z
UID:7211-1721734200-1721737800@ece.northeastern.edu
SUMMARY:Andrea Lacava PhD Proposal Review on 7/23
DESCRIPTION:Name:\nAndrea Lacava \nTitle:\nEnabling Intelligent nextG Cellular Networks through the Open RAN Architecture \nDate:\n7/23/2024 \nTime:\n11:30:00 AM \nLocation:\nEXP 501 \nCommittee Members:\nProf. Tommaso Melodia (Advisor)\nProf. Francesca Cuomo (Advisor)\nProf. Stefano Basagni\nProf. Ioannis Chatzigiannakis \nAbstract:\nThe 5th generation (5G) of cellular networks and beyond will support heterogeneous use cases at an unprecedented scale\, thus demanding automated control and optimization of network functionalities\, customized to the needs of individual users. However\, achieving such fine-grained control over the Radio Access Network (RAN) is unfeasible with the current cellular architecture. \nTo bridge this gap\, the Open RAN paradigm and its specification introduce an “open” architecture with abstractions that facilitate closed-loop control and enable data-driven\, intelligent optimization of the RAN at the user level. This thesis focuses on the design and development of system-level solutions to enable intelligent control in the next generation of cellular networks through the Open RAN architecture. The main research areas explored in this thesis include (i) the design and evaluation of platforms for the creation\, dataset generation\, and testing of Open RAN architecture solutions; (ii) the development of Artificial Intelligence (AI)/Machine Learning (ML) models for various deployments and networking scenarios; and (iii) innovative methodologies for agile spectrum\, infrastructure\, and AI management within Open RAN. Among the significant contributions of this thesis are ns-O-RAN\, the first open-source simulation platform that integrates a functional 5G protocol stack in Network Simulator 3 (ns-3) with an O-RAN-compliant E2 interface\, and the pioneering architectural design and implementation of dApps\, real-time controllers for the O-RAN architecture. 
Furthermore\, the solutions proposed in this thesis are leveraged to investigate various network optimization use cases deemed critical in cellular networks. The results demonstrate that our approach outperforms traditional Radio Resource Management (RRM) heuristics\, enhancing overall RAN conditions at scale in both simulations and state-of-the-art experimental testbeds. \n 
URL:https://ece.northeastern.edu/event/andrea-lacava-phd-proposal-review-on-7-23/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240722T130000
DTEND;TZID=America/New_York:20240722T143000
DTSTAMP:20260405T051933
CREATED:20240820T222500Z
LAST-MODIFIED:20240820T222500Z
UID:7213-1721653200-1721658600@ece.northeastern.edu
SUMMARY:Miead Tehrani Moayyed PhD Dissertation Defense
DESCRIPTION:Name:\nMiead Tehrani Moayyed \nTitle:\nRF Channel Models for Static and Mobile Scenarios: From Simulations to Models for Large-scale Emulations and Digital Twins \nDate:\n7/22/2024 \nTime:\n1:00:00 PM \nLocation:\nRoom: EXP-601A \nCommittee Members:\nProf. Stefano Basagni (Advisor)\nProf. Tommaso Melodia\nProf. Milica Stojanovic \nAbstract:\nThe extremely high data rates provided by communications at higher frequency bands\, such as mmWave\, can address the unprecedented demands of next-generation wireless networks. However\, several impairments limit wireless coverage at higher frequencies\, necessitating accurate models of wireless scenarios and large-scale testing to evaluate and realize the potential of these new technologies. Accurate large-scale simulations and wireless network emulators now offer a time- and cost-effective solution for performing these tests in a lab before field deployment. This dissertation focuses on the modeling\, calibration\, and validation of realistic RF scenarios for wireless network emulation at scale. The contributions of this work include: (i) investigating the characteristics of the wireless channel at higher frequencies (mmWave) and evaluating the performance of mmWave communications on top of the NR standard for 5G cellular networks; (ii) developing a streamlined framework to create realistic RF scenarios with mobility support for Finite Impulse Response (FIR)-based emulators like Colosseum\, starting from rich inputs such as precise ray tracing methods or real-field measurements; and (iii) creating an accurate AI-assisted propagation model that integrates joint measurements and simulations\, achieving the desired accuracy and reasonable computational requirements for real-time Digital Twin (DT) wireless networks. In particular: \n(i) We derive channel propagation models via ray tracing simulations for mmWave transmissions with applications to V2X communications. 
We analyze aspects related to blockage modeling\, the effects of antenna beamwidth\, beam alignment\, and multipath fading in urban scenarios\, emphasizing the importance of capturing diffuse scattered rays for improved large-scale and small-scale radio channel propagation models. Furthermore\, we compare the performance of mmWave 5G NR with the 4G Long-Term Evolution (LTE) standard in a realistic environment and demonstrate the impact of MIMO technology on improving the performance of 5G NR cellular networks. As transmitted radio signals are received as clusters of multipath rays\, identifying these clusters provides better spatial and temporal characteristics of the channel. We address the clustering process and its validation across a wide range of frequencies in the mmWave spectrum below 100 GHz. We analyze how the clustering solution changes with narrower-beam antennas and provide a comparison of the cluster characteristics for different types of antennas. \n(ii) Our framework for modeling wireless scenarios for large-scale emulators optimally scales down the large set of channel inputs to the fewer parameters allowed by the emulator using efficient clustering techniques and Channel Impulse Response (CIR) re-sampling. We demonstrate the effectiveness of the proposed framework by modeling realistic scenarios for Colosseum\, starting with rich input from commercial-grade ray tracing software\, Wireless InSite (WI) by Remcom. To support mobility\, we implement a mobile channel simulator on top of the WI ray-tracer\, consisting of two steps: (a) spatially sampling the mobile channels using the ray-tracer\, and (b) parsing the ray tracing outputs to extract the channels for each time instant of emulation. We also develop a Software-Defined Radio (SDR)-based channel sounder to precisely characterize emulated RF channels. The sounder framework is fully containerized\, scalable\, and automated to capture the gains and delays of the CIR taps. 
\n(iii) We extend these efforts to develop the first Digital Twins for Mobile Networks (DTMN) on Colosseum\, using the RF testbed Arena as a use case. This use case demonstrates the scope and capabilities of Colosseum as a DT\, providing the research community with a set of tools to replicate real-world environments. We compare key network performance metrics\, namely throughput and SINR\, of the Arena/Colosseum DTMN to validate the fidelity of our twinning process. Furthermore\, we present an AI-assisted propagation model to generate realistic\, real-time\, and scalable scenarios for DTMNs. This model seamlessly integrates measurements with ray tracing\, providing a high-resolution\, realistic channel model. We study the computational complexity and configuration trade-offs associated with ray tracing for high-fidelity prediction\, generating a large dataset to train this enhanced AI model. Our proof of concept highlights the accuracy and generalization capabilities of our AI model across previously unseen transmitter (TX) locations and unfamiliar environments\, outperforming state-of-the-art approaches and achieving significant improvements in accuracy. We analyze the computational complexity of our AI model\, comparing it to high-fidelity ray tracing. Profiling reveals a three-order-of-magnitude acceleration\, enabling real-time propagation prediction with reasonable accuracy. We explore key ray tracing parameters contributing to the discrepancy between measurements and simulations and demonstrate the integration of measurements into channel prediction\, thereby calibrating the model.
URL:https://ece.northeastern.edu/event/miead-tehrani-moayyed-phd-dissertation-defense/
END:VEVENT
END:VCALENDAR