BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical & Computer Engineering - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical & Computer Engineering
X-ORIGINAL-URL:https://ece.northeastern.edu
X-WR-CALDESC:Events for Department of Electrical & Computer Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240808T150000
DTEND;TZID=America/New_York:20240808T160000
DTSTAMP:20260414T115913Z
CREATED:20240820T221121Z
LAST-MODIFIED:20240820T221121Z
UID:7190-1723129200-1723132800@ece.northeastern.edu
SUMMARY:Peiyan Dong PhD Dissertation Defense
DESCRIPTION:Name:\nPeiyan Dong \nTitle:\nSoftware-Hardware Co-Design: Towards Ultimate Efficiency in Deep Learning Acceleration \nDate:\n8/8/2024 \nTime:\n3:00:00 PM \nCommittee Members:\nProf. Yanzhi Wang (Advisor) \nProf. David R. Kaeli \nProf. Devesh Tiwari\nProf. Cheng Tan \nAbstract:\nAs AI techniques continue to advance\, the efficient deployment of deep neural networks on resource-constrained devices becomes increasingly appealing yet challenging. Simultaneously\, the proliferation of powerful AI technologies has raised significant concerns about sustainability and fairness\, demanding increased attention from the community. This talk presents two novel software-hardware co-designs for improving the efficiency and sustainability of deep learning models. The first part introduces HeatViT\, a hardware-efficient adaptive token pruning framework for Vision Transformers (ViTs) on embedded FPGAs\, which achieves significant speedup at similar model accuracy compared to the state-of-the-art. HeatViT is the first end-to-end ViT accelerator on embedded FPGAs and the first to achieve practical speedup through data-level compression. The second part presents PackQViT and Agile-Quant\, a paradigm for efficient implementation of transformer-based models using sub-8-bit packed quantization and SIMD-based optimization of computing kernels. Our framework achieves better task performance than state-of-the-art ViTs and LLMs with significant acceleration on edge processors such as mobile CPUs\, Raspberry Pi\, and RISC-V. This work not only marks the first successful implementation of an LLM on the edge but also addresses the previous limitation where edge processors struggled to efficiently handle sub-8-bit computations. At the conclusion of the presentation\, the speaker will discuss today’s challenges related to AI sustainability and fairness and outline her research plans aimed at addressing these issues. \n 
URL:https://ece.northeastern.edu/event/peiyan-dong-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240807T110000
DTEND;TZID=America/New_York:20240807T120000
DTSTAMP:20260414T115913Z
CREATED:20240820T220213Z
LAST-MODIFIED:20240820T220213Z
UID:7186-1723028400-1723032000@ece.northeastern.edu
SUMMARY:Kubra Alemdar PhD Dissertation Defense
DESCRIPTION:Name:\nKubra Alemdar \nTitle:\nOvercoming and Engineering Wireless Signals for Communication and Computation \nDate:\n8/7/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Kaushik Chowdhury (Advisor)\nProf. Josep Jornet\nProf. Marvin Onabajo \nAbstract:\nThe phenomenal growth of connected devices\, especially the rapid expansion of IoT networks\, and the increasing demand for wireless services are the main driving forces for the evolution of wireless technologies. However\, the realization of such technologies requires a radical transformation of existing infrastructures to satisfy the needs of changing wireless environments. The main limitation in delivering these systems stems from a vast diversity in their demands and constraints. To address this limitation\, this dissertation shows how wireless signals and their interaction with and within the wireless propagation domain can be used as communication or computational tools that enable us to achieve certain novel tasks. Specifically\, we build i) cross-functionality architectures to engineer the wireless channel to a) enable the operation of emerging technologies\, and b) demonstrate a new paradigm for computing with wireless signals\, and ii) intelligently shape the wireless channel to create reliable communication links. This dissertation presents experimentally validated software-hardware systems with thorough analysis\, delivering the following key advancements with distinct contributions: \nFirst\, we present an innovative physical-layer solution for distributed networks that provides over-the-air (OTA) clock synchronization\, known as RFClock\, to overcome the hurdle of implementing fine-grained synchronization for emerging technologies. We first develop the theory for such precision synchronization\, and then implement it in a custom design compatible with commercial-off-the-shelf (COTS) software-defined radios (SDRs). 
We compare the performance of RFClock with popular wired and GPS-based hardware solutions\, both in terms of clock performance and impact on distributed beamforming. \nNext\, we propose two novel approaches utilizing reconfigurable intelligent surfaces (RISs) to ensure reliable connectivity in wireless networks by controlling the propagation environment: i) we present a RIS-based spatio-temporal approach to enhance link reliability for IoT devices where sensors are small form-factor\, single-antenna designs in a rich multipath environment. We demonstrate the design of the RIS and how it can effectively perturb the environment\, generating multiple wireless propagation channels and achieving the performance of a multi-antenna receiver in a Single-Input Single-Output (SISO) link. We compare the performance of the system with a multi-antenna receiver in terms of channel hardening and outage probability. ii) We introduce REMARKABLE\, an online learning-based adaptive beam selection strategy for robot connectivity that trains a kernelized multi-armed bandit (MAB) model directly in the real-world setting of a factory floor. We show how RISs with passive reflective elements can create beamforming towards target robots\, and provide a solution to the problem of adaptive beam selection in dynamic channel conditions. We experimentally demonstrate that REMARKABLE achieves a significant reduction in beam selection time compared to classical approaches\, while supporting adaptive beam selection in mobility settings. \nFinally\, we introduce AirFC\, a system harnessing the capability of OTA computation to run inference on a neural network (NN) consisting of a set of fully connected (FC) layers by leveraging multi-antenna systems. We experimentally demonstrate and validate that such computation is accurate enough when compared to its digital counterpart. \n 
URL:https://ece.northeastern.edu/event/kubra-alemdar-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240806T100000
DTEND;TZID=America/New_York:20240806T110000
DTSTAMP:20260414T115913Z
CREATED:20240820T221311Z
LAST-MODIFIED:20240820T221324Z
UID:7194-1722938400-1722942000@ece.northeastern.edu
SUMMARY:Malith Jayaweera PhD Dissertation Defense
DESCRIPTION:Name:\nMalith Jayaweera \nTitle:\nEnergy-Aware Transformations for Affine Programs on GPUs \nDate:\n8/6/2024 \nTime:\n10:00:00 AM\nCommittee Members:\nProf. David Kaeli (Co-advisor)\nProf. Yanzhi Wang (Co-advisor)\nDr. Norman Rubin\nProf. Martin Kong (Ohio State University) \nAbstract:\nGraphics Processing Units (GPUs) have been increasingly used to accelerate workloads ranging from high-performance computing to machine learning. The development of high-level programming languages\, improved compilers\, and runtime drivers has helped accelerate the widespread adoption of GPUs. Given this wider adoption and ever-increasing computing capabilities\, the power consumption of GPUs is quickly becoming a critical factor. Furthermore\, the GPU micro-architecture differs from vendor to vendor\, and even between hardware generations of the same vendor. Moreover\, program variants with similar performance can differ in energy consumption due to differences in the utilization of GPU resources such as Streaming Multiprocessors (SMs) or memory. Despite performance improvements in compilation techniques\, energy-aware code generation for heterogeneous GPUs has not been aggressively explored. \nIn this dissertation\, we first identify the potential for energy-aware compilation techniques for GPUs. Next\, we use these insights to study loop tiling\, a popular loop transformation that has been successfully applied to computational domains such as linear algebra\, deep neural networks\, and iterative stencils. We then propose an energy-aware tile size selection scheme for affine programs to generate energy-efficient code targeting GPUs. \nWe also investigate the challenging problem of optimizing the scheduling of complex sparse tensor algebra and expressions on GPUs\, with a focus on maximizing parallelism utilization to unlock optimal performance. 
We perform a comprehensive examination of the search space for sparse tensor expression scheduling\, seeking to characterize the intricate inter-relationships between kernel characteristics\, GPU architecture\, and hardware constraints such as memory bandwidth limitations\, to inform optimal scheduling decisions.
URL:https://ece.northeastern.edu/event/malith-jayaweera-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240805T140000
DTEND;TZID=America/New_York:20240805T150000
DTSTAMP:20260414T115913Z
CREATED:20240820T221432Z
LAST-MODIFIED:20240820T221432Z
UID:7197-1722866400-1722870000@ece.northeastern.edu
SUMMARY:Joshua Groen PhD Proposal Review
DESCRIPTION:Name:\nJoshua Groen \n\nTitle:\nOptimizing and Securing Open RAN with Experimental System Validation \nDate:\n8/5/2024 \nTime:\n2:00:00 PM \nLocation:\nISEC 232 \nCommittee Members:\nProf. Kaushik Chowdhury (Advisor)\nProf. Stratis Ioannidis\nProf. Engin Kirda\nDr. Christopher Morrell \nAbstract:\n5G and beyond cellular networks promise remarkable advancements in bandwidth\, latency\, and connectivity\, with the emergence of the Open Radio Access Network (Open RAN) representing a pivotal direction. O-RAN inherently supports machine learning (ML) for network operation control\, with RAN Intelligent Controllers (RICs) utilizing ML models developed by third-party vendors based on key performance indicators (KPIs) from geographically dispersed base stations or user equipment (UE). Realistic and robust datasets are crucial for developing these ML models. We collect a comprehensive 5G dataset using real-world cell phones across diverse scenarios and replicate this traffic within a full-stack srsRAN-based O-RAN framework on Colosseum\, the world’s largest radio frequency (RF) emulator. This process produces a robust\, O-RAN-compliant KPI dataset reflecting real-world conditions\, enabling the training of ML models for traffic slice classification with high accuracy. \nThe O-RAN paradigm introduces cloud-based\, multi-vendor\, open\, and intelligent architectures\, enhancing network observability and reconfigurability. However\, this also expands the threat surface\, exposing components and ML infrastructure to cyberattacks. We examine O-RAN security\, focusing on specifications\, architectures\, and intelligence proposed by the O-RAN Alliance. We identify threats\, propose solutions\, and experimentally demonstrate their effectiveness in defending O-RAN systems against cyberattacks\, offering a holistic and practical perspective on O-RAN security. 
\nWe investigate the impact of encryption on two key O-RAN interfaces: the E2 interface and the Open Fronthaul\, using a full-stack O-RAN Alliance-compliant implementation within the Colosseum network emulator and a production-ready Open RAN and 5G-compliant private cellular network. Our findings provide quantitative insights into the latency and throughput impacts of encryption protocols\, and we propose four fundamental principles for security by design within Open RAN systems. \nFinally\, we address the security of Time-Sensitive Networking (TSN) in O-RAN. The O-RAN framework encourages multi-vendor solutions but increases the exposure of the open fronthaul (FH) to security risks\, especially when deployed over third-party networks. Synchronization is crucial for reliable 5G links\, with attacks on synchronization mechanisms posing significant threats. We demonstrate the impact of spoofing and replay attacks on Precision Time Protocol (PTP) synchronization\, causing catastrophic failures in a production-ready O-RAN and 5G-compliant private cellular network. To counter these threats\, we design an ML-based monitoring solution that detects various malicious attacks with over 97.5% accuracy\, and outline additional security measures for the O-RAN environment.
URL:https://ece.northeastern.edu/event/joshua-groen-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240730T133000
DTEND;TZID=America/New_York:20240730T143000
DTSTAMP:20260414T115913Z
CREATED:20240820T221611Z
LAST-MODIFIED:20240820T221611Z
UID:7199-1722346200-1722349800@ece.northeastern.edu
SUMMARY:Kyle Lockwood PhD Dissertation Defense
DESCRIPTION:Name:\nKyle Lockwood \nTitle:\nLeveraging Submovements for Prediction and Trajectory Planning in Human-Robot Handover \nDate:\n7/30/2024 \nTime:\n1:30:00 PM \nLocation:\nISEC 532 \nCommittee Members:\nProf. Deniz Erdogmus (Advisor)\nProf. Eugene Tunik (Co-Advisor)\nProf. Mathew Yarossi\nProf. Tales Imbiriba \nAbstract:\nCollaborative physical interactions between humans and robots pose difficult modeling challenges. To create natural interactions\, engineers must consider human inference of intent\, anticipation of action\, and coordination of movement. Humans handle these challenges effortlessly when interacting with one another\, but they are very difficult to overcome in robot implementations. Although human-human handover is a seemingly simple task\, it requires a complex perception-action coupling to determine when and where the handover will happen\, as well as choosing an appropriate trajectory to receive the object. Critically\, modeling human-robot handover requires incorporating knowledge about human inference and trajectory planning to obtain seamless interactions. Despite recent advancements in sensing and control\, human-robot handovers are far from approaching the fluidity and flexibility of human-human collaboration. Existing predictive models applied to human-robot handover often utilize classification methods and other approaches whose accuracy suffers when encountering noisy human trajectories not captured during training. To address these challenges\, this work presents two models that act as robotic surrogates for human inference and trajectory planning in a handover task. This approach delivers promising results while remaining grounded in a physiologically meaningful feature of human motion: Gaussian-shaped submovements in velocity profiles. 
This thesis analyzes human-human handover kinematics to establish a baseline for model evaluation and investigate the influence of handover role\, it presents models for human inference and trajectory planning\, and it applies the inference model in human-robot handover experiments. \n 
URL:https://ece.northeastern.edu/event/kyle-lockwood-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T150000
DTEND;TZID=America/New_York:20240729T170000
DTSTAMP:20260414T115913Z
CREATED:20240820T222030Z
LAST-MODIFIED:20240820T222030Z
UID:7203-1722265200-1722272400@ece.northeastern.edu
SUMMARY:Yunus Bicer PhD Dissertation Defense
DESCRIPTION:Name:\nYunus Bicer \nTitle:\nNovel Methods for Electromyographic Hand Gesture Recognition: Expressive Gesture Sets with Minimal Calibration \nDate:\n7/29/2024 \nTime:\n3:00:00 PM \nLocation:\nISEC 632 \nCommittee Members:\nProf. Deniz Erdogmus (Advisor)\nProf. Mathew Yarossi (Co-Advisor)\nProf. Eugene Tunik\nProf. Tales Imbiriba \nAbstract:\nGesture recognition\, the process of interpreting hand gestures through computational algorithms and devices\, is essential for enhancing human-computer interaction (HCI). This thesis focuses on surface electromyography (sEMG)-based gesture recognition\, where the signals generated by muscles are analyzed to identify hand gestures. sEMG systems provide more natural and intuitive interactions compared to traditional input methods and hold significant potential in assistive technology\, prosthetics\, and immersive environments such as virtual and augmented reality. Despite these advantages\, sEMG-based methods face challenges including user-specific variability in signals\, limited gesture expressivity\, and the need for extensive calibration time. This research aims to address these issues by proposing novel methods for minimizing calibration time and expanding the expressivity of gesture recognition capabilities. Key innovations include a real-time probability feedback mechanism to facilitate user adaptation and techniques to recognize a wider range of gestures with minimal training data. This work seeks to enhance the usability and versatility of sEMG-based systems\, making them more accessible and effective for various applications.
URL:https://ece.northeastern.edu/event/yunus-bicer-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T120000
DTEND;TZID=America/New_York:20240729T133000
DTSTAMP:20260414T115913Z
CREATED:20240820T221948Z
LAST-MODIFIED:20240820T221948Z
UID:7201-1722254400-1722259800@ece.northeastern.edu
SUMMARY:Shijie Yan PhD Proposal Review
DESCRIPTION:Name:\nShijie Yan \nTitle:\nEfficient Monte Carlo light transport algorithms in complex scattering media \nDate:\n7/29/2024 \nTime:\n12:00:00 PM \nCommittee Members:\nProf. Qianqian Fang (Advisor)\nProf. Steven Jacques\nProf. David Kaeli\nProf. Edwin Marengo \nAbstract:\nModeling light-tissue interactions is crucial for many optical imaging modalities\, for which the Monte Carlo (MC) method has been widely recognized as the gold standard. Despite dramatic speed improvements gained via the use of graphics processing units (GPUs)\, MC simulations remain computationally intensive. Efficient and accurate MC algorithms are needed to further consider physiologically realistic tissue models\, especially for emerging optical imaging techniques. Voxel-based MC (VMC) and mesh-based MC (MMC) are the two major MC methods for modeling complex tissues\, with their respective strengths and weaknesses. While VMC offers higher computational efficiency due to its simple data structure\, its accuracy suffers from terraced boundary shapes\, especially in low-scattering media; in contrast\, MMC offers improved boundary fidelity but can be slow and memory-intensive\, particularly at high mesh density. Furthermore\, emerging wide-field diffuse optical imaging systems using structured light require more efficient modeling to handle numerous illumination patterns. Additionally\, niche applications such as polarized light imaging could also benefit from many of the recent advances in modern MC simulations\, such as GPU acceleration and handling of complex heterogeneous media. \nThis proposal aims to push the frontiers of modern MC simulation algorithms to fundamentally enhance their utility in diverse applications. 
To reduce the staircase effect in VMC\, we have developed a hybrid MC algorithm\, named split-voxel MC (SVMC)\, where sub-voxel oblique surfaces are extracted using a marching-cubes algorithm and incorporated into a memory-efficient voxelated data structure. SVMC allows VMC to handle curved surfaces while remaining computationally efficient. A GPU-accelerated marching-cubes algorithm was also developed to further accelerate SVMC domain preprocessing. On the other hand\, to further improve MMC computational efficiency\, a dual-grid MMC (DMMC) algorithm was developed to perform fast ray-tracing inside a coarse tetrahedral mesh while saving fluence data over a dense voxelated grid\, simultaneously achieving improved speed and output accuracy. To accommodate the increasing need to model wide-field pattern-based sources\, we have developed a “photon sharing” MC algorithm that performs simulations of all illumination and detection patterns in parallel\, improving computational speed by an order of magnitude. Additionally\, we have developed a GPU-accelerated massively parallel algorithm capable of modeling Mie scattering from spherical particles in three-dimensional media for polarized light imaging\, achieving nearly 1000× acceleration compared to a sequential implementation. \nLastly\, we have also investigated a hardware-accelerated MMC algorithm using the NVIDIA OptiX ray-tracing framework\, leveraging modern GPU ray-tracing (RT) cores extensively optimized for graphics rendering. Preliminary results demonstrate comparable accuracy and significantly improved simulation speed compared to conventional tetrahedral MMC. \n 
URL:https://ece.northeastern.edu/event/shijie-yan-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240729T090000
DTEND;TZID=America/New_York:20240729T103000
DTSTAMP:20260414T115913Z
CREATED:20240820T222120Z
LAST-MODIFIED:20240820T222120Z
UID:7205-1722243600-1722249000@ece.northeastern.edu
SUMMARY:Ruyi Ding PhD Proposal Review
DESCRIPTION:Name:\nRuyi Ding \nTitle:\nTowards Robust and Secure Deep Learning: From Training through Deployment to Inference \nDate:\n7/29/2024 \nTime:\n9:00:00 AM \nCommittee Members:\nProf. Yunsi Fei (Advisor)\nProf. Aidong Ding\nProf. Lili Su \nAbstract:\nIn recent years\, deep learning has experienced rapid advancement\, leading to the development of numerous commercial deep neural network (DNN) models across diverse fields such as autonomous driving\, healthcare\, and recommendation systems. However\, this wide adoption has intensified concerns about AI security throughout a neural network’s lifecycle\, from training through deployment to inference. Various vulnerabilities have emerged\, threatening confidentiality\, privacy\, and intellectual property (IP) rights: poisoned training datasets facilitate privacy leakage and backdoor injection; after deployment\, models may be misused through unauthorized transfer learning\, a new form of IP infringement\, and weights and parameters are subject to side-channel-assisted model extraction attacks; during inference\, adversarial attacks may compromise DNN functionality\, causing misclassifications.\nThis dissertation addresses new security challenges across the neural network lifecycle through several novel contributions. We identify a new poisoning vulnerability in graph neural networks\, where injecting poisoned nodes exacerbates link privacy leakage\, allowing attackers to steal adjacency information from private training data and highlighting the necessity of robust AI training. To prevent model misuse after deployment\, we introduce EncoderLock and Non-transferable Pruning\, employing innovative training schemes and pruning methods to restrict the malicious use of pre-trained models through transfer learning\, effectively implementing applicability authorization. Towards secure deep learning implementations\, we adopt a software-hardware co-design approach to address DNN vulnerabilities. 
Specifically\, we leverage the electromagnetic emanations from DNN accelerators in a new approach called EMShepherd\, which detects adversarial examples (AEs) on edge devices in a ‘black-box’ manner. To protect deployed DNNs against side-channel-based weight-stealing attacks\, we develop PixelMask\, which leverages the characteristics of DNNs for side-channel defense by masking out unimportant inputs and dropping related operations to obfuscate side-channel signals. Lastly\, we explore the use of Trusted Execution Environments (TEEs) to safeguard model weights and data privacy against model stealing and membership inference attacks.\nThis proposal identifies key challenges of robust and secure deep learning\, tackles vulnerabilities at various stages of the AI lifecycle\, and provides comprehensive protection mechanisms\, from securing the training process to safeguarding deployed models\, paving the way for more resilient and reliable AI technologies in real-world applications.
URL:https://ece.northeastern.edu/event/ruyi-ding-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240725T140000
DTEND;TZID=America/New_York:20240725T153000
DTSTAMP:20260414T115913Z
CREATED:20240820T222301Z
LAST-MODIFIED:20240820T222301Z
UID:7209-1721916000-1721921400@ece.northeastern.edu
SUMMARY:Rui Luo PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nRui Luo \nTitle:\nShared Assistance Methods for Human-in-the-loop Robot Systems \nDate:\n7/25/2024 \nTime:\n2:00:00 PM \nLocation:\nEXP 701A \nCommittee Members:\nProf. Taskin Padir (Advisor)\nProf. John Peter Whitney\nProf. Yanzhi Wang\nDr. Mark Zolotas \nAbstract:\nFully autonomous robot systems\, though highly desired\, face substantial theoretical and practical challenges when deployed in dynamic environments where humans co-exist. To tackle this challenge\, this thesis investigates the concept of human-in-the-loop (HITL) systems\, which incorporate human input to enhance robot functionality. HITL systems offer a pragmatic alternative\, combining human versatility with robotic precision. \nThis research aims to address critical questions in one specific class of HITL system that prioritizes the dominant role of the human within the system\, positioning the robot primarily in an assistive capacity that adheres to human commands to facilitate the achievement of a shared goal. It explores two primary paradigms of shared assistance methods\, Shared Control (SC) and Shared Autonomy (SA)\, and discusses the system designs as well as specific algorithms to implement the three critical components of a HITL system: human intention estimation\, modulation of human inputs and robot autonomy\, and the human-robot communication channel. \nDue to the variety of use cases and their specific challenges\, four distinct HITL systems are developed and analyzed to exemplify how shared assistance methods can be incorporated to assist human operators: an assistive wheelchair for indoor navigation\, a human-centered robot system design for industrial tasks\, a mobile bi-manual robot for tele-manipulation\, and a VR-based customizable shared control system for fine teleoperation. Although each system represents a comprehensive robotic solution\, the research contributions of each work vary. 
\nIn the assistive wheelchair navigation system\, the focus was on human intent estimation via a low-throughput interface utilizing a recursive Bayesian filter\, with significant efforts dedicated to developing a real-time user interface serving as the communication channel. In the human-robot collaboration system for an industrial setting\, the emphasis was on human state estimation through camera-based posture tracking and exploring the interplay between robot behavior and human ergonomics. For the two teleoperation systems\, the primary focus was on the real-time modulation of human inputs and robot autonomy to aid in achieving dexterous manipulation tasks. A novel VR-based user interface was developed to enable users to customize the level of robotic autonomous assistance. Each system was validated through a pilot study involving 10-20 human subjects\, accompanied by extensive data analysis to provide insights into designing HITL systems for various applications. \nIn conclusion\, this thesis contributes to a deeper understanding of HITL systems\, highlighting their potential to enhance human productivity\, ergonomics\, and quality of life in various applications through concrete examples. The integration of human intent estimation and real-time shared control methods into robotic systems demonstrates the feasibility and benefits of HITL approaches. Our extensive experimental analysis underscores the critical role of human feedback in designing practical HITL systems that can be deployed in real-world scenarios.
URL:https://ece.northeastern.edu/event/rui-lou-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240723T210000
DTEND;TZID=America/New_York:20240723T220000
DTSTAMP:20260414T115913
CREATED:20240820T222749Z
LAST-MODIFIED:20240820T222749Z
UID:7217-1721768400-1721772000@ece.northeastern.edu
SUMMARY:Zhenglun Kong PhD Dissertation Defense
DESCRIPTION:Name:\nZhenglun Kong \nTitle:\nTowards Efficient Deep Learning for Vision and Language Applications \nDate:\n7/23/2024 \nTime:\n9:00:00 PM \nCommittee Members:\nProf. Yanzhi Wang (Advisor)\nProf. David Kaeli\nProf. Dakuo Wang\nProf. Weiyan Shi \nAbstract:\nMachine learning and AI have been advancing rapidly in recent years\, leading to numerous applications across diverse fields such as autonomous vehicles\, entertainment\, science\, healthcare\, and assistive technologies—significantly enhancing daily life. However\, this advancement has been accompanied by a significant increase in the size of deep neural network (DNN) models\, which poses considerable economic challenges. The substantial costs associated with the training\, inference\, and deployment of large vision and language models require extensive computational resources and time\, proving especially taxing for smaller entities and individuals. This also complicates deployment on resource-constrained devices and in areas with limited infrastructure. \nA major challenge is deploying AI models on devices with limited capacity\, such as wearables\, sensors\, and mobile phones. These edge devices\, often operating offline and requiring real-time processing\, are critical for many applications but struggle to support large models. My dissertation research addresses these pressing issues with the aim of enabling the practical implementation of AI. We ensure the effectiveness of AI models while adapting them for use in constrained environments by tackling fundamental AI challenges from four angles: \n1. Managing Massive Computation: We introduce a novel token pruning framework that reduces the latency of Vision Transformers (ViT) by up to 41% compared to existing works on mobile devices. Additionally\, we propose a quantization framework for large language models (LLMs)\, achieving an on-device speedup of up to 2.55x compared to FP16 counterparts across multiple edge devices. \n2. 
Mitigating Training Costs: We develop fast\, accurate\, and memory-efficient training methods by utilizing a hierarchical data redundancy reduction scheme\, which achieves up to a 40% speedup in ViT pre-training with minimal accuracy loss. \n3. Merging Multiple Models: We propose an efficient way to merge multiple LLMs\, yielding a more advanced and robust LLM while maintaining the model size\, as well as reducing knowledge interference. \n4. Co-designing Speed-aware Deep Neural Networks: We consider memory access cost\, the degree of parallelism\, and practical latency in the design of 2D and 3D object detection models for practical deployment. By addressing these areas\, my research aims to enable the effective and efficient use of AI models in constrained environments\, ensuring their practical implementation across various applications. \n 
URL:https://ece.northeastern.edu/event/zhenglun-kong-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240723T113000
DTEND;TZID=America/New_York:20240723T123000
DTSTAMP:20260414T115913
CREATED:20240820T222406Z
LAST-MODIFIED:20240820T222406Z
UID:7211-1721734200-1721737800@ece.northeastern.edu
SUMMARY:Andrea Lacava PhD Proposal Review on 7/23
DESCRIPTION:Name:\nAndrea Lacava \nTitle:\nEnabling Intelligent nextG Cellular Networks through the Open RAN Architecture \nDate:\n7/23/2024 \nTime:\n11:30:00 AM \nLocation:\nEXP 501 \nCommittee Members:\nProf. Tommaso Melodia (Advisor)\nProf. Francesca Cuomo (Advisor)\nProf. Stefano Basagni\nProf. Ioannis Chatzigiannakis \nAbstract:\nThe 5th generation (5G) and beyond of cellular networks will support heterogeneous use cases at an unprecedented scale\, thus demanding automated control and optimization of network functionalities\, customized to the needs of individual users. However\, achieving such fine-grained control over the Radio Access Network (RAN) is unfeasible with the current cellular architecture. \nTo bridge this gap\, the Open RAN paradigm and its specification introduce an “open” architecture with abstractions that facilitate closed-loop control and enable data-driven\, intelligent optimization of the RAN at the user level. This thesis focuses on the design and development of system-level solutions to enable intelligent control in the next generation of cellular networks through the Open RAN architecture. The main research areas explored in this thesis include (i) the design and evaluation of platforms for the creation\, dataset generation\, and testing of Open RAN architecture solutions; (ii) the development of Artificial Intelligence (AI)/Machine Learning (ML) models for various deployments and networking scenarios; and (iii) innovative methodologies for agile spectrum\, infrastructure\, and AI management within Open RAN. Among the significant contributions of this thesis are ns-O-RAN\, the first open-source simulation platform that integrates a functional 5G protocol stack in Network Simulator 3 (ns-3) with an O-RAN-compliant E2 interface\, and the pioneering architectural design and implementation of the dApps\, the real-time controllers for the O-RAN architecture. 
Furthermore\, the solutions proposed in this thesis are leveraged to investigate various network optimization use cases deemed critical in cellular networks. The results demonstrate that our approach outperforms traditional Radio Resource Management (RRM) heuristics\, enhancing overall RAN conditions at scale in both simulations and state-of-the-art experimental testbeds. \n 
URL:https://ece.northeastern.edu/event/andrea-lacava-phd-proposal-review-on-7-23/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240722T130000
DTEND;TZID=America/New_York:20240722T143000
DTSTAMP:20260414T115913
CREATED:20240820T222500Z
LAST-MODIFIED:20240820T222500Z
UID:7213-1721653200-1721658600@ece.northeastern.edu
SUMMARY:Miead Tehrani Moayyed PhD Dissertation Defense
DESCRIPTION:Name:\nMiead Tehrani Moayyed \nTitle:\nRF Channel Models for Static and Mobile Scenarios: From Simulations to Models for Large-scale Emulations and Digital Twins \nDate:\n7/22/2024 \nTime:\n1:00:00 PM \nLocation:\nRoom: EXP-601A \nCommittee Members:\nProf. Stefano Basagni (Advisor)\nProf. Tommaso Melodia\nProf. Milica Stojanovic \nAbstract:\nThe extremely high data rates provided by communications at higher frequency bands\, such as mmWave\, can address the unprecedented demands of next-generation wireless networks. However\, several impairments limit wireless coverage at higher frequencies\, necessitating accurate models of wireless scenarios and large-scale testing to validate and realize the potential of these new technologies. Large-scale accurate simulations and wireless network emulators now offer a time- and cost-effective solution for performing these tests in a lab before field deployment. This dissertation focuses on modeling\, calibration\, and validation of realistic RF scenarios for wireless network emulation at scale. The contributions of this work include: (i) investigating the characteristics of the wireless channel at higher frequencies (mmWave) and evaluating the performance of mmWave communications on top of the NR standard for 5G cellular networks; (ii) developing a streamlined framework to create realistic RF scenarios with mobility support for Finite Impulse Response (FIR)-based emulators like Colosseum\, starting from rich inputs such as precise ray tracing methods or real-field measurements; and (iii) creating an accurate AI-assisted propagation model that integrates joint measurements and simulations\, achieving the desired accuracy and reasonable computational requirements for real-time Digital Twin (DT) wireless networks. In particular: \n(i) We derive channel propagation models via ray tracing simulations for mmWave transmissions with applications to V2X communications. 
We analyze aspects related to blockage modeling\, the effects of antenna beamwidth\, beam alignment\, and multipath fading in urban scenarios\, emphasizing the importance of capturing diffuse scattered rays for improved large-scale and small-scale radio channel propagation models. Furthermore\, we compare the performance of mmWave 5G NR with the 4G Long-Term Evolution (LTE) standard in a realistic environment and demonstrate the impact of MIMO technology on improving the performance of 5G NR cellular networks. As transmitted radio signals are received as clusters of multipath rays\, identifying these clusters provides better spatial and temporal characteristics of the channel. We address the clustering process and its validation across a wide range of frequencies in the mmWave spectrum below 100 GHz. We analyze how the clustering solution changes with narrower-beam antennas and provide a comparison of the cluster characteristics for different types of antennas. \n(ii) Our framework for modeling wireless scenarios for large-scale emulators optimally scales down the large set of channel inputs to the fewer parameters allowed by the emulator using efficient clustering techniques and Channel-Impulse Response (CIR) re-sampling. We demonstrate the effectiveness of the proposed framework by modeling realistic scenarios for Colosseum\, starting with rich input from commercial-grade ray tracing software\, Wireless InSite (WI) by Remcom. To support mobility\, we implement a mobile channel simulator on top of the WI ray-tracer\, consisting of two steps: (a) spatially sampling the mobile channels using the ray-tracer\, and (b) parsing the ray tracing outputs to extract the channels for each time instant of emulation. We also develop a Software-Defined Radio (SDR)-based channel sounder to precisely characterize emulated RF channels. The sounder framework is fully containerized\, scalable\, and automated to capture the gains and delays of the CIR taps. 
\n(iii) We extend these efforts to develop the first Digital Twins for Mobile Networks (DTMN) on Colosseum\, using the RF testbed Arena as a use case. This use case demonstrates the scope and capabilities of Colosseum as a DT\, providing the research community with a set of tools to replicate real-world environments. We compare key network performance metrics\, namely throughput and SINR\, of the Arena/Colosseum DTMN to validate the fidelity of our twinning process. Furthermore\, we present an AI-assisted propagation model to generate realistic\, real-time\, and scalable scenarios for DTMNs. This model seamlessly integrates measurements with ray tracing\, providing a high-resolution\, realistic channel model. We study the computational complexity and configuration trade-offs associated with ray tracing for high-fidelity prediction\, generating a large dataset to train this enhanced AI model. Our proof of concept highlights the accuracy and generalization capabilities of our AI model across previously unseen transmitter (TX) locations and unfamiliar environments\, outperforming state-of-the-art approaches and achieving significant improvements in accuracy. We analyze the computational complexity of our AI model\, comparing it to high-fidelity ray tracing. Profiling reveals a three-order-of-magnitude acceleration\, enabling real-time propagation prediction with reasonable accuracy. We explore key ray tracing parameters contributing to the discrepancy between measurements and simulations and demonstrate the integration of measurements into channel prediction\, thereby calibrating the model.
URL:https://ece.northeastern.edu/event/miead-tehrani-moayyed-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T113000
DTEND;TZID=America/New_York:20240718T123000
DTSTAMP:20260414T115913
CREATED:20240820T222954Z
LAST-MODIFIED:20240820T222954Z
UID:7219-1721302200-1721305800@ece.northeastern.edu
SUMMARY:Mehrshad Zandigohar PhD Proposal Review
DESCRIPTION:Announcing:\nPhD Proposal Review \nName:\nMehrshad Zandigohar \nTitle:\nDeployable and Multimodal Human Grasp Intent Inference in Prosthetic Hand Control \nDate:\n7/18/2024 \nTime:\n11:30:00 AM \nLocation: https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2QyNzc1MWMtOWJmMi00NGNmLThlNzctN2JlNjU2Y2I1MmI1%40thread.v2/0?context=%7b%22Tid%22%3a%22a8eec281-aaa3-4dae-ac9b-9a398b9215e7%22%2c%22Oid%22%3a%22de13c261-ac42-49d7-8950-6dec3adaca4e%22%7d\nISEC 532 – \nCommittee Members:\nProf. Gunar Schirner (Advisor)\nProf. Deniz Erdogmus\nProf. Mallesham Dasari\nProf. Mariusz P. Furmanek \nAbstract:\nFor transradial amputees\, robotic prosthetic hands promise to regain the capability to perform daily living activities. Among robotic control methods for prosthetic hand actuators\, coarse-grained grasp types are a common means of effortless yet effective control. However\, to advance next-generation prosthetic hand control design\, it is crucial to address current shortcomings in robustness to out-of-lab artifacts\, generalizability to new environments\, and deployment of such compute-intensive grasp estimators. \nFirst and foremost\, current control methods based on physiological modalities such as electromyography (EMG) are prone to yielding poor inference outcomes due to motion artifacts\, muscle fatigue\, and more. Similarly\, methods based on the visual modality are susceptible to their own artifacts\, most often due to object occlusion\, lighting changes\, etc. To address such drawbacks of single-modality approaches\, we present a multimodal evidence fusion framework for grasp intent inference using eye-view video\, eye-gaze\, and EMG from the forearm processed by neural network models. Given the lack of a synchronized multimodal dataset for evaluating multimodal grasp estimation\, we propose our own customized HANDSv2 dataset with the most complete EMG profile and visual data synchronized in time. 
Our experimental results indicate that fusing both modalities\, on average\, improves the instantaneous upcoming grasp type classification accuracy while in the reaching phase by 13.66% and 14.8%\, relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually\, resulting in an overall fusion accuracy of 95.3%. \nAlthough visual grasp classification has shown promising results\, the generalizability to unseen object classes remains a significant challenge within the research community. This limitation arises from the fixed number of grasp types available in existing models\, contrasted with the virtually infinite variety of objects encountered in the real world. The poor performance of grasp detection models on unseen objects negatively affects users’ independence and quality of life. To address this\, we propose the Grasp Vision Language Model (Grasp-VLM). Grasp-VLM takes advantage of the zero-shot capability of large vision language models and teaches them to perform human-like reasoning to infer a suitable grasp type based on the object’s physical characteristics\, even for previously unseen objects\, resulting in better generalizability in real-life scenarios. Our initial results show that Grasp-VLM achieves a significant 49% accuracy on unseen object types\, compared to the 15.3% accuracy of the current state of the art. \nLastly\, given the computational intensity of such models\, which often contain billions of parameters\, deploying them to edge devices poses a serious challenge. To mitigate this\, we investigate the Hybrid Grasp Network (HGN)\, a deployment infrastructure that combines an edge-specialized model for low-latency operations with a cloud-based universal model ensuring high generalization\, effectively balancing performance and resource constraints. \nThe holistic approach presented in this dissertation tackles four essential areas of robotic prosthetic hand control design. 
HANDSv2 provides a customized dataset\, filling the gap for a synchronized multimodal dataset. Our multimodal fusion approach effectively outperforms single-modality approaches\, providing accurate and robust grasp type estimations during the entire grasping timeline. In addition\, Grasp-VLM addresses the lack of generalizability to new object types\, providing a more realistic grasp estimation. Lastly\, our HGN design aims to provide a real-time solution that addresses both speed and accuracy objectives.
URL:https://ece.northeastern.edu/event/mehrshad-zandigohar-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T110000
DTEND;TZID=America/New_York:20240718T120000
DTSTAMP:20260414T115913
CREATED:20240820T222625Z
LAST-MODIFIED:20240820T222625Z
UID:7215-1721300400-1721304000@ece.northeastern.edu
SUMMARY:Jagatpreet Nir PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nJagatpreet Nir \nTitle:\nLow Contrast Visual Sensing and Inertial-Aided Navigation in GPS-Denied Environments \nDate:\n7/18/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Hanumant Singh (Advisor)\nProf. Martin Ludvigsen\nProf. Michael Everett\nProf. Pau Closas \nAbstract:\nField robots perform complex tasks\, necessitating high autonomy and reliable navigation capabilities. Integrating complementary sensors at the hardware level is crucial to maintaining navigation estimates even during sensor failure. This work is motivated by the need for robust and accurate navigation systems for robotic field applications\, particularly in diverse and challenging environments. The development of such systems involves balancing design requirements with constraints such as size\, weight\, power\, computational capacity\, and cost. Underwater navigation exemplifies navigation in Visually Degraded Environments (VDEs)\, where Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) navigate in challenging conditions. This thesis focuses on exploring methods to enhance the robustness of visual-inertial odometry systems in VDEs. \nThe current state-of-the-art Visual Inertial Odometry (VIO) techniques provide high-accuracy navigation estimates in texture-rich scenes. However\, robots operating in harsh and unpredictable environments\, such as underwater\, often encounter VDEs due to low texture\, uneven illumination\, or backscatter. During prolonged visual degradation\, Inertial Measurement Units (IMUs) become the primary sensors as visual measurements are unreliable. 
In this research\, we address the problem of designing an underwater VIO navigation system and algorithmic pipelines to ensure reliable navigation estimates during several seconds of visual degradation\, emphasizing the importance of selecting better Micro Electro Mechanical Systems (MEMS) IMUs for dependable performance within a cost budget. \nA robust VIO system designed for underwater settings is introduced. Our contributions include a general system design approach for underwater VIO and an algorithmic formulation for fusing deep learning-based Visual Odometry (VO) with IMU data. The underwater datasets depict visual degradation in real-world settings with a time-synchronized 8-bit grayscale camera and IMU. Our hybrid VIO pipeline integrates IMU measurements with VO estimates from a deep-learning VO engine\, combining deep learning with classical sensor fusion techniques to achieve accurate metric and gravity-aligned trajectory estimates even in visually degraded conditions. The proposed system outperforms traditional VIO methods\, demonstrating robustness with consistent trajectory estimates and minimal drift during complete visual outages. The extensible design allows for the incorporation of new sensors\, addressing various underwater navigation challenges. \nTo conclude\, this thesis focuses on environments where exteroceptive sensing\, like cameras\, is compromised for extended periods\, relying on proprioceptive sensors such as IMUs to navigate. The aim is to quantify navigation accuracy in harsh environments and improve system design at both hardware and software levels. Specifically\, underwater visual-inertial navigation for small vehicles is used to demonstrate the principles and algorithms developed. The outlined methodology showcases sensor selection\, sensor-fusion algorithms\, and individual improvements to build enhanced visual-inertial systems for VDEs and the applicability of the proposed approach from controlled settings to field tests. \n 
URL:https://ece.northeastern.edu/event/jagatpreet-nir-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240430
DTEND;VALUE=DATE:20240503
DTSTAMP:20260414T115913
CREATED:20240309T004325Z
LAST-MODIFIED:20240309T004325Z
UID:6821-1714435200-1714694399@ece.northeastern.edu
SUMMARY:2024 Archimedes Health Care Security Week
DESCRIPTION:April 30-May 2 New Orleans\, LA \nThe Archimedes Center for Healthcare and Device Security at Northeastern was founded by Professor Kevin Fu and is an industry group focused on ensuring the safety and cybersecurity of medical devices and healthcare.  Archimedes is hosting an exciting event in New Orleans bringing together the world’s best experts and thought leaders on healthcare and device security.  Students are encouraged to learn more about Archimedes and are welcome to attend the event!
URL:https://ece.northeastern.edu/event/2024-archimedes-health-care-security-week/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240425T093000
DTEND;TZID=America/New_York:20240425T123000
DTSTAMP:20260414T115913
CREATED:20240403T221910Z
LAST-MODIFIED:20240403T221934Z
UID:6857-1714037400-1714048200@ece.northeastern.edu
SUMMARY:Shweta Singh PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nShweta Singh \nTitle:\nA Qualitative Approach for Learning and Detection of Emergent Behaviors \nDate:\n4/25/2024 \nTime:\n9:30:00 AM \nCommittee Members:\nProf. Mieczyslaw M. Kokar (Advisor)\nProf. Taskin Padir\nDr. Paul Kogut \nAbstract:\nEmergence has been studied in various fields\, including engineering\, computer science\, and economics. However\, there is no agreed-upon definition of what emergence means. As a result\, it remains a debatable topic among researchers who want to understand and use emergence to engineer their systems. In this thesis\, we are targeting two issues related to the domain of emergence.\nFirst\, we introduce a framework that enables researchers to encode key aspects of emergence theories into an ontology using the Emergence Metaontology. This metaontology provides the basic vocabulary specialized to the domain of emergence. Researchers will be able to add their theories to those that are already encoded\, and then use queries to examine and compare these theories. OWL reasoners can infer new facts and possibly identify inconsistencies between conflicting theories. This will allow researchers to gain a greater understanding of the existing emergence theories. To the best of our knowledge\, this research is the first attempt in the emergence domain to use a query-supported ontological approach to encode and compare multiple theories of emergence.\nThe second contribution is algorithms for detecting the onset of emergence at the beginning of a possibly irreversible emergent behavior. The approach to accelerate the detection of emergence is based on transforming the values of the variables of a system into a different space and then running detection in that space. The first transformation relies on the property of self-similarity – when the values of system variables are in a specific relation. Such relations formalize hypersurfaces in the quantitative system space. 
The second kind of transformation utilizes qualitative abstraction of the general dynamical system using the $Q^2$ (Quantitative-Qualitative) approach. The quantitative variables of a general dynamical system are mapped to qualitative variables (hypersurfaces)\, leading to the representation of the monitored dynamical system as a Qualitative Dynamical system (QDS).  Detection of emergence is then implemented as a process of analyzing the behavior of the QDS. The efficiency of the approach has been validated on multiple simulations of dynamical systems.
URL:https://ece.northeastern.edu/event/shweta-singh-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240419T140000
DTEND;TZID=America/New_York:20240419T170000
DTSTAMP:20260414T115913
CREATED:20240403T221541Z
LAST-MODIFIED:20240403T221541Z
UID:6853-1713535200-1713546000@ece.northeastern.edu
SUMMARY:Ruopeng Jia MS Thesis Defense
DESCRIPTION:Announcing:\nMS Thesis Defense \nName:\nRuopeng Jia \nTitle:\nEngineering Super-modes of Coupled Ring Resonator Arrays \nDate:\n4/19/2024 \nTime:\n2:00:00 PM \nCommittee Members:\n1) Sunil Mittal (Advisor)\n2) Prof. Yongmin Liu\n3) Prof. Ghosh Siddhartha \nAbstract:\nOver the past year of learning and research\, we have developed the ability to analyze and modulate various resonant ring structures using Hamiltonian operators. Using genetic optimization algorithms\, we have achieved a precisely controllable four-wave mixing process under simulated conditions.
URL:https://ece.northeastern.edu/event/ruopeng-jia-ms-thesis-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240416T150000
DTEND;TZID=America/New_York:20240416T163000
DTSTAMP:20260414T115913
CREATED:20240411T005310Z
LAST-MODIFIED:20240411T010425Z
UID:6885-1713279600-1713285000@ece.northeastern.edu
SUMMARY:Zihan Wei MS Thesis Defense
DESCRIPTION:Name:\nZihan Wei \nTitle:\nSpatial Correlation Based Broadband Acoustic Beamforming \nDate:\n4/16/2024 \nTime:\n3:00:00 PM \nLocation:\nEXP-601A \nCommittee Members:\n1. Prof. Milica Stojanovic (Advisor)\n2. Prof. Stefano Basagni\n3. Prof. Josep Jornet \nAbstract:\nThis thesis presents a spatial correlation based broadband acoustic beamforming approach\, addressing significant challenges pertaining to acoustic communication channels\, such as time-varying multipath propagation and volatile phase fluctuations due to surface reflections. The proposed beamforming approach utilizes the synchronization preamble\, a high-resolution pseudo-random sequence\, to estimate the spatial correlation matrices for each frequency bin and decompose these spatial correlation matrices using singular value decomposition. The singular vectors are then applied to each carrier of the orthogonal frequency division multiplexing signals as beamforming weights. The proposed algorithm is demonstrated using simulations and an in-air acoustic communications testbed. Performance metrics such as the mean squared error and bit error rate are presented\, demonstrating excellent performance improvement over the angle-based beamforming approach.
URL:https://ece.northeastern.edu/event/zihan-wei-ms-thesis-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240416T140000
DTEND;TZID=America/New_York:20240416T153000
DTSTAMP:20260414T115913
CREATED:20240411T005747Z
LAST-MODIFIED:20240411T005814Z
UID:6891-1713276000-1713281400@ece.northeastern.edu
SUMMARY:Nolan Pearce MS Thesis Defense
DESCRIPTION:Name:\nNolan Pearce \nTitle:\nDownlink Transmit Beamforming: Single-Carrier Acoustic Communication in a Noisy Environment \nDate:\n4/16/2024 \nTime:\n2:00:00 PM \nLocation:\nEXP-601A: \nCommittee Members:\n1. Prof. Milica Stojanovic (Advisor)\n2. Prof. Stefano Basagni\n3. Prof. Josep Jornet\n4. Dr. Dimitrios Koutsonikolas \nAbstract:\nNoisy wireless acoustic channels produce intersymbol interference (ISI) from multipath propagation. This interference may be reduced by equalization techniques\, but these require computationally intensive receiver algorithms. Typically\, beamforming methods are implemented at the uplink receiver to reduce the complexity of equalization methods. However\, these methods require a receiver array. Using angle estimation of a transmitter\, beamforming techniques can be applied in downlink signal transmission to reduce equalizer complexity. This hypothesis is supported through simulation and applied to an open-air acoustic channel for significant performance improvement. Improving downlink signals through beamforming enables less complex user design suitable for single-carrier communications.
URL:https://ece.northeastern.edu/event/nolan-pearce-ms-thesis-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240415T133000
DTEND;TZID=America/New_York:20240415T143000
DTSTAMP:20260414T115913
CREATED:20240411T010005Z
LAST-MODIFIED:20240411T010005Z
UID:6893-1713187800-1713191400@ece.northeastern.edu
SUMMARY:Lin Deng PhD Dissertation Defense
DESCRIPTION:Name:\nLin Deng \nTitle:\nFunction Capacity Expansion of Nano-Optics via Multiplexed Metasurfaces \nDate:\n4/15/2024 \nTime:\n1:30:00 PM \nLocation:\nSL 011 \nCommittee Members:\nProf. Yongmin Liu (Advisor)\nProf. Hossein Mosallaei\nProf. Sunil Mittal \nAbstract:\nThroughout history\, the exploration of light has been fundamental to our understanding of the world and has driven advancements in technology and communication. Metasurfaces\, composed of rationally designed nanostructures\, offer a revolutionary means to control light in a prescribed manner. Metasurfaces can operate both in conventional free space and in the emerging integrated photonics domain. Maximizing functionality and degrees of freedom (DOFs) in both arenas is paramount. My thesis aims to push the limit of metasurface capabilities by leveraging multiplexing strategies across input/output parameters such as polarization\, incidence angle\, and waveguide mode. I will present three novel metasurfaces as follows. \n(1) We aim to expand nano-printing multiplexing capacity using the Polarization-Encoded Lenticular Nano-Printing (Pollen) method. When employing three input/output polarization pairs and varying detection angles\, a single metasurface device enables the observation of up to 49 high-resolution nano-printing images. \n(2) By integrating metasurfaces with waveguides\, we can couple guided modes to free space while controlling wavefront and polarization. Our research exploits the multiplexed on-chip metasurface\, which can generate multiple functions depending on the polarization states and waveguide mode propagation directions. \n(3) We investigated mode division multiplexing (MDM) for high-volume optical transmission\, enabling multiple waveguide modes to coexist without interference. By manipulating the orientations of individual nanoantennas\, we have achieved on-demand mode conversion and focusing effects\, demonstrating promising results in various scenarios. 
\nIn conclusion\, my research seeks to push the boundaries of metasurface functionalities through innovative multiplexing approaches. The research findings allow us to unlock new possibilities in optical display\, communication\, manipulation\, and beyond by integrating multiple functionalities into single free-space and on-chip metasurfaces.
URL:https://ece.northeastern.edu/event/lin-deng-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240412T123000
DTEND;TZID=America/New_York:20240412T140000
DTSTAMP:20260414T115913
CREATED:20240403T221805Z
LAST-MODIFIED:20240403T221805Z
UID:6855-1712925000-1712930400@ece.northeastern.edu
SUMMARY:Peng Wu PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nPeng Wu \nTitle:\nBayesian Data Fusion for Distributed Learning \nDate:\n4/12/2024 \nTime:\n12:30:00 PM \nLocation:\nISEC 532 \nCommittee Members:\nProf. Pau Closas (Advisor)\nProf. Deniz Erdogmus\nProf. Lili Su \nAbstract:\nThe necessity for distributed data fusion arises from the increasing demand to integrate diverse and voluminous data sources\, especially in applications where large numbers of users are collaborating to perform inference and learning tasks. This integration is crucial when data is available in a distributed manner or originates from various sensor types\, aiming to deduce specific quantities of interest accurately. Moreover\, the importance of privacy cannot be overstated\, particularly in scenarios where sensitive information\, such as location data\, is involved. Federated learning emerges as a pivotal solution in this context\, enabling model training on local datasets without the need to exchange the data itself\, thus preserving user privacy. However\, the deployment of these technologies encounters significant challenges\, including the multiple counting problem in data fusion\, where data may be redundantly used across different estimations without user awareness\, and the non-IID problem in federated learning\, where the non-identically distributed nature of data across clients can severely hamper the model’s performance. \nTo address these challenges\, this dissertation explores the intersection of data fusion\, federated learning\, and Bayesian methods\, with a focus on applied problems in indoor localization\, satellite-based navigation\, and image processing that spans both theoretical analysis and practical application. In the realm of data fusion\, we delve into the Bayesian framework to offer a solution that not only facilitates the optimal integration of sensor data with prior knowledge but also navigates the intricacies of feature fusion effectively. 
This approach mitigates the multiple counting issue by ensuring that the fusion of local estimates accounts for the overuse of prior knowledge. In tackling the problems inherent to federated learning\, particularly the non-IID issue\, we introduce novel frameworks and algorithms designed to enhance model training and performance in a privacy-preserving manner. We explore personalized and clustered federated learning as methods to customize the learning process to individual client characteristics and to group clients with similar data traits\, respectively. A number of practical problems are explored using these federated methodologies\, including indoor fingerprinting\, jamming interference classification\, and image classification tasks. Notably\, this thesis proposes a novel Bayesian clustered federated learning framework that generalizes existing clustered federated learning schemes by leveraging Bayesian data association modeling. By implementing a Bayesian perspective within these frameworks\, the dissertation proposes practical algorithms that achieve a balance between performance and computational efficiency\, ultimately advancing the application of distributed data fusion and federated learning in privacy-sensitive fields.
URL:https://ece.northeastern.edu/event/peng-wu-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240412T120000
DTEND;TZID=America/New_York:20240412T130000
DTSTAMP:20260414T115913
CREATED:20240403T222325Z
LAST-MODIFIED:20240403T222325Z
UID:6861-1712923200-1712926800@ece.northeastern.edu
SUMMARY:Baolin Li PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nBaolin Li \nTitle:\nMaking Machine Learning on HPC Systems Cost-Effective and Carbon-Friendly \nDate:\n4/12/2024 \nTime:\n12:00:00 PM
URL:https://ece.northeastern.edu/event/baolin-li-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240408T153000
DTEND;TZID=America/New_York:20240408T170000
DTSTAMP:20260414T115913
CREATED:20240403T222632Z
LAST-MODIFIED:20240403T222632Z
UID:6865-1712590200-1712595600@ece.northeastern.edu
SUMMARY:Jinkun Zhang PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nJinkun Zhang \nTitle:\nLow-latency Forwarding\, Caching and Computation Placement in Data-centric Networks \nDate:\n4/8/2024 \nTime:\n3:30:00 PM \nLocation:\nEXP-459 \nCommittee Members:\nProf. Edmund Yeh (Advisor)\nProf. Stratis Ioannidis\nProf. Kaushik Chowdhury \nAbstract:\nWith the exponential growth of data- and computation-intensive network applications\, such as real-time augmented reality/virtual reality rendering and large-scale language model training\, traditional cloud computing frameworks exhibit inherent limitations. To address these challenges\, dispersed computing has emerged as a promising next-generation networking paradigm. By enabling geographically distributed nodes with heterogeneous computation capabilities to collaborate\, dispersed computing overcomes the bottlenecks of traditional cloud computing and facilitates in-network computation tasks\, including the training of large models. In data-centric networks\, communication and computation revolve around data names instead of host addresses. The deployment of network caches\, by enabling data reuse\, offers substantial benefits for data-centric networks. For instance\, consider a scenario where multiple machine learning applications seek to train different models simultaneously. These applications could (partially) share data samples and/or computational results. Optimal caching of data and/or results can significantly reduce the overall training cost\, compared to each application independently gathering and transmitting data. \nThis dissertation aims to minimize average user delay in a general cache-enabled computing network. We introduce a low-latency framework that jointly optimizes packet forwarding\, storage deployment\, and computation placement. 
The proposed framework effectively supports data-intensive and latency-sensitive computation applications in data-centric computing networks with heterogeneous communication\, storage\, and computation capabilities. To minimize user latency in congestible networks\, we model delays caused by link transmissions and CPU computations using traffic-dependent nonlinear functions. We consider a series of related network resource allocation problems in a unified network model. \nWe first investigate the joint forwarding and computation placement problem\, then the joint forwarding and elastic caching problem. Despite the non-convexity of the former subproblem\, we provide a set of sufficient optimality conditions that lead to a distributed algorithm with polynomial-time convergence to the global optimum. For the latter subproblem\, we demonstrate its NP-hardness and non-submodularity\, even after continuous relaxation. We propose a set of conditions that provide a finite bound from the optimum. To the best of our knowledge\, our method represents the first analytical progress in addressing the joint caching and forwarding problem with arbitrary topology and non-linear costs. Upon solving the above two subproblems\, we formally propose the low-latency joint forwarding\, caching\, and computation placement framework. We formulate the mixed-integer NP-hard total cost minimization problem jointly over forwarding\, caching\, and computation offloading variables. Building on the established results for both subproblems\, we propose two methods\, each with an analytical guarantee. The first method achieves a 1/2 approximation guarantee by exploiting the “submodular + concave” structure of the problem\, leading to an offline distributed algorithm. In real scenarios\, however\, request patterns and network status are not known a priori and can be time-varying. 
To this end\, our second method leads to an online adaptive algorithm exploiting its “convex + geodesic-convex” nature\, with a proven bounded gap from the optimum. \nThe proposed solutions are followed by a few extension problems. Specifically\, we generalize the computation from “single-step” to “service chain” applications. We also generalize the solution to incorporate congestion control by considering an “extended graph”. Furthermore\, several network resource allocation optimization problems related to data-centric networking are introduced\, expanding the scope of this dissertation. For example\, we investigate joint caching and transmission power allocation in wireless heterogeneous networks\, where the total transmission energy is minimized subject to constraints for SINR lower bounds\, cache capacities\, and total power budget at each node. We also study the optimal multi-commodity pricing with finite menu length\, where novel asymptotic bounds on quantization errors are devised.
URL:https://ece.northeastern.edu/event/jinkun-zhang-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240404T153000
DTEND;TZID=America/New_York:20240404T170000
DTSTAMP:20260414T115913
CREATED:20240319T181622Z
LAST-MODIFIED:20240319T181622Z
UID:6843-1712244600-1712250000@ece.northeastern.edu
SUMMARY:Nicolas Bohm Agostini PhD Proposal Review
DESCRIPTION:Announcing:\nPhD Proposal Review \nName:\nNicolas Bohm Agostini \nTitle:\nHardware/Software Codesign and Compiler Techniques for Efficient Hardware Acceleration of Dense Linear Algebra Kernels and Machine Learning Applications \nDate:\n4/4/2024 \nTime:\n3:30:00 PM \nLocation: Zoom \nCommittee Members:\nProf. David Kaeli (Advisor)\nProf. Gunar Schirner\nProf. José Luis Abellán (University of Murcia)\nAntonino Tumeo (PNNL) \nAbstract:\nToday’s linear algebra and machine learning (ML) applications continue to grow in size and complexity\, placing rapidly increasing demands on the underlying hardware and software systems. To address these issues\, hardware designers have proposed using custom accelerators explicitly designed for accelerating these demanding workloads. What needs to be improved is the ability to perform efficient hardware/software (HW/SW) co-design in order to reap the full benefits from these platforms. This thesis presents an integrated solution to facilitate HW/SW accelerator design. We also address issues in accelerator deployment\, enabling rapid prototyping\, integrated benchmarking\, and comprehensive performance analysis of custom accelerators. \nIn this thesis\, we first demonstrate the value of a lightweight system modeling library integrated into the build/execution environment\, leveraging TensorFlow Lite for deployment. We also explore efficient design space exploration of different classes of accelerators while considering the impact of design parameters. Second\, we employ the Multi-Level Intermediate Representation (MLIR) compiler framework to automatically partition host code from accelerator code\, pre-optimizing the latter for improved high-level synthesis designs and high-quality accelerated kernels. Lastly\, we propose compiler extensions to automate the generation and optimization of communication between the host CPU and AXI-based accelerators. 
\nWe present novel solutions that enable more efficient and effective design space exploration\, optimization\, and deployment of custom accelerators. The utility of these approaches is demonstrated through experiments with specific accelerator designs and key linear algebra and ML workloads. Most importantly\, these solutions empower high-level language users\, such as domain scientists\, to participate in the design of new accelerator features.
URL:https://ece.northeastern.edu/event/nicolas-bohm-agostini-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240404T130000
DTEND;TZID=America/New_York:20240404T140000
DTSTAMP:20260414T115913
CREATED:20240403T222208Z
LAST-MODIFIED:20240403T222208Z
UID:6859-1712235600-1712239200@ece.northeastern.edu
SUMMARY:Anu Jagannath PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nAnu Jagannath \nTitle:\nDeep Learning at the Edge for Future G Networks: RF Signal Intelligence for Comprehensive Spectrum Awareness \nDate:\n4/4/2024 \nTime:\n1:00:00 PM \nCommittee Members:\nProf. Tommaso Melodia (Advisor)\nProf. Kaushik Chowdhury\nProf. Yanzhi Wang \nAbstract:\nFuture communication networks must address the scarcity of spectrum to accommodate the extensive growth of heterogeneous wireless devices. Efforts are underway to address spectrum coexistence\, enhance spectrum awareness\, and bolster authentication schemes. Wireless signal recognition is becoming increasingly significant for spectrum monitoring\, spectrum management\, and secure communications\, among other applications. Consequently\, comprehensive spectrum awareness at the edge has the potential to serve as a key enabler for the emerging beyond-5G (fifth generation) networks. State-of-the-art studies in this domain have (i) only focused on a single task – modulation or signal (protocol) classification or radio frequency fingerprinting – which in many cases is insufficient information for a system to act on\, (ii) considered either radar or communication waveforms (a homogeneous waveform category)\, and (iii) not addressed edge deployment during the neural network design phase. In this dissertation\, deep learning is applied to various signal recognition problems from a multi-task perspective with an emphasis on edge deployment. To address edge deployment\, various techniques are applied to the signal recognition problems under consideration (modulation\, wireless protocol\, and emitter fingerprint recognition) to design a scalable and computationally efficient framework. 
While designing the edge-deployable architectures\, the generalization capability of the architectures is evaluated under various circumstances to quantify their performance in real-world settings\, such as emissions from actual emitters (commercial emissions wherever applicable) and training with one propagation scenario while testing under a never-before-seen setting. \nThe study proceeds in stages: multi-task learning is first applied to wireless standard and modulation recognition\; deep compression is then applied to CBRS radar waveform classification\; next\, radio frequency fingerprinting of commercial WiFi and Bluetooth emissions is studied using novel multi-task attentional architectures\; and finally\, multi-task learning combined with deep compression is employed to deploy the architectures in a real-time streaming radio testbed for real-time inference of wireless standard and modulation recognition. The feasibility of employing deep compression techniques is carefully evaluated in a real-world deployment setting to quantify the performance from a computational and inference capacity perspective.
URL:https://ece.northeastern.edu/event/anu-jagannath-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240403T153000
DTEND;TZID=America/New_York:20240403T170000
DTSTAMP:20260414T115913
CREATED:20240319T181441Z
LAST-MODIFIED:20240319T181441Z
UID:6841-1712158200-1712163600@ece.northeastern.edu
SUMMARY:Kaustubh Shivdikar PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nKaustubh Shivdikar \nTitle:\nEnabling Accelerators for Graph Computing \nDate:\n4/3/2024 \nTime:\n3:30 PM \nLocation: Zoom \nCommittee Members:\nProf. David Kaeli (Advisor)\nProf. Devesh Tiwari\nProf. Ajay Joshi (Boston University)\nProf. John Kim (KAIST)\nProf. José Luis Abellán (University of Murcia) \nAbstract:\nThe advent of Graph Neural Networks (GNNs) has revolutionized the field of machine learning\, offering a novel paradigm for learning on graph-structured data. Unlike traditional neural networks\, GNNs are capable of capturing complex relationships and dependencies inherent in graph data\, making them particularly suited for a wide range of applications including social network analysis\, molecular chemistry\, and network security. The impact of GNNs in these domains is profound\, enabling more accurate models and predictions\, and thereby contributing significantly to advances in these fields. \nGNNs\, with their unique structure and operation\, present new computational challenges compared to conventional neural networks. This requires comprehensive benchmarking and a thorough characterization of GNNs to obtain insight into their computational requirements and to identify potential performance bottlenecks. In this thesis\, we aim to develop a better understanding of how GNNs interact with the underlying hardware and will leverage this knowledge as we design specialized accelerators and develop new optimizations\, leading to more efficient and faster GNN computations. \nA pivotal component within GNNs is the Sparse General Matrix-Matrix Multiplication (SpGEMM) kernel\, known for its computational intensity and irregular memory access patterns. In this thesis\, we address the challenges posed by SpGEMM by implementing a highly optimized hashing-based SpGEMM kernel tailored for a custom accelerator. 
This optimization is crucial to enhancing the performance of GNN workloads\, ensuring that the acceleration potential of custom hardware is fully realized. \nSynthesizing these insights and optimizations\, we design state-of-the-art hardware accelerators capable of efficiently handling various GNN workloads. Our accelerator architectures are built on our characterization of GNN computational demands\, providing clear motivation for our approaches. Furthermore\, we extend our exploration to emerging GNN workloads. This exploration into novel models underlines our comprehensive approach\, as we strive to enable accelerators that are not just performant\, but also versatile\, able to adapt to the evolving landscape of graph computing.
URL:https://ece.northeastern.edu/event/kaustubh-shivdikar-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240403T110000
DTEND;TZID=America/New_York:20240403T123000
DTSTAMP:20260414T115913
CREATED:20240403T222458Z
LAST-MODIFIED:20240403T222458Z
UID:6863-1712142000-1712147400@ece.northeastern.edu
SUMMARY:Batool Salehihikouei PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nBatool Salehihikouei \nTitle:\nLeveraging Deep Learning on Multimodal Sensor Data for Wireless Communication: From mmWave Beamforming to Digital Twins \nDate:\n4/3/2024 \nTime:\n11:00:00 AM \nLocation: EXP-601A \nCommittee Members:\nProf. Kaushik Chowdhury (Advisor)\nProf. Hanumant Singh\nProf. Josep Jornet\nDr. Mark Eisen \nAbstract:\nWith the widespread adoption of Internet of Things (IoT) devices\, a wide variety of sensors are now present in different environments. For example\, self-driving vehicles and automated warehouses depend on sensor information for navigation and management of robots\, respectively. In this dissertation\, we present methods where these sensors are re-purposed to assist network management in wireless communication\, especially when classic approaches fall short of providing the required quality of service (QoS). This thesis presents data-driven and AI-based methods where multimodal sensor information is used for two applications: (i) beamforming at the mmWave band and (ii) joint optimization of navigation and network management in warehouse environments. In the first part\, we study multimodal beamforming methods for mmWave vehicular networks. First\, we present deep learning fusion algorithms\, where the inputs from a multitude of sensor modalities such as GPS (Global Positioning System)\, camera\, and LiDAR (Light Detection and Ranging) are combined towards predicting the optimum beam at the mmWave band. We show that fusing the multimodal sensor data improves the prediction accuracy\, compared to using single modalities. Second\, we study the trade-off between the accuracy and cost of different learning strategies and demonstrate that federated learning is the most successful learning strategy with respect to communication overhead. 
Third\, we propose algorithms to further reduce the communication overhead by incorporating a pruning strategy tailored to the distributed nature of federated learning systems. Fourth\, we propose a modality-agnostic deep learning paradigm that operates on any possible combination of sensor modalities. In part two\, we propose using digital twins to overcome the challenges of data scarcity and the closed-world assumption in deep learning algorithms. A digital twin is a replica of a real-world entity\, typically used for studying the impact of configuration settings in a safe\, digital environment. In this dissertation\, we propose a framework that operates through the combined use of DL models and emulations run in the twin. Moreover\, we use digital twins to generate training labels and fine-tune the models for unseen scenarios. Finally\, we study a robotic industrial setting\, where the path planning policy is continuously updated by monitoring the dynamics of the real world\, constructing the digital twin\, and updating the policy. The constructed twin captures the features of both physical and RF environments in the digital world and includes a reinforcement learning algorithm that jointly optimizes navigation and network resource management.
URL:https://ece.northeastern.edu/event/batool-salehihikouei-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240401T103000
DTEND;TZID=America/New_York:20240401T113000
DTSTAMP:20260414T115913
CREATED:20240319T180923Z
LAST-MODIFIED:20240319T180923Z
UID:6835-1711967400-1711971000@ece.northeastern.edu
SUMMARY:Reza Vafaee PhD Proposal Review
DESCRIPTION:Announcing:\nPhD Proposal Review \nName:\nReza Vafaee \nTitle:\nEfficient Algorithms for Sparse Sensor Scheduling in Large-Scale Dynamical Systems with Performance Guarantees \nDate:\n4/1/2024 \nTime:\n10:30:00 AM \nLocation: Zoom \nCommittee Members:\nProf. Milad Siami (Advisor)\nProf. Eduardo Sontag\nProf. Laurent Lessard\nProf. Alex Olshevsky (Boston University) \nAbstract:\nThis research proposal introduces innovative frameworks for sparse sensor scheduling in large-scale dynamical networks. The first framework addresses sensor scheduling in discrete-time linear time-invariant dynamical networks\, presenting a novel learning-based rounding method to convert weighted sensor schedules into sparse\, unweighted schedules while maintaining comparable observability performance. The second framework extends the approach to dynamically select sensors for linear time-varying systems\, utilizing an online sparse sensor scheduling framework with randomized algorithms to approximate fully-sensed systems with a constant average number of active sensors at each time step. Finally\, a myopic approach within a Kalman filtering framework is adopted in the third framework\, addressing non-submodular sensor scheduling in large-scale linear time-varying dynamics. A simple greedy algorithm is employed\, providing approximation bounds based on submodularity and curvature concepts. Simulation results validate the theoretical foundations and demonstrate the proposed approach’s superiority over existing methods.
URL:https://ece.northeastern.edu/event/reza-vafaee-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240329T100000
DTEND;TZID=America/New_York:20240329T120000
DTSTAMP:20260414T115913
CREATED:20240319T181258Z
LAST-MODIFIED:20240319T181258Z
UID:6839-1711706400-1711713600@ece.northeastern.edu
SUMMARY:Matthew Wallace MS Thesis Defense
DESCRIPTION:Announcing:\nMS Thesis Defense \nName:\nMatthew Wallace \nTitle:\nModel Predictive Planning \nDate:\n3/29/2024 \nTime:\n10:00:00 AM \nLocation:\nRoom: HS 204. Link: Teams \nCommittee Members:\nProf. Laurent Lessard (Advisor)\nProf. Michael Everett\nProf. Derya Aksaray \nAbstract:\nThis thesis presents Model Predictive Planning (MPP)\, a trajectory planner that enables low-agility vehicles such as fixed-wing aircraft to navigate obstacle-laden environments. MPP consists of (1) a multi-path planning procedure that identifies candidate paths\, (2) a raytracing procedure that generates linear constraints around these paths that enforce obstacle avoidance\, and (3) a convex quadratic program that finds a feasible trajectory within these constraints if one exists. Low-agility aircraft cannot track arbitrary paths\, so refining a given path into a trajectory that respects the vehicle’s limited maneuverability and avoids obstacles often leads to an infeasible optimization problem. The critical feature of MPP is that it efficiently considers multiple candidate paths during the refinement process\, thereby greatly increasing the chance of finding a feasible and trackable trajectory. I begin by presenting a background on path planning\, trajectory optimization\, and Model Predictive Control. This is followed by a presentation of the MPP algorithm. Finally\, I demonstrate the effectiveness of MPP on both a longitudinal and a 3D aircraft model.
URL:https://ece.northeastern.edu/event/matthew-wallace-ms-thesis-defense/
END:VEVENT
END:VCALENDAR