BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical & Computer Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ece.northeastern.edu
X-WR-CALDESC:Events for Department of Electrical & Computer Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210507T110000
DTEND;TZID=America/New_York:20210507T120000
DTSTAMP:20260515T143428Z
CREATED:20210506T233919Z
LAST-MODIFIED:20210506T233919Z
UID:4928-1620385200-1620388800@ece.northeastern.edu
SUMMARY:ECE Faculty Seminar: Sumientra Rampersad
DESCRIPTION:Faculty Seminar: Is temporal interference the key to noninvasive deep brain stimulation? Answers from simulation studies in mice and humans. \nSumientra Rampersad \nLocation: Zoom Link \nAbstract: Transcranial current stimulation (tCS) has been used for two decades to noninvasively investigate and influence brain function in both healthy volunteers and clinical populations. While many positive effects have been found\, the goals of high focality\, accurate targeting and deep stimulation are yet to be achieved. Transcranial temporal interference stimulation (tTIS) is a new form of tCS that might improve the method on all three fronts. tTIS uses two alternating currents to create an amplitude-modulated electric field that can peak deep in the brain. A recent murine study showed promising effects of tTIS and concluded that the technique may be used as a noninvasive form of deep brain stimulation in humans\, but results from human experiments have not yet been published. In this talk I will present results of finite element simulations with realistic head models to investigate the electric fields induced by tTIS in the brain\, comparing results in murine and human head models for tTIS and conventional tCS. Due to the nonlinear nature of tTIS\, conventional methods to optimize tCS fields for a specific brain target cannot be used. I will present two nonconvex optimization methods for tTIS and compare their efficiency and results. Finally\, I will discuss the implications of the results of these simulation and optimization studies for potential applications of tTIS in humans. \nBio: Sumientra Rampersad is an Assistant Research Professor in the Department of Electrical and Computer Engineering at Northeastern University in Boston\, where she leads the Brain Stimulation & Simulation Lab. Dr. 
Rampersad’s research aims to improve understanding of the working mechanisms behind neuromodulation and improve its application using computational methods and experiments with human subjects. She investigates invasive (ECoG\, sEEG) and noninvasive (tCS\, TMS) brain stimulation\, as well as peripheral stimulation\, and is especially interested in bridging the gap between modeling and experiments through model-based experimentation. Her research in collaboration with various academic and clinical partners has been awarded funding by NIA\, NINDS and NIMH. Dr. Rampersad was previously a research scientist in Northeastern’s Cognitive Systems Lab and obtained her PhD at the Radboud University Donders Institute in Nijmegen\, the Netherlands.
URL:https://ece.northeastern.edu/event/ece-faculty-seminar-sumientra-rampersad/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210513T140000
DTEND;TZID=America/New_York:20210513T150000
DTSTAMP:20260515T143428Z
CREATED:20210503T175624Z
LAST-MODIFIED:20210510T175607Z
UID:4880-1620914400-1620918000@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Siyue Wang
DESCRIPTION:PhD Proposal Review: Towards Robust and Secure Deep Learning Models and Beyond \nSiyue Wang \nLocation: Zoom Link \nAbstract: Modern science and technology have witnessed the breakthroughs made by deep learning over the past decades. Fueled by rapid improvements in computational resources\, learning algorithms\, and massive amounts of data\, deep neural networks (DNNs) have played a dominant role in more and more real-world applications. Nonetheless\, there is a spring of bitterness mingled with this remarkable success – recent studies reveal two main security threats to DNNs that limit their widespread use: 1) the robustness of DNN models under adversarial attacks\, and 2) the protection and verification of the intellectual property of well-trained DNN models. \nIn this dissertation\, we first focus on the security problem of how to build robust DNNs under adversarial attacks\, where deliberately crafted small perturbations added to a clean input can lead to wrong predictions with high confidence. We approach the solution by incorporating stochasticity into DNN models. We propose multiple schemes to harden DNN models against adversarial threats\, including Defensive Dropout (DD)\, Hierarchical Random Switching (HRS)\, and Adversarially Trained Model Switching (AdvMS). \nThe second part of this dissertation focuses on how to effectively protect the intellectual property of DNNs and reliably identify their ownership. We propose Characteristic Examples (C-examples) for effectively fingerprinting DNN models\, featuring high robustness to the well-trained DNN and its derived versions (e.g.\, pruned models) as well as low transferability to unassociated models. The generation process of our fingerprints does not intervene in the training phase\, and no additional data are required from the training/testing set.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-siyue-wang/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210521T110000
DTEND;TZID=America/New_York:20210521T120000
DTSTAMP:20260515T143428Z
CREATED:20210503T175740Z
LAST-MODIFIED:20210503T175740Z
UID:4881-1621594800-1621598400@ece.northeastern.edu
SUMMARY:ECE MS Thesis Defense: Daniel Uvaydov
DESCRIPTION:MS Thesis Defense titled DeepSense: Fast Wideband Spectrum Sensing Through Real-Time In-the-Loop Deep Learning \nDaniel Uvaydov \nLocation: Microsoft Teams \nAbstract: Spectrum sharing will be a key technology to tackle spectrum scarcity in the sub-6 GHz bands. To fairly access the shared bandwidth\, wireless users will necessarily need to quickly sense large portions of spectrum and opportunistically access unutilized bands. The key unaddressed challenges of spectrum sensing are that (i) it has to be performed with extremely low latency over large bandwidths to detect tiny spectrum holes and to guarantee strict real-time digital signal processing (DSP) constraints; (ii) its underlying algorithms need to be extremely accurate\, and flexible enough to work with different wireless bands and protocols to find application in real-world settings. To the best of our knowledge\, the literature lacks spectrum sensing techniques able to accomplish both requirements. In this paper\, we propose DeepSense\, a software/hardware framework for real-time wideband spectrum sensing that relies on real-time deep learning tightly integrated into the transceiver’s baseband processing logic to detect and exploit unutilized spectrum bands. DeepSense uses a convolutional neural network (CNN) implemented in the wireless platform’s hardware fabric to analyze a small portion of the unprocessed baseband waveform to automatically extract the maximum amount of information with the least amount of I/Q samples. We extensively validate the accuracy\, latency and generality performance of DeepSense with (i) a 400 GB dataset containing hundreds of thousands of WiFi transmissions collected “in the wild” with different Signal-to-Noise-Ratio (SNR) conditions and over different days; (ii) a dataset of transmissions collected using our own software-defined radio testbed; and (iii) a synthetic dataset of LTE transmissions under controlled SNR conditions. 
We also measure the real-time latency of the CNNs trained on the three datasets with an FPGA implementation\, and compare our approach with a fixed energy-threshold mechanism. Results show that our learning-based approach can deliver a precision and recall of 98% and 97%\, respectively\, and a latency as low as 0.61 ms.
URL:https://ece.northeastern.edu/event/ece-ms-thesis-defense-daniel-uvaydov/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210525T100000
DTEND;TZID=America/New_York:20210525T110000
DTSTAMP:20260515T143428Z
CREATED:20210517T174657Z
LAST-MODIFIED:20210517T174657Z
UID:4945-1621936800-1621940400@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Mohammad Hossein Hajkazemi
DESCRIPTION:PhD Dissertation Defense: High-performance Translation Layers for Cloud Immutable Storage \nMohammad Hossein Hajkazemi \nLocation: Zoom Link \nAbstract: Most storage interfaces support in-place updates: blocks can be rewritten\, files can be modified at byte granularity\, fields may be updated in database table rows. Yet internally these layers often rely on out-of-place (immutable) writes. In some cases\, this may be necessary to use media\, such as flash\, SMR (shingled magnetic recording) and IMR (interlaced magnetic recording) disks\, which do not allow overwrites. In others\, it is used to simplify the implementation of transactions and/or crash consistency\, in the form of journaling\, write-ahead logging\, shadow paging\, etc. \nIn a storage system\, translation layers perform out-of-place writes\, and they are implemented in different layers of the storage stack\, from the file system to the storage device firmware\, depending on the application. In this dissertation I focus on translation layers for cloud immutable storage technologies to improve cloud I/O performance. As part of my thesis\, I examine translation layers for state-of-the-art immutable storage media such as SMR and IMR used in cloud environments\, proposing several novel algorithms to improve their efficiency. I also introduce FSTL\, a framework to design and implement SMR translation layers. Finally\, I describe Collage\, a virtual disk I developed over S3 (it could be implemented over similar object storage) using a translation layer that performs large\, sequential\, out-of-place writes for high performance. It optionally uses fast local storage for write logging and as a write-back cache\, guaranteeing prefix consistency under all failure conditions and recovery of all acknowledged writes if the local cache is not lost. 
Collage supports snapshots and cloned volumes\, performs well over erasure-coded storage\, and allows consistent asynchronous volume replication over geographic distances. I show that Collage can achieve massive performance improvements (e.g.\, over 100x for microbenchmarks and 10x for macrobenchmarks) over Ceph RBD\, a popular open-source scale-out virtual disk implementation.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-mohammad-hossein-hajkazemi/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210526T130000
DTEND;TZID=America/New_York:20210526T140000
DTSTAMP:20260515T143428Z
CREATED:20210524T222653Z
LAST-MODIFIED:20210524T222653Z
UID:4958-1622034000-1622037600@ece.northeastern.edu
SUMMARY:PhD Dissertation Defense: Kunpeng Li
DESCRIPTION:PhD Dissertation Defense: Visual Learning with Limited Supervision \nKunpeng Li \nLocation: Zoom Link \nAbstract: Deep learning models have achieved remarkable success in many computer vision tasks. However\, they typically rely on large amounts of carefully labeled training data whose annotation process is usually expensive\, time-consuming\, and even infeasible given the task complexity and scarcity of expert knowledge.\nIn this dissertation talk\, I will discuss several explorations in the direction of visual learning with limited supervision\, mainly learning from data with weak forms of annotation and learning from multi-modal data pairs. Specifically\, I will first present a guided attention learning framework that conducts semantic segmentation using mainly image-level labels\, as such a weak form of annotation can be collected much more efficiently than pixel-level labels. Under mild assumptions\, our framework can also be used as a plug-in to existing convolutional neural networks to improve their generalization performance. This is achieved by guiding the network to focus on the correct regions when learning concepts from a limited set of training samples. I will also introduce models that can effectively learn from multi-modal data pairs without relying on dense annotations of visual semantic concepts. Our models incorporate relational reasoning into the visual representation learning process so that the learned representations can be better aligned with the supervision from corresponding text descriptions.
URL:https://ece.northeastern.edu/event/phd-dissertation-defense-kunpeng-li/
END:VEVENT
END:VCALENDAR