BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical & Computer Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical & Computer Engineering
X-ORIGINAL-URL:https://ece.northeastern.edu
X-WR-CALDESC:Events for Department of Electrical & Computer Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201130T130000
DTEND;TZID=America/New_York:20201130T140000
DTSTAMP:20260502T220443Z
CREATED:20201123T204938Z
LAST-MODIFIED:20201123T204938Z
UID:4592-1606741200-1606744800@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Berkan Kadioglu
DESCRIPTION:PhD Proposal Review: Sample Complexity of Pairwise Ranking Regression \nBerkan Kadioglu \nLocation: Zoom \nAbstract: We consider a rank regression setting\, in which a dataset of $N$ samples with features in $\mathbb{R}^d$ is ranked by an oracle via $M$ pairwise comparisons.\nSpecifically\, there exists a latent total ordering of the samples; when presented with a pair of samples\, a noisy oracle identifies the one ranked higher w.r.t. the underlying total ordering. A learner observes a dataset of such comparisons\, and wishes to regress sample ranks from their features.\nWe show that to learn the model parameters with $\epsilon > 0$ accuracy\, it suffices to conduct $M \in \Omega(dN\log^3 N/\epsilon^2)$ comparisons uniformly at random when $N$ is $\Omega(d/\epsilon^2)$. \n 
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-berkan-kadioglu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201130T093000
DTEND;TZID=America/New_York:20201130T103000
DTSTAMP:20260502T220443Z
CREATED:20201121T024753Z
LAST-MODIFIED:20201123T205506Z
UID:4591-1606728600-1606732200@ece.northeastern.edu
SUMMARY:ECE MS Thesis Defense: Sila Deniz Calisgan
DESCRIPTION:MS Thesis Defense: MEMS Infrared Resonant Detectors With Near-Zero Power Readout For Miniaturized Low Power Systems \nSila Deniz Calisgan \nLocation: Online \nAbstract: The demand for low-cost and low-power microsystems for spectrally-selective IR sensing has been rising with the proliferation of the Internet of Things (IoT) for applications such as security surveillance and natural disaster monitoring. As a result\, there is a need for low-power\, high-sensitivity IR sensors with minimal deployment and maintenance cost that can detect trace levels of chemicals. This thesis reports on the first experimental demonstrations of passive integrated microsystems based on transmission spectroscopy using narrowband uncooled microelectromechanical resonant infrared (IR) detectors. Moreover\, the MEMS-CMOS integrated microsystem can turn itself ON to quantify the intensity of infrared radiation when an above-threshold IR signature is present\, but otherwise remain dormant with near-zero standby power consumption. The proposed sensor system combines the unique advantages of two recently developed technologies\, namely\, the zero-power nature of micromechanical photoswitches (MPs) and the high resolution of aluminum nitride (AlN) MEMS resonant infrared detectors\, to achieve an unprecedented IR sensing capability. Thanks to the spectral selectivity enabled by the plasmonically enhanced thermo-mechanical transduction in MEMS structures\, the proposed sensor system is capable of discriminating the spectral content of incoming IR radiation for the identification of events of interest. The prototype presented here is automatically powered up by the MP when the incoming IR radiation exceeds 440 nW\, showing a high IR detection resolution in the active state and near-zero power consumption (~3 nW) in standby. The ultrathin plasmonic absorber with narrow bandwidth (FWHM < 17%) and near-perfect IR absorption (η > 92%)\, coupled with the high IR detection capability (NEP ~463 pW/√Hz) of the AlN resonator\, was exploited for a filter-free spectroscopic chemical sensor based on uncooled AlN resonant IR detectors with a minimum concentration detection limit of <0.01% (Benzonitrile in Hexane).
URL:https://ece.northeastern.edu/event/ece-ms-thesis-defense-sila-deniz-calisgan/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201125T120000
DTEND;TZID=America/New_York:20201125T130000
DTSTAMP:20260502T220443Z
CREATED:20201112T213551Z
LAST-MODIFIED:20201112T213551Z
UID:4572-1606305600-1606309200@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Aykut Onol
DESCRIPTION:PhD Dissertation Defense: Planning of Contact-Interaction Trajectories Using Numerical Optimization \nAykut Onol \nLocation: Zoom Link \nAbstract: Dynamic multi-contact behaviors\, such as locomotion and item manipulation\, remain a challenge for today’s robotic systems. This is primarily due to the discontinuous and non-smooth dynamics introduced by contacts. For mobile manipulators (e.g.\, humanoids) to become useful for dangerous\, dirty\, and dull tasks\, such as those in disaster response\, they need to be capable of interacting with their cluttered\, constrained\, and changing environments. It is therefore essential to develop methods that would enable robots to plan and execute contact-rich motions in dynamic surroundings.\nIn this dissertation research\, we investigate the planning of contact-interaction trajectories and utilize numerical optimal control techniques to solve this problem in a generalizable and computationally tractable way. We develop a contact-implicit trajectory optimization framework for the automatic discovery of dynamic contact-rich behaviors given only a high-level goal\, i.e.\, the desired configuration of the environment. First\, a variable smooth contact model is introduced to improve the convergence of gradient-based optimization without compromising the physical fidelity of the resulting motions. This is achieved by employing smooth virtual forces that act as a decoupled relaxation of the rigid-body contact model. Second\, we develop a sequential convex optimization procedure that provides reliable convergence characteristics while solving this non-convex problem. Third\, a penalty loop approach is proposed to generalize this method to a wide range of robotic applications.\nIn addition to these\, we develop a novel Coulomb friction model and an on-the-fly contact constraint activation method using state-triggered constraints (STCs). STCs are a more modular alternative to complementarity constraints\, which are widely used to model discrete aspects of contact-related problems. Our extensive simulation experiments demonstrate that STCs hold immense promise for efficiently modeling a broad range of discrete elements in the planning and control of contact-interaction trajectories. As a result\, this dissertation presents methods that enable the planning of dynamic contact-rich behaviors for different robots and tasks without requiring any parameter tuning or a tailored initial guess.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-aykut-onol/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201124T140000
DTEND;TZID=America/New_York:20201124T150000
DTSTAMP:20260502T220443Z
CREATED:20201103T210959Z
LAST-MODIFIED:20201103T210959Z
UID:4552-1606226400-1606230000@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Joseph Robinson
DESCRIPTION:PhD Dissertation Defense: Automatic Face Understanding: Recognizing Families in Photos \nJoseph Robinson \nLocation: Zoom Link \nAbstract: Visual kinship recognition has an abundance of practical uses. For this\, we built the largest database for kinship recognition\, FIW\, entirely in-house at no cost using a semi-automatic labeling scheme. Specifically\, we first aligned faces detected in family photos with names in the corresponding text metadata to mine label proposals with high confidence. The remaining data were labeled using a novel clustering algorithm that used the label proposals as side information to guide more accurate clusters\, yielding great savings in time and human input. Statistically\, FIW shows enormous gains over its predecessors. We have several benchmarks in kinship verification\, family classification\, tri-subject verification\, and large-scale search & retrieval. We also trained CNNs on FIW and deployed the model on the renowned KinWild I and II to achieve state-of-the-art (SOTA) results. Most recently\, we further augmented FIW with multimedia (MM) for 200 of its 1\,000 families\, a labeled collection we dubbed FIW-MM. Now\, video dynamics\, audio\, and text captions can be used in the decision making of kinship recognition systems. \nFIW continues to pave the way for this research track: (1) advanced SOTA (e.g.\, a marginalized denoising auto-encoder based on metric learning that preserves the intrinsic structures of kin-data and encapsulates discriminating information in learned features); (2) introduced generative models to predict a child’s appearance from a parent pair (i.e.\, proposed an adversarial autoencoder conditioned on age and gender to map between facial appearance and these higher-level features for control of age and gender); (3) designed evaluations with benchmarks to support challenges\, workshops\, and tutorials at top-tier conferences (e.g.\, CVPR\, MM\, FG\, ICME)\, and a premiere Kaggle Competition. We expect FIW will significantly impact research and reality. \nAdditionally\, we tackled the classic problem of facial landmark localization in images. This task has been in focus for decades\, and many solutions have been proposed. However\, there is renewed interest in pushing facial landmark detection technologies to handle more challenging data now that deep networks prevail throughout machine learning. A majority of these networks have objectives based on L1 or L2 norms\, which carry several disadvantages. First of all\, the locations of landmarks are determined from generated heatmaps (i.e.\, confidence maps)\, from which predicted landmark locations (i.e.\, the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. To address this\, we introduced a LaplaceKL objective that penalizes low confidence. Another issue is the dependency on labeled data\, which is expensive to collect and susceptible to error. We addressed both issues by proposing an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims SOTA on renowned benchmarks. Furthermore\, our model remains robust at a reduced size: with 1/8 the number of channels (i.e.\, 0.0398 MB)\, it is comparable to the state-of-the-art in real-time on a CPU. Thus\, our method is of high practical value to real-life applications. \nFinally\, we built the Balanced Faces in the Wild (BFW) dataset to serve as a proxy to measure bias across ethnicity and gender subgroups\, allowing us to characterize FR performance per subgroup. We show performance is non-optimal when a single score threshold is used to determine whether sample pairs are genuine or imposter. Furthermore\, actual performance ratings vary greatly from those reported across subgroups. Thus\, claims of specific error rates only hold for populations matching those of the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on the facial encodings extracted using SOTA deep nets. Not only does this technique balance performance\, but it also boosts overall performance. A benefit of the proposed method is that it preserves identity information in facial features while removing demographic knowledge in the lower-dimensional features. The removal of demographic knowledge prevents potential future biases from being injected into decision making. Additionally\, privacy concerns are addressed by this removal. We explore qualitatively why this works with hard samples. We also show quantitatively that subgroup classifiers can no longer learn from the encodings mapped by the proposed method. \n 
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-joseph-robinson/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201113T140000
DTEND;TZID=America/New_York:20201113T150000
DTSTAMP:20260502T220443Z
CREATED:20201110T024923Z
LAST-MODIFIED:20201110T024923Z
UID:4570-1605276000-1605279600@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Yumin Liu
DESCRIPTION:PhD Proposal Review: Learning from Spatio-Temporal Data with Applications in Climate Science \nYumin Liu \nLocation: Zoom Link \nAbstract: Climate change is one of the major challenges to human beings and many other species in our time. In the recent decade\, the number of disasters related to climate change\, such as wildfires\, storms\, floods and droughts\, has been increasing\, and the casualty and economic losses caused by them are larger than those of decades ago. This calls for better and more efficient ways to predict climate change in order to better prepare and reduce losses. Predicting climate change involves using historical observational data and model-simulated data\, both of which usually involve multiple locations and timestamps and are spatio-temporal. With the rapid development and progress of machine learning\, its methods have achieved several impactful contributions in many domains; we would like to translate this impact to climate science.\nIn this thesis we address several problems in climate science. This challenging and complex domain enables us to develop\, innovate\, adapt\, and advance machine learning in the following ways. 1) We develop a multi-task learning method to estimate relationships between tasks and learn the basis tasks in different locations\, especially for nearby locations which may share similar climate patterns. This method assumes that the weights of an observed task are a linear combination of several latent basis tasks and that the task relationships can be learnt by imposing a regularized precision matrix. 2) We propose a nonparametric mixture of sparse linear regression models to cluster and identify important climate models for prediction. This model incorporates a Dirichlet Process (DP) to automatically determine the number of clusters\, imposes Markov Random Field (MRF) constraints to guarantee spatio-temporal smoothness\, and selects a subset of global climate models (GCMs) that are useful for prediction within each spatio-temporal cluster via a spike-and-slab prior. We derive an effective Gibbs sampling method for this model. 3) We adapt image super-resolution methods to climate downscaling\, i.e.\, increasing the spatial resolution of climate variables for local impact analysis. The proposed method\, called YNet\, is a novel deep convolutional neural network (CNN) with skip connections and fusion capabilities that performs downscaling for climate variables on multiple GCMs directly rather than on reanalysis data.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-yumin-liu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201112T130000
DTEND;TZID=America/New_York:20201112T140000
DTSTAMP:20260502T220443Z
CREATED:20201103T200924Z
LAST-MODIFIED:20201103T200924Z
UID:4546-1605186000-1605189600@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Yaoshen Yuan
DESCRIPTION:PhD Proposal Review: Enhancements for Monte Carlo Light Modeling Method and Its Applications in Near-infrared-based Brain Techniques \nYaoshen Yuan \nLocation: Microsoft Teams Link \nAbstract: Studying light propagation in biological tissues is critical for developing biophotonics techniques and their applications. The Monte Carlo (MC) method\, a stochastic solver for the radiative transfer equation\, has been recognized as the gold standard for modeling light propagation in turbid media. However\, due to the stochastic nature of the MC method\, millions or even billions of photons are usually required to achieve accurate results\, leading to long computational times even with acceleration using graphics processing units (GPUs).\nFurthermore\, due to the rapid advances in multi-scale optical imaging techniques such as optical coherence tomography (OCT) and multiphoton microscopy (MPM)\, there is an increasing need to model light propagation in extremely complex tissues such as vessel networks. Mesh-based Monte Carlo (MMC) is usually superior to the voxel-based MC method for such modeling since\, unlike grid-like voxels\, tetrahedral meshes can represent arbitrary shapes with curved boundaries. However\, the mesh density can be excessively high when the tissue structure is extremely complex\, resulting in high computational costs and memory demand. The goal of this proposal is to solve the challenges mentioned above. \nTo tackle the first challenge\, we developed a filtering approach with GPU acceleration to improve the signal-to-noise ratio (SNR) of the results while keeping the number of simulated photons low. The adaptive non-local means (ANLM) filter is selected to suppress the stochastic noise in MC results because 1) the filtering process on each voxel is mutually independent\, making it amenable to parallel computing; 2) it has high performance in denoising and a strong capacity for edge preservation. For the second problem\, a novel method\, implicit mesh-based Monte Carlo (iMMC)\, was proposed to significantly reduce the mesh density. The iMMC method utilizes the edges\, nodes and faces of the tetrahedral mesh to model tissue structures shaped as cylinders\, spheres and thin layers. The typical applications for edge-\, node- and face-based iMMC are vessel networks\, porous media and membranes\, respectively. Lastly\, we applied MC simulations and the aforementioned filter to segmented brain models derived from an MRI neurodevelopmental atlas to estimate the light dosage for transcranial photobiomodulation (t-PBM)\, a technique for treating major depressive disorder using near infrared light\, across the lifespan. Furthermore\, a new approach that can improve the penetration depth in optical brain imaging as well as PBM is proposed. In this approach\, the possibility of placing light sources in head cavities is investigated using MC simulations. The preliminary results demonstrate better performance in deep brain monitoring compared to the standard transcranial approach using the 10-20 EEG positioning system. \n 
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-yaoshen-yuan/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201109T150000
DTEND;TZID=America/New_York:20201109T160000
DTSTAMP:20260502T220443Z
CREATED:20201105T220658Z
LAST-MODIFIED:20201105T220658Z
UID:4557-1604934000-1604937600@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Carlos Bocanegra
DESCRIPTION:PhD Proposal Review: A Systems Approach to Spectrum Sharing and Multi-antenna Operation for Emerging Networks \nCarlos Bocanegra \nLocation: TBD \nAbstract: The demands on wireless connectivity and sensing are ever-increasing\, fueled by both emerging applications and an exponential growth in the number of connected devices. Availability of new spectrum in the sub-6GHz bands is limited\, which motivates research on innovative ways to utilize the currently available spectrum and to explore the use of additional spectrum beyond the 6GHz threshold.\nThis thesis explores three promising techniques with a focus on the Physical (L1) and Link (L2) layers. Approach 1 concerns spectrum sharing in the sub-6GHz band\, where wireless standards are granted opportunistic access within unlicensed spectrum to increase their usable bandwidth. Approach 2 concerns the design of massive multi-antenna systems\, through which devices can benefit from beamforming gains at transmission and diversity gains at reception. Approach 3 concerns the use of very high frequency bands (VHFB)\, or so-called millimeter-wave bands. Each of these approaches\, however\, has its own set of challenges\, such as fairness in channel access\, interference management\, and optimal beamforming under user-specified quality-of-service\, respectively.\nFor spectrum sharing as described in Approach 1\, this thesis presents E-Fi\, an interference-evasion mechanism that allows Wi-Fi devices to survive opportunistic in-band LTE transmissions. The main contribution is to achieve this without any cooperation between the two\, using Almost Blank Subframes (ABS). E-Fi ensures fair channel access while reusing existing Wi-Fi standards\, i.e.\, Wi-Fi Direct\, thus incurring minimal deployment costs.\nFor Approach 2\, this thesis introduces two multi-antenna frameworks\, a decentralized one for cellular-oriented and a centralized one for IoT-oriented applications\, respectively. For the former\, it presents NetBeam\, a reconfigurable system of distributed 3D beamformers (3DBF). While NetBeam uses 3DBF to tackle multi-user interference in 3D multi-user deployments\, it employs machine learning and efficient antenna selection strategies to deliver the individually required SINR levels to users. As a centralized multi-antenna system\, it presents RFGo\, a privacy-preserving self-checkout system using passive Radio Frequency ID (RFID) tags. RFGo achieves fast tag discovery using a custom-built RFID reader\, which simultaneously decodes a tag’s response from multiple carrier-level synchronized antennas. RFGo achieves reliable tag detection by means of a neural network that accurately discriminates products within the checkout area from those lying outside of it.\nIn the proposed work that covers Approach 3\, this thesis outlines an algorithmic framework for millimeter-wave communications that efficiently allocates antenna elements from Base Stations (BS) to users for hybrid beamforming\, while considering their individual traffic demands. We propose to trade off flexible array geometries (which allow limiting interference to specific regions) against the irregularity that results in the sidelobes.\nIn summary\, this thesis tackles complex challenges in future 5G and beyond wireless networks through a combination of theory\, algorithm design and experimental implementation.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-carlos-bocanegra/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201105T120000
DTEND;TZID=America/New_York:20201105T130000
DTSTAMP:20260502T220443Z
CREATED:20201023T211519Z
LAST-MODIFIED:20201023T211519Z
UID:4533-1604577600-1604581200@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Aziz Kocanaogullari
DESCRIPTION:PhD Dissertation Defense: Active Recursive Bayesian Classification (Querying and Stopping) for Event Related Potential Driven Brain Computer Interface Systems \nAziz Kocanaogullari \nLocation: Remote (contact akocanaogullari@ece.neu.edu) \nAbstract: Recursive Bayesian classification (RBC) requires optimal latent variable estimation in the presence of noisy observations to achieve real-time sequential decision making. Active RBC\, introduced in this dissertation\, attempts to effectively select queries that lead to more informative observations to rapidly reduce uncertainty until a confident decision is made. Accordingly\, active RBC includes the following fundamental components: (S) a stopping criterion based on the posterior probability to stop evidence collection; (Q) a querying step to decide how to collect further evidence from relevant sources to benefit the speed and accuracy objectives of RBC; (C) a classification objective based on the posterior distribution and the loss values attributed to each true label and decision option pair to determine the optimal decision once the stopping criterion has been satisfied.\nThis dissertation specifically focuses on optimizing querying (Q) and stopping (S) for RBC. Conventional stopping criterion design methodologies lack insight into the RBC geometry and the evolution of the posterior probability vector. Additionally\, conventional active querying methods stall due to misleading prior information. In this case\, the system uses time inefficiently to overcome the provided belief by querying the most likely candidates for a number of iterations. Furthermore\, in contrast to inference and querying being coactive\, the optimality objectives are typically designed separately.\nAn electroencephalography (EEG)-based brain computer interface (BCI) system specifically designed for typing is used as a testbed for active RBC. BCI systems provide a communication pathway between the user and the environment in both medical and non-medical domains. EEG signals are widely used with promising performance to estimate user intent in BCI systems. BCI typing systems are epitomes of RBC-driven systems\, as repeated evidence collection is mandated by highly variable EEG signals given a particular user intent (a latent variable hidden in the brain). However\, in many cases\, EEG-based communication stalls and lacks accuracy and speed due to inefficient RBC.\nTo increase the performance of RBC\, motivated by information-theoretic approaches to coding and active learning\, this dissertation contributes to the literature in three ways: (i) A complete analysis of the stopping criterion and a geometrical description of the RBC problem are provided. Motivated by the posterior motion\, a stopping criterion design is proposed. Moreover\, an early stopping scheme with one-step-ahead prediction is shown to make a decision with a marginal accuracy deficit. (ii) Influenced by the posterior motion\, a new query selection objective is proposed. This querying mechanism is shown to result in rapid and accurate inference in scenarios in which the recursive inference starts with a misleading (or adversarial) prior probability distribution for the latent variable of interest (e.g.\, a user attempting to type a letter/word that is unlikely according to the language model). (iii) Querying and stopping approaches are considered together\, and an experimental study specifically on BCI typing is presented. Additionally\, the dissertation shows it is possible to reformulate RBC with Rényi entropy measures\, solidifying the connection between stopping and querying objective design. All contributions are verified using the BCI typing system “BCIPy” with simulations and human-in-the-loop experiments.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-aziz-kocanaogullari/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201102T090000
DTEND;TZID=America/New_York:20201102T100000
DTSTAMP:20260502T220443Z
CREATED:20201026T214743Z
LAST-MODIFIED:20201026T214743Z
UID:4537-1604307600-1604311200@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Ya Guo
DESCRIPTION:PhD Proposal Review: Power Optimization and Management of PV Grid-Connected Microgrid in Energy Market \nYa Guo \nLocation: Zoom Link \nAbstract: Microgrids can integrate renewable energy resources (RES)\, such as photovoltaics (PV) and wind energy generation\, with the main power grid to provide reliable\, secure and affordable energy. Fortunately\, the electricity markets have evolved to facilitate RES participation. One major challenge lies in how to manage power and energy flow within a grid-connected microgrid system to optimize financial gains while maintaining high reliability. This is challenging since electricity trading policies and tariffs vary across utility companies and areas. Furthermore\, RES are mostly intermittent sources. Adding energy storage systems (ESS) to microgrids becomes a vital solution for mitigating the intermittency of energy production\, as well as providing energy backup in emergencies. Battery ESS (BESS) are deployed on a large scale in grid-connected installations worldwide. Optimal operation of the energy storage system also becomes important for microgrid end-users to ensure that they at least recover the BESS operating cost. Moreover\, there always exist uncertainties in RES power generation\, load power consumption\, and even dynamic electricity pricing. It is vital to deal with forecast errors in real-time. Developing proper uncertainty characterization can better facilitate whole-system power management to limit the negative influence of these uncertainties.\nIn this research\, a dynamic programming (DP) algorithm is proposed to find the globally optimal power flow dispatch of a PV grid-connected microgrid based on forecasts. Various electricity pricing structures\, including fixed pricing\, time-of-use (TOU) pricing and real-time pricing (RTP)\, are explored for customers in different areas. A nonlinear battery charging/discharging degradation model is also exploited for system power optimization. The objective is to achieve the minimum microgrid system operation cost\, in other words\, the maximum economic benefit for end-users. In addition\, this research proposes power control methods to implement the forecasted optimal power schedule\, as well as to deal with discrepancies between forecast and real-time PV\, load and RTP. A rule-based (RB) algorithm is also studied as a baseline for comparison. Moreover\, uncertainty characterizations for PV\, load and dynamic pricing will be developed using Monte Carlo Simulation (MCS)\, and a stochastic optimization approach will be explored to account for these uncertainties.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-ya-guo/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201030T110000
DTEND;TZID=America/New_York:20201030T120000
DTSTAMP:20260502T220443Z
CREATED:20201024T021519Z
LAST-MODIFIED:20201024T021519Z
UID:4534-1604055600-1604059200@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Ran Liu
DESCRIPTION:PhD Dissertation Defense: Optimal Proactive Services with Uncertain Predictions \nRan Liu \nLocation: Zoom Link \nAbstract: With the evolution of technologies such as machine learning and data science\, proactive services with the aid of predictive information have been recognized as a promising method to exploit network bandwidth\, storage\, and computation resources to achieve improved user experiences\, especially delay performance.\nSpecifically\, services can be processed proactively when the system is lightly loaded\, with the results stored to meet user demand in the future.\nOur primary goal in the thesis is to investigate the fundamental performance improvement that can be achieved from proactive services under uncertain predictions. We aim to analyze the queueing behavior of proactive systems under certain proactive strategies and characterize the improvement in terms of the limiting fraction of proactive work and the limiting average delay. \nIn the first work\, we analytically investigate the problem of how to efficiently utilize uncertain predictive information to design proactive caching strategies with provably good access-delay characteristics.\nFirst\, we derive an upper bound for the average amount of proactive service per request that the system can support.\nThen we analyze the behavior of a family of threshold-based proactive strategies with a Markov chain\, which shows that the average amount of proactive service per request can be maximized by properly selecting the threshold.\nFinally\, we propose the UNIFORM strategy\, which is the threshold-based strategy with the optimal threshold\, and show that it outperforms the commonly used Earliest-Deadline-First (EDF) type proactive strategies in terms of delay.\nWe perform extensive numerical experiments to demonstrate the influence of thresholds on delay performance under the threshold-based strategies\, and specifically\, compare the EDF strategy and the UNIFORM strategy to verify 
our results. \nIn the second work\, we study a generalized proactive service problem with a more general service model and derive closed-form expressions for the limiting average fraction of proactive work and the limiting average delay.\nHere\, we analytically investigate how to optimally take advantage of under-utilized network resources for proactive services with the aid of uncertain predictive information.\nSpecifically\, we first derive an upper bound on the fraction of services that can be completed proactively by a single-server system.\nThen we analyze a family of fixed-probability (FIXP) proactive strategies in two proactive systems\, namely the Genie-Aided system and the Realistic Proactive system.\nWe analyze the asymptotic behavior of the FIXP strategies by modeling each system as a Markov process and studying the corresponding embedded Markov chain.\nWe obtain optimal FIXP strategies in both systems and prove that they maximize the limiting fraction of proactive service among all proactive strategies and minimize the average delay among FIXP strategies.\nWe perform extensive numerical experiments to demonstrate the influence of the FIXP parameter on the limiting fraction of proactive service and the limiting average delay in both proactive systems\, and verify our theoretical results in multiple scenarios.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-ran-liu-2/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201027T150000
DTEND;TZID=America/New_York:20201027T160000
DTSTAMP:20260502T220443
CREATED:20201021T182911Z
LAST-MODIFIED:20201021T182911Z
UID:4524-1603810800-1603814400@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Kunpeng Li
DESCRIPTION:PhD Proposal Review: Attention Mechanism in Deep Learning for Visual Recognition \nKunpeng Li \nLocation: Zoom Link \nAbstract: Deep learning models have achieved great success in various visual recognition tasks such as image classification\, semantic segmentation\, and visual semantic matching. Instead of just treating them as black boxes\, tremendous effort has recently been put into explaining how these models work and bridging the gap between deep neural networks and human cognition systems. Visual attention is one of the most effective ways to explain a network’s decision\, by highlighting the regions of an image that are responsible for it. It is inspired by the attention mechanism of the human vision system\, which selectively focuses on the salient features in a visual scene. \nThis thesis focuses on visual attention in deep learning for visual recognition. For the first time\, we make gradient-based attention maps a natural and explicit component of the training pipeline\, such that they are end-to-end trainable. We can then provide guidance on the attention maps and guide the network to focus on the correct things when learning concepts. Under mild assumptions\, our method can be understood as a plug-in to existing convolutional neural networks that improves their generalization performance. Moreover\, the improved attention maps also provide better localization cues for the weakly-supervised semantic segmentation task. \nMoving a step toward higher-level visual understanding with natural language\, we study the effectiveness of building visual reasoning models on top of bottom-up attention regions\, so that the learned visual representations can better capture semantic concepts as expressed in the corresponding text captions. Specifically\, we first build up connections between attention regions and perform reasoning with Graph Convolutional Networks to generate region features with semantic relationships.
Then\, we propose to use a gate and memory mechanism to perform global semantic reasoning on these relationship-enhanced region features\, select the discriminative information\, and gradually generate the representation for the whole scene. Evaluations have been conducted on the MS-COCO and Flickr30K datasets for the image-text matching task.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-kunpeng-li/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201021T090000
DTEND;TZID=America/New_York:20201021T100000
DTSTAMP:20260502T220443
CREATED:20201016T181139Z
LAST-MODIFIED:20201016T181139Z
UID:4521-1603270800-1603274400@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Wenqian Liu
DESCRIPTION:PhD Proposal Review: Explainable Efficient Models for Computer Vision Applications \nWenqian Liu \nLocation: Zoom Link \nAbstract: State-of-the-art deep learning models\, such as Convolutional Neural Networks (CNNs) and generative models\, achieve impressive results\, but with their great performance come great complexity and opacity\, huge parameter spaces\, and little explainability. The criticality of model explainability and output interpretability manifests clearly in time-critical decision-making processes and human-centred applications\, such as healthcare\, security\, and insurance. \nThis thesis tackles explainability and interpretability both as intrinsic qualities of the model architecture and as post-hoc improvements to existing models. \nIn the area of frame prediction in video sequences\, we introduce DYAN\, a novel network with very few parameters that is easy to train\, produces accurate\, high-quality frame predictions\, and is more compact than previous approaches. Another key aspect of DYAN is interpretability: its encoder-decoder architecture is designed following concepts from systems identification theory and exploits the dynamics-based invariants of the data. We also introduce KW-DYAN\, an extension of DYAN that tackles the issue of time lagging in video predictions by introducing a novel way of quantifying prediction timeliness and proposing a new recurrent network for adaptive temporal sequence prediction\, which employs a warping module to reduce dynamic changes and a Kalman filtering module to detect dynamic changes in video frames. Experimental results show reduced lagging on the Caltech and UCF datasets\, while also performing well on other commonly used metrics.
\nIn the area of image classification\, categorization\, and scene understanding\, we observe that gradient-based visual attention techniques have driven much recent effort in using visual attention maps as a means of visually explaining Convolutional Neural Networks (CNNs)\, with impressive results\, but these techniques fail to extend as effectively to explaining generative models such as Variational Autoencoders (VAEs). In this thesis we bridge this crucial gap and propose the first technique to visually explain VAEs by means of gradient-based attention\, with methods to generate visual attention maps from the learned latent space; we also demonstrate that such attention explanations serve more than just explaining VAEs. We show how these attention maps can be used to localize anomalies in images\, achieving state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training\, helping bootstrap the VAEs into learning a disentangled latent space\, as demonstrated on the dSprites dataset.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-wenqian-liu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201015T160000
DTEND;TZID=America/New_York:20201015T170000
DTSTAMP:20260502T220443
CREATED:20201007T191555Z
LAST-MODIFIED:20201007T191555Z
UID:4427-1602777600-1602781200@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Maher Kachmar
DESCRIPTION:PhD Proposal Review: Active Resource Partitioning and Planning for Storage Systems using Time Series Forecasting and Machine Learning Techniques \nMaher Kachmar \nLocation: Zoom Link \nAbstract: In today’s enterprise storage systems\, supported data services such as snapshot delete or drive rebuild can incur tremendous performance overhead if executed inline along with heavy foreground IO\, often leading to missed Service Level Objectives (SLOs). Moreover\, static partitioning of storage system resources such as CPU cores or memory caches may lead to missed Service Level Agreements (SLAs) such as data reduction rate (DRR). However\, typical storage system applications such as Virtual Desktop Infrastructure (VDI) or web services follow repetitive workload patterns that can be learned and/or forecasted. Learning these workload patterns allows us to address several storage system resource partitioning and planning challenges that cannot be overcome with traditional manual tuning and primitive feedback mechanisms. \nWe propose a priority-based background scheduler that learns the storage system’s repetitive workload patterns and allows storage systems to maintain peak performance and meet SLOs while supporting a number of data services. When foreground IO demand intensifies\, system resources are dedicated to servicing foreground IO requests\, and any background processing that can be deferred is recorded for processing in future idle cycles\, as long as our forecaster predicts that the storage pool has remaining capacity. The smart background scheduler adopts a resource partitioning model that allows both foreground and background IO to execute together as long as foreground IOs are not impacted\, harnessing any free cycles to clear background debt.
Using traces from VDI and web services applications\, we show how our technique can outperform a static method that sets fixed limits on the deferred background debt\, reducing SLO violations from 54.6% (when using a fixed background debt watermark) to only 6.2% when the debt is dynamically adjusted by our smart background scheduler. \nThis thesis also proposes a smart capacity planning and recommendation tool that ensures the right number of drives are available in the storage pool to meet both capacity and performance constraints without over-provisioning storage. Aided by forecasting models that characterize workload patterns\, we can predict future storage pool utilization and drive over-wearing. Similarly\, to meet SLOs\, the tool recommends expanding pool space in order to defer more background work through larger debt bins. We also propose a content-aware learning cache (CALC) that uses machine learning techniques to actively partition the storage system cache between a deduplication data digest cache\, a content cache\, and an address-based data cache\, improving cache hit performance while maximizing the data reduction rate (DRR).
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-maher-kachmar/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201012T140000
DTEND;TZID=America/New_York:20201012T150000
DTSTAMP:20260502T220443
CREATED:20201006T232350Z
LAST-MODIFIED:20201006T232350Z
UID:4412-1602511200-1602514800@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Sara Banian
DESCRIPTION:PhD Proposal Review: Content-Aware Design Assistance Frameworks for Graphic Design Layouts \nSara Banian \nLocation: Zoom Link \nAbstract: Layout is an important visual communication factor in graphic design that encompasses a page’s overall composition. During the different design stages\, designers express their requirements through images describing the interface’s visual layout\, hierarchical structure\, and content. They create wireframe layouts to meet user requirements and find relevant design examples to gain inspiration and explore design alternatives. This is not only an iterative process\, but also a time-consuming one. \nIn this proposal\, we aim to design and evaluate design assistance methodologies that augment the process of layout design\, with a particular focus on visual search and wireframe creation in the context of mobile User Interface (UI) design. For visual search\, we investigate how to find design examples that are relevant to the design requirements of a UI layout. Layout retrieval is different from pixel-level image retrieval\, as it requires processing both the spatial layout and the content of the data to retrieve similar images. To achieve this\, I explore the problem of user interface image retrieval from both the data and the model side\, by collecting a more richly annotated\, well-suited dataset and proposing an object-detection-based image retrieval model. The model takes as input a user interface image and retrieves visually similar design examples. It uses object detection to identify the user interface components\, performs semantic segmentation to produce a hierarchical structure\, and trains an attention-aware multi-modal embedding network that learns the structure and content of the given layout design for relevant image retrieval. Results show that the system is capable of retrieving relevant design examples through content analysis.
Next\, I propose a generative framework to investigate how to generate layout wireframes according to user specifications while following common design practices and conventions. The generative framework aims at modeling the content of UI layouts\, taking into account different layout variations and design features.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-sara-banian/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201012T130000
DTEND;TZID=America/New_York:20201012T140000
DTSTAMP:20260502T220443
CREATED:20201005T184512Z
LAST-MODIFIED:20201005T184512Z
UID:4406-1602507600-1602511200@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Muhamed Yildiz
DESCRIPTION:PhD Proposal Review: Interpretable Machine Learning for Retinopathy of Prematurity \nMuhamed Yildiz \nLocation: Zoom Link \nAbstract: Retinopathy of Prematurity (ROP)\, a leading cause of childhood blindness\, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease\, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels\, is the most important feature in determining treatment-requiring ROP. State-of-the-art ROP detection systems employ convolutional neural networks (CNNs) and achieve up to $0.947$ and $0.982$ area under the ROC curve (AUC) in the discrimination of and levels of ROP. However\, due to their black-box nature\, clinicians are reluctant to trust the diagnostic predictions of CNNs. \nFirst\, we aim to create an interpretable\, feature extraction-based pipeline\, namely I-ROP ASSIST\, that achieves CNN-like performance when diagnosing plus disease from retinal images. Our method segments retinal vessels and detects the vessel centerlines. It then extracts features relevant to ROP\, including tortuosity and dilation measures\, and uses these features for classification via logistic regression\, support vector machines\, and neural networks to assess a severity score for the input. For predicting and levels of ROP on a dataset containing 5512 posterior retinal images\, we achieve $0.88$ and $0.94$ AUC\, respectively. Our system\, combining automatic retinal vessel segmentation\, tracing\, feature extraction\, and classification\, is able to diagnose plus disease in ROP with CNN-like performance. \nFurthermore\, we aim to address the interpretability problem of CNN-based ROP detection systems. Incorporating visual attention capabilities in CNNs enhances interpretability by highlighting the regions in the images that CNNs utilize for prediction.
Generic visual attention methods do not leverage structural domain information such as the tortuosity and dilation of retinal blood vessels in ROP diagnosis. We propose the Structural Visual Guidance Attention Networks (SVGA-Net) method\, which leverages structural domain information to guide visual attention in CNNs. SVGA-Net achieves $0.979$ and $0.987$ AUC in predicting and levels of ROP. Moreover\, SVGA-Net consistently results in higher AUC compared to visual attention CNNs without guidance\, baseline CNNs\, and CNNs with structured masks.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-muhamed-yildiz/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201006T120000
DTEND;TZID=America/New_York:20201006T120000
DTSTAMP:20260502T220443
CREATED:20201002T174710Z
LAST-MODIFIED:20201002T174710Z
UID:4401-1601985600-1601985600@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Mahsa Bayati
DESCRIPTION:PhD Defense: Efficient Data Access with Heterogeneous Computing using GPUs and Direct Non-volatile Storage \nMahsa Bayati \nLocation: Zoom Link \nAbstract: The amount of data being collected that requires analysis is growing at an exponential rate. Along with this growth comes an increasing need for storage and computation. Researchers address these needs by (I) deploying distributed bigdata platforms equipped with cutting-edge storage devices\, and (II) building heterogeneous clusters with Central Processing Units (CPUs) and computational accelerators such as Graphics Processing Units (GPUs). The high performance of these mainstream systems is achieved by efficiently accessing data and computation resources and scheduling parallel and distributed tasks. \nThe performance of each job depends on the characteristics of both the application and the underlying storage and computational environments. However\, it is not trivial to maintain efficiency and provide high performance in these mainstream systems. First\, in bigdata platforms like Spark and Hadoop\, full utilization of Solid State Drives (SSDs)\, i.e.\, Non-Volatile Memory Express (NVMe) and Key-Value (KV) SSDs\, is challenging. Data communication between Spark tasks\, levels of parallelism\, and resource co-location significantly affect the achievable I/O throughput. Second\, in heterogeneous systems\, one of the main bottlenecks of GPU computation in I/O-intensive applications is the data transfer bandwidth to GPUs. The traditional GPU approach gets data from host memory\, which can limit data throughput and processing and thus degrade end-to-end performance. In this work\, we initially explore different attributes to exploit the full benefits of various SSDs in bigdata platforms. We then focus on mitigating the data transfer bottleneck in a heterogeneous bigdata framework. \nWe build a heterogeneous framework that facilitates GPU direct access to storage.
Our framework aims to minimize the data transfer delay\, thus enhancing the performance of distributed and parallel tasks to obtain the full benefits of compute and storage resources. Our heterogeneous cluster is supplied with CPUs and GPUs as computing resources and non-volatile flash-based drives as storage resources. We also deploy the Spark bigdata platform to execute large workloads over our cluster. We then adopt a novel technique (i.e.\, Peer-to-Peer Direct Memory Access) to connect GPUs to non-volatile storage directly. Experimental results reveal that our heterogeneous Spark platform successfully bypasses the host memory and enables GPUs to communicate directly with the NVMe drive\, thus achieving higher data transfer throughput. The contributions of the dissertation are: (I) realizing that bigdata processing applications need to consider framework features and application characteristics to fully utilize the high bandwidth of modern SSDs\, where compute and storage locality is essential to optimize cost and performance; and (II) deploying our novel heterogeneous framework supporting GPU direct storage access\, which improves data communication time by around 35%-50% and end-to-end performance by 30%.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-mahsa-bayati/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200227T110000
DTEND;TZID=America/New_York:20200227T120000
DTSTAMP:20260502T220443
CREATED:20200221T194910Z
LAST-MODIFIED:20200221T194910Z
UID:4120-1582801200-1582804800@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Kai Sun
DESCRIPTION:Location: Dana 442 \nSeminar Title: A More Resilient Power Grid with Faster-Than-Real-Time Stability \nAbstract: \nSince the northeast blackout of 1965\, cascading blackouts have continued to happen on power grids in North America and other countries. For a grid operator\, it is vitally important to be aware of the real-time stability and reliability margin of the current grid state under possible disturbances. However\, a real-world power grid is an extremely complex\, nonlinear network system. For instance\, the bulk electric system of a US power grid is typically modeled by nonlinear differential-algebraic equations (DAEs) over 5\,000+ electric machines and 50\,000+ nodes. Fast stability analysis and simulation of such a large-scale dynamical system subject to even a single disturbance is quite challenging. In the next decade\, renewable generation\, such as power electronics-interfaced distributed energy resources\, will reach 30%-50% in the power grids of many countries. That can further increase the complexity of a power grid\, change its dynamic characteristics\, and bring more uncertainties and challenges to real-time grid operations. For a more resilient grid\, the power industry is looking forward to emerging technologies that enable “faster-than-real-time” stability assessment and adaptive\, distributed control to prevent and mitigate cascading power outages. The speaker will share his visions and research in this field and introduce two promising enabling approaches established with ongoing support from NSF and DOE: 1) faster-than-real-time grid simulation using a semi-analytical approach\, and 2) grid stability assessment and control based on a new method named “Nonlinear Modal Decoupling” and the utilization of wide-area measurements and distributed energy resources. \nBio:\nKai Sun is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee\, Knoxville.
He is also a faculty member with the NSF/DOE Engineering Research Center for Ultra-Wide-Area Resilient Electric Energy Transmission Networks (CURENT). He received his Bachelor’s degree in automation in 1999 and his Ph.D. degree in control science and engineering in 2004\, both from Tsinghua University\, Beijing. He received the National Top 100 Doctoral Dissertations Award in 2006 from the Ministry of Education of China. Before joining the University of Tennessee\, Dr. Sun was a project manager with the Electric Power Research Institute (EPRI) from 2007 to 2012 for R&D programs in the areas of grid operations\, planning\, and renewable integration. Earlier\, he worked as a research associate at Arizona State University\, Tempe.\nDr. Sun received the EPRI Chauncey Award\, the institute’s highest honor\, in 2009; two best paper awards from IEEE Power & Energy Society General Meetings in 2014 and 2015; the NSF CAREER Award in 2016; the “Most Valuable Players” Award from the North American Synchrophasor Initiative and DOE in 2016; and the Professional Promise in Research Award in both 2016 and 2019 from the College of Engineering\, University of Tennessee. Dr. Sun authored the book Power System Control under Cascading Failures and 70+ journal publications. He is currently an associate editor for four IEEE journals: IEEE Transactions on Power Systems\, IEEE Transactions on Smart Grid\, IEEE Access\, and IEEE Open Access Journal of Power and Energy.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-kai-sun/
LOCATION:442 Dana\, 360 Huntington Ave\, 442 DA\, Boston\, MA\, 02115\, United States
GEO:42.3387508;-71.0923044
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=442 Dana 360 Huntington Ave 442 DA Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 442 DA:geo:-71.0923044,42.3387508
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200226T110000
DTEND;TZID=America/New_York:20200226T120000
DTSTAMP:20260502T220443
CREATED:20200220T195112Z
LAST-MODIFIED:20200220T195112Z
UID:4117-1582714800-1582718400@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Lili Su
DESCRIPTION:Location: ISEC 138 \nLearning with Distributed Systems: Adversary-Resilience and Neural Networks \nAbstract: \nIn this talk\, Su will first discuss how to secure Federated Learning (FL) against adversarial faults.\nFL is a new distributed learning paradigm proposed by Google. The goal of FL is to enable the cloud (i.e.\, the learner) to train a model without collecting the training data from users’ mobile devices. Compared with traditional learning\, FL suffers from serious security issues\, and several practical constraints call for new security strategies. Towards quantitative and systematic insights into the impacts of those security issues\, Su and her team formulated and studied the problem of Byzantine-resilient Federated Learning. Su proposed two robust learning rules that secure gradient descent against Byzantine faults. The estimation error achieved under her more recently proposed rule is order-optimal in the minimax sense.\nThen\, she will briefly talk about her recent results on neural networks\, including both biological and artificial neural networks. Notably\, her results on artificial neural networks (i.e.\, training over-parameterized 2-layer neural networks) improved the state of the art. In particular\, they showed that nearly-linear network over-parameterization is sufficient for the global convergence of gradient descent. \nBio:\nLili Su is a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT\, hosted by Professor Nancy Lynch. She received a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2017\, supervised by Professor Nitin H. Vaidya. Her research intersects distributed systems\, learning\, security\, and brain computing. She was the runner-up for the Best Student Paper Award at DISC 2016\, and she received the Best Student Paper Award at SSS 2015.
She received UIUC’s Sundaram Seshu International Student Fellowship in 2016 and was invited to participate in Rising Stars in EECS (2018). She has served on the TPC for several conferences\, including ICDCS and ICDCN.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-lili-su/
LOCATION:138 ISEC\, 360 Huntington Ave\, 138 ISEC\, Boston\, MA\, 02115\, United States
GEO:42.3401758;-71.0892797
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=138 ISEC 360 Huntington Ave 138 ISEC Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 138 ISEC:geo:-71.0892797,42.3401758
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200225T160000
DTEND;TZID=America/New_York:20200225T170000
DTSTAMP:20260502T220443
CREATED:20200224T200126Z
LAST-MODIFIED:20200224T200126Z
UID:4123-1582646400-1582650000@ece.northeastern.edu
SUMMARY:ECE Seminar: Faculty Openings in a Unique International Institute for Global Impact in Multidisciplinary Engineering
DESCRIPTION:Philip Krein\, Executive Dean\, Zhejiang University/University of Illinois at Urbana-Champaign Institute \nOverview: \nCome learn about the many open faculty positions at a new campus in China (bring resumes!).  The Zhejiang University/University of Illinois at Urbana-Champaign Institute (ZJUI) is a joint engineering college on the new Zhejiang University (ZJU) International Campus in Haining\, China\, about 120 km southwest of Shanghai. ZJUI is a unique peer partnership of two leading global universities. The programs and research themes build on more than 100 years of University of Illinois involvement in China\, and decades of active research collaborations between the two partners. ZJUI conducts teaching and research in broad program themes of advanced materials and devices engineering sciences; information and data sciences; and energy\, environment\, and infrastructure sciences. Undergraduate and graduate degrees are offered in civil engineering\, computer engineering\, electrical engineering\, and mechanical engineering.  Numerous faculty positions are open in these disciplines\, computer science\, materials engineering\, mathematics\, and related fields. \nA short talk will be given that describes how projects and issues in China illustrate fundamental global development challenges.  Examples are presented in terms of infrastructure\, energy\, environmental impact\, advanced manufacturing\, data sciences\, and other major topics.  In each case\, innovations in the United States and in China have huge potential global impact if they can be scaled up and applied broadly.  The talk discusses how a new generation of science and engineering faculty with multidisciplinary interests and global aspirations will be developed to lead global impact. It will also be an opportunity for Northeastern graduate students to learn more about how to apply for faculty positions at ZJU-UIUC. 
\nBiography: Philip Krein holds the Grainger Endowed Chair Emeritus in Electric Machinery and Electromechanics at the University of Illinois at Urbana-Champaign. He is also Executive Dean of the Zhejiang University/University of Illinois Institute in Haining\, China\, and a faculty member at Zhejiang University in Hangzhou\, China.  From 2003 to 2014 he was a Founder and Director of SolarBridge Technologies\, Inc.\, Austin\, TX\, a developer of long-life integrated inverters for solar energy. He holds 42 U.S. patents. His current research interests include power electronics\, machines\, electric transportation\, and renewable energy\, with an emphasis on nonlinear control approaches.  Dr. Krein received the IEEE William E. Newell Award in Power Electronics and is a past President of the IEEE Power Electronics Society and a past Chair of the IEEE Transportation Electrification Community.  He is a member of the U.S. National Academy of Engineering\, a fellow of the National Academy of Inventors\, and a Foreign Expert under the China 1000 Talents Program.
URL:https://ece.northeastern.edu/event/ece-seminar-faculty-openings-in-a-unique-international-institute-for-global-impact-in-multidisciplinary-engineering/
LOCATION:442 Dana\, 360 Huntington Ave\, 442 DA\, Boston\, MA\, 02115\, United States
GEO:42.3387508;-71.0923044
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=442 Dana 360 Huntington Ave 442 DA Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 442 DA:geo:-71.0923044,42.3387508
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200220T120000
DTEND;TZID=America/New_York:20200220T130000
DTSTAMP:20260502T220443
CREATED:20200210T233743Z
LAST-MODIFIED:20200210T233743Z
UID:4094-1582200000-1582203600@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Dimitrios Skarlatos
DESCRIPTION:Location: ISEC 136 \nRethinking Operating System and Hardware Abstractions for Good and Evil \nAbstract: \nCurrent hardware and operating system abstractions were conceived at a time when we had minimal security threats\, scarce compute and memory resources\, and limited numbers of users. These assumptions are not true today. On one hand\, attacks such as Spectre and Meltdown have shown that current hardware is plagued by vulnerabilities. On the other hand\, new emerging cloud paradigms like microservices and serverless computing have led to the sharing of computing resources among hundreds of users at a time. In this new era of computing\, we can no longer afford to build each layer separately. Instead\, we have to rethink the synergy between the operating system and hardware from the ground up. \nIn this talk\, Skarlatos will focus on rethinking the virtual memory abstraction. First\, he will introduce Microarchitectural Replay Attacks\, a novel family of side-channel attacks that exploit existing virtual memory mechanisms. These attacks leverage the fact that\, in modern out-of-order processors\, a single dynamic instruction can be forced to execute many times. Then\, he will describe Elastic Cuckoo Page Tables\, his proposal to rebuild the virtual memory abstraction for parallelism. Finally\, he will conclude by describing ongoing and future directions towards redesigning the hardware and operating system layers. \nBio: \nDimitrios Skarlatos is a PhD student at the University of Illinois at Urbana-Champaign (UIUC)\, working with Professor Josep Torrellas. His research lies at the intersection of computer architecture\, security\, and operating systems. He particularly enjoys questioning the fundamental assumptions behind computer design decisions. He builds practical solutions that improve the performance and bolster (or sometimes break) the security guarantees of computing systems.
\nDimitrios is a UIUC College of Engineering Mavis Future Faculty Fellow. He is the recipient of the W. J. Poppelbaum Memorial Award\, the David J. Kuck Outstanding MS Thesis Award\, the UIUC Computer Science Excellence Fellowship\, a 2020 MICRO Top Picks in Computer Architecture\, and a 2019 MICRO Top Picks Honorable Mention. He was invited to participate in the Rising Stars in Computer Architecture workshop. He has earned an MS from UIUC and a BS in Electronic and Computer Engineering from the Technical University of Crete in Greece.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-dimitrios-skarlatos/
LOCATION:136 ISEC\, 360 Huntington Ave\, 136 ISEC\, Boston\, MA\, 02115\, United States
GEO:42.3401758;-71.0892797
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=136 ISEC 360 Huntington Ave 136 ISEC Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 136 ISEC:geo:-71.0892797,42.3401758
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200219T110000
DTEND;TZID=America/New_York:20200219T120000
DTSTAMP:20260502T220443
CREATED:20200213T195211Z
LAST-MODIFIED:20200213T195211Z
UID:4104-1582110000-1582113600@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Arjuna Madanayake
DESCRIPTION:Location: ISEC 138 \nMultidimensional Signal Processing Circuits for Low-SWaP Multi-Beam Arrays \nAbstract: \nIn this talk\, array processing circuits that exploit the computing paradigm of approximate computing are explored. Low-size\, -weight\, and -power consumption (SWaP) algorithms and circuits are proposed to achieve thousands of fully-digital beams for emerging applications. The proposed low-SWaP multibeam digital beamformers use approximate computing to enable up to 90% smaller circuit complexity compared to FFT-based techniques used to achieve multiple orthogonal RF beams. Furthermore\, the proposed array processing systems exploit wave physics to improve the performance of key signal processing components in wireless base stations. Specifically\, the spatiotemporal causality properties of electromagnetic plane waves – as described in the Special Theory of Relativity – are used in novel multi-port transceiver circuits to improve energy efficiency\, reduce additive white Gaussian noise\, and improve linearity of array receivers at the physical layer. The multi-dimensional frequency-domain region of support (ROS) of all propagating plane waves\, which correspond to wireless propagation channels\, is shown to be confined inside the “Light Cone”. The region of spacetime outside this light cone is a void (elsewhere) within which wireless communications signals cannot propagate. A “cone of silence” appears in the multidimensional spacetime frequency domain\, which demarcates a conical region outside of which waves do not exist. The aim is to spatio-temporally shape noise and transceiver distortion into this electromagnetically silent region so that their presence does not affect the performance of arrays. The technique enables multi-port versions of LNAs\, ADCs\, and DACs for array processing that exploits noise and shaping in multiple dimensions in space and time to greatly improve performance. \nBio: \nDr.
Arjuna Madanayake is an Associate Professor of Electrical and Computer Engineering at Florida International University. His research interests include multidimensional signal processing\, array processing\, FPGA and digital systems\, microwave circuits\, VLSI\, analog and mixed-signal circuit design\, fast algorithms\, digital signal processing\, alternative computing\, wireless communications\, mm-wave systems and 5G/6G topics\, sub-THz and THz systems\, satellite communications\, wireless sensing and imaging\, radar sensing\, computing architecture\, internet of things (IoT)\, RF sensing for unmanned aerial systems\, and electronic warfare. He started an Assistant Professorship at the University of Akron in Ohio in 2010\, and received early tenure and promotion to Associate Professor in 2015. Dr. Madanayake was selected as the most outstanding candidate in the Electrical Engineering and Computing Sciences category of the NSERC 2009 Canada Post-doctoral Fellowship competition. Dr. Madanayake completed a postdoctoral associateship in 2009 in which he explored multidimensional signal processing and FPGA circuits for beamformer aperture arrays as part of the Canadian Square Kilometer Array (SKA) effort. He completed the Ph.D. and M.Sc.\, both in Electrical Engineering\, at the University of Calgary\, specializing in multidimensional signal processing and circuits and systems\, especially FPGA systems. In his current tenured appointment at FIU\, Dr. Madanayake directs the RF\, Analog and Digital (RAND) Circuits Lab\, which has been conducting multiple projects funded by 3 DARPA\, 3 ONR\, and 7 NSF awards. Arjuna pursues elephant conservation and rural development in Sri Lanka\, and high-end audio engineering\, as hobbies.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-arjuna-madanayak/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200218T113000
DTEND;TZID=America/New_York:20200218T123000
DTSTAMP:20260502T220443
CREATED:20200208T022238Z
LAST-MODIFIED:20200208T022238Z
UID:4091-1582025400-1582029000@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Murat Kocaoglu
DESCRIPTION:Location: ISEC 138 \nCausality: From Learning to Generative Models \nAbstract: \nCausal inference is fundamental for multiple disciplines ranging from medical research to engineering\, statistics\, and economics. It is also central in machine learning and is now becoming a core component of artificial intelligence research. Although causal inference has been studied for a long time in various fields under different frameworks\, today we need tools that can process a large number of variables to handle modern large datasets. The graphical approach to probabilistic causation advocated by Judea Pearl and others provides a way to compactly represent the causal relations using directed acyclic graphs and paves the way for the design of algorithms that can answer causal questions for many variables. \nIn this talk\, Kocaoglu will first provide a friendly introduction to causality and explain why causal understanding is important. As his first contribution\, he will propose a framework called entropic causal inference for inferring the causal direction between two variables from data. He will show that entropy can be used to capture the complexity of a causal mechanism. Further\, if the true direction has a simple mechanism\, we can identify it from data. The entropic causal inference framework leverages tools from information theory for causal inference. As his second contribution\, he will show how we can apply causality in deep generative models – deep neural networks used for modeling complex data. He will demonstrate how to define and train a causal deep generative model\, called CausalGAN\, for generating images with labels. As an extension of generative adversarial networks (GANs)\, CausalGAN allows sampling not only from the observed data distribution but also from the interventional distributions of images. He will conclude with future directions for causal inference and its applications in supervised learning and reinforcement learning. 
\n \nBio: \nMurat Kocaoglu received his B.S. degree in Electrical – Electronics Engineering with a minor degree in Physics from the Middle East Technical University in 2010. He received his M.S. degree from Koc University\, Turkey\, in 2012 under the supervision of Prof. Ozgur B. Akan\, and his PhD degree from The University of Texas at Austin in 2018\, under the supervision of Prof. Alex Dimakis and Prof. Sriram Vishwanath. He is currently a Research Staff Member at the MIT-IBM Watson AI Lab at IBM Research\, Cambridge\, Massachusetts. His current research interests include causal inference\, generative adversarial networks\, and information theory.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-murat-kocaoglu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200218T103000
DTEND;TZID=America/New_York:20200218T113000
DTSTAMP:20260502T220443
CREATED:20200203T211723Z
LAST-MODIFIED:20200215T035349Z
UID:4072-1582021800-1582025400@ece.northeastern.edu
SUMMARY:Engineers Week: Harnessing Metamaterials to Manipulate Electromagnetic and Acoustic Waves
DESCRIPTION:Dr. Xin Zhang\, Professor\, Boston University \nLocation: 138 ISEC \nMetamaterials have been intensively studied and applied to a broad range of practical applications ranging from wireless communications to magnetic resonance imaging. Photonic metamaterials consisting of subwavelength “meta-atoms” have received enormous interest due to their extraordinary and unprecedented optical properties. Specifically\, the effective permittivity and permeability can be tailored and reconfigured to construct metamaterial devices by modulating or actuating the constituent meta-atoms. By leveraging microelectromechanical system (MEMS) technology\, we have developed functional metamaterial devices to manipulate and detect the terahertz waves. In addition\, metamaterials exhibit extraordinary near-field properties to control electric and magnetic field distribution. I will introduce our progress on intelligent magnetic metamaterials to enhance the signal to noise ratio of magnetic resonance imaging. Besides electromagnetic metamaterials\, acoustic metamaterials for sound wave shaping and silencing will also be discussed. \nXin Zhang received her Ph.D. in Mechanical Engineering from the Hong Kong University of Science and Technology (HKUST). She was a Postdoctoral Researcher and then a Research Scientist with the Massachusetts Institute of Technology (MIT). She then joined Boston University (BU) as a Faculty Member\, where she is currently a Professor of Mechanical Engineering\, Electrical & Computer Engineering\, Biomedical Engineering\, Materials Science & Engineering\, and the Photonics Center. Dr. Zhang is the Associate Director of the Boston University Nanotechnology Innovation Center and Director of both the NSF Research Experiences for Undergraduates (REU) and Teachers (RET) Sites in Integrated Nanomanufacturing at Boston University. \nDr. 
Zhang’s research interests are in the broad areas of microelectromechanical systems (MEMS or microsystems) and metamaterials (acoustic\, electromagnetic\, nonlinear\, photonic\, terahertz\, tunable\, etc.). She has published 160+ papers in interdisciplinary journals\, has been an invitee of both the US and EU-US National Academy of Engineering (ages 30-45)\, and is an Elected Fellow of AAAS\, AIMBE\, APS\, ASME\, IEEE\, NAI\, and OSA\, as well as an Associate Fellow of AIAA. \nHosted by the Electrical and Computer Engineering Department
URL:https://ece.northeastern.edu/event/engineers-week-harnessing-metamaterials-to-manipulate-electromagnetic-and-acoustic-waves/
LOCATION:138 ISEC\, 360 Huntington Ave\, 138 ISEC\, Boston\, MA\, 02115\, United States
GEO:42.3401758;-71.0892797
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=138 ISEC 360 Huntington Ave 138 ISEC Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 138 ISEC:geo:-71.0892797,42.3401758
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200213T150000
DTEND;TZID=America/New_York:20200213T160000
DTSTAMP:20260502T220443
CREATED:20200208T014815Z
LAST-MODIFIED:20200208T014849Z
UID:4088-1581606000-1581609600@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Najme Ebrahimi
DESCRIPTION:Location: ISEC 136 \nNext Generation of Smart Wireless World: from High Data-Rate Mm-wave Directional Arrays to Reliable and Secured IoT Connectivity for 5G and Beyond \nAbstract: \nThe next generation of the smart wireless world requires massive and reliable connectivity as well as high data-rate communication and sensing. Consequently\, the immediate response of the wireless world is acquiring the mm-wave (MMW) wireless band (30 GHz–300 GHz) and the development of 5G and beyond. The major challenge of deploying a high data-rate communication system at MMW frequency bands is the channel fading and multi-path diffraction effect. Hence\, multiple-element transceivers such as scalable “directional” phased arrays or massive MIMO are required. Moreover\, the next generation of the wireless world is expected to have over one trillion Internet of Things (IoT) devices connected\, requiring secured connectivity such as protection against interference\, jammers\, and eavesdroppers. In this talk\, I will present novel techniques to overcome the challenges for future scalable high data-rate MMW transceiver arrays\, from silicon circuit toward RFIC system and packaging. This includes parasitic-insensitive\, power-efficient\, and wideband 2×2 arrays of injection-locked oscillators for efficient local oscillator (LO) distribution and phase shifting (circuit technique)\, an image selection Weaver architecture to significantly reduce the required bandwidth of the LO generation circuitry for the MMW system from the conventional 20% to only 4% (RFIC architecture)\, and a compact differential aperture-coupled LO distribution feed network for compact and scalable antenna-IC integration (packaging). I will also discuss several future directions toward high-frequency signal generation and modulation based on integrating circuit and electromagnetics fundamental theories for communication and sensing above 100 GHz\, namely\, 6G. 
On the other hand\, employing a “directional” antenna for interference/eavesdropper cancellation for IoTs suffers from side-lobe leakage and requires accurate beam alignment and localization. In this talk\, I will present a novel embedded architecture for a distributed IoT network that utilizes master-slave full-duplex communication using an omnidirectional antenna to exchange a random modulated phase shift as the secret key while canceling out the eavesdropper effect. I will also discuss two future directions for interference cancellation from the circuit level to the system level: from cooperative and distributed pulse-coupled synchronization for dynamic interference canceling toward a wideband interference canceller/filter at the RF front-end of IoT devices using a single antenna to turn the radio with a one-bit ADC into reality. \nBio: \nNajme Ebrahimi has been a Post-Doctoral Research Fellow at the University of Michigan (U-M) since September 2017. At the University of Michigan\, she is mainly conducting research on both mm-Wave/THz high data-rate communication and sensing in addition to the connectivity of the next generation of distributed Internet-of-Things networks. She earned her PhD from the University of California\, San Diego (UCSD) in June 2017\, with a thesis emphasis on enabling high data-rate and scalable mm-wave phased arrays for the next generation of the smart wireless world. She received her MS degree and BS degree\, with highest honors\, from Amirkabir University of Technology\, Tehran\, Iran\, in 2011 and Shahid Beheshti University\, Tehran\, Iran\, in 2009\, respectively. She is a member of the IEEE Solid-State Circuits and IEEE Microwave Theory and Techniques societies. She is the recipient of a PhD Endowed Graduate Fellowship from UCSD (2012-2013) and a U-M Departmental Postdoctoral Fellowship (2017-2019). She is currently serving as the vice-chair of the IEEE Southeastern Michigan Microwave Theory and Techniques Chapter\, where she was awarded an MTT-S Travel Grant (2019). 
She was selected as a 2019 EECS Rising Star by the MIT-launched Rising Stars program and a 2020 ISSCC Rising Star by the IEEE Solid-State Circuits Society.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-najme-ebrahimi/
LOCATION:136 ISEC\, 360 Huntington Ave\, 136 ISEC\, Boston\, MA\, 02115\, United States
GEO:42.3401758;-71.0892797
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=136 ISEC 360 Huntington Ave 136 ISEC Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 136 ISEC:geo:-71.0892797,42.3401758
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200212T110000
DTEND;TZID=America/New_York:20200212T120000
DTSTAMP:20260502T220443
CREATED:20200203T195100Z
LAST-MODIFIED:20200203T195100Z
UID:4066-1581505200-1581508800@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Basak Guler
DESCRIPTION:ISEC 138 \nCoded Computing for Next-Generation Information Processing Systems \nAbstract: \nModern networks are designed to facilitate information processing in large-scale\, highly distributed environments that connect humans with smart machines. Information collected in such networks is often privacy-sensitive\, such as healthcare records\, financial transactions\, or geolocation data. Moreover\, these networks often need to operate in unknown environments using unreliable information sources\, which can lead to various interpretations of the received information. This brings three main challenges in designing effective distributed information-processing frameworks: scalability\, privacy\, and context-awareness. In this talk\, I will discuss how to address these challenges through information and coding theory principles. I will first introduce a fast and privacy-preserving framework for distributed machine learning\, which can provide an order of magnitude speedup over the existing cryptographic approaches. Next\, I will address a major bottleneck for the scalability of large-scale distributed computing frameworks\, the interprocessor communication load\, in the context of distributed graph processing.  To do so\, I will introduce a topology-aware graph allocation and communication strategy using coding theory and demonstrate that it can reduce the inter-processor communication load significantly for both real-world and random graph structures. Finally\, I will discuss the fundamental performance limits of information transmission in context-aware multiuser communication networks. I will characterize the information-theoretic performance limits for lossy transmission of correlated sources in a multi-user communication channel when the communicating parties have access to context information correlated with the sources. \nBio: \nBasak Guler is a postdoctoral scholar at the University of Southern California. She received her M.Sc. 
and PhD from the Department of Electrical Engineering at the Pennsylvania State University. Her research interests include information and coding theory\, distributed computing\, wireless communications\, graph signal processing\, machine learning\, privacy and security\, and game theory. She is a recipient of the Dr. Nirmal K. Bose Dissertation Award from the Pennsylvania State University and the Young Scholar Award from the Turkish-American Scientists and Scholars Association\, and was named a Rising Star in EECS by MIT.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-basak-guler/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200211T113000
DTEND;TZID=America/New_York:20200211T123000
DTSTAMP:20260502T220443
CREATED:20200131T194339Z
LAST-MODIFIED:20200131T194339Z
UID:4058-1581420600-1581424200@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Qing Qu
DESCRIPTION:Location: ISEC 138 \nTitle: Learning Low-complexity Models from the Data – Geometry\, Optimization\, and Applications \nAbstract: \nToday we are collecting massive amounts of data in the form of images and videos\, and we want to learn from the data themselves to extract useful information and to make predictions. The data are high-dimensional\, but often possess certain low-dimensional structures (e.g.\, sparsity). However\, learning these low-complexity models often results in highly nonconvex optimization problems\, where in the past our understanding of how to solve them was very limited. In the worst case\, optimizing a nonconvex problem is NP-hard. \nIn this talk\, we present global nonconvex optimization theory and guaranteed algorithms for efficient learning of low-complexity models from high-dimensional data. For several important problems in imaging science (i.e.\, sparse blind deconvolution) and representation learning (i.e.\, convolutional/overcomplete dictionary learning)\, we show that the underlying symmetry and low-complexity structures avoid the worst-case scenarios\, leading to benign global geometric properties of the nonconvex optimization landscapes. In particular\, for sparse blind deconvolution\, which aims to jointly learn the underlying physical model and sparse signals from convolutions\, the geometric intuitions lead to efficient nonconvex algorithms with linear convergence to target solutions. Moreover\, we extended our geometric analysis to convolutional dictionary learning based on its similarity with overcomplete dictionary learning\, providing the first global algorithmic guarantees for both problems. Finally\, we demonstrate our methods on several important applications in scientific discovery and draw connections to learning deep neural networks. \nThis talk is mainly based on one paper that appeared in NeurIPS’19 (spotlight) and two papers accepted by ICLR’20 (one oral). 
\nBio:  \nQing Qu is a Moore-Sloan data science fellow at the Center for Data Science\, New York University. He received his Ph.D. from Columbia University in Electrical Engineering in Oct. 2018. He received his B.Eng. from Tsinghua University in Jul. 2011 and an M.Sc. from Johns Hopkins University in Dec. 2012\, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interest lies at the intersection of the foundations of data science\, machine learning\, numerical optimization\, and signal/image processing. His research focuses on developing computational methods for learning low-complexity models/structures from high-dimensional data\, leveraging tools from machine learning\, numerical optimization\, and high-dimensional probability/geometry. He is also interested in applying these data-driven methods to various engineering problems in imaging sciences\, scientific discovery\, and healthcare. He is the recipient of the Best Student Paper Award at SPARS’15 (with Ju Sun and John Wright) and of a Microsoft Ph.D. Fellowship 2016-2018 in machine learning.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-qing-qu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200210T110000
DTEND;TZID=America/New_York:20200210T120000
DTSTAMP:20260502T220443
CREATED:20200203T230328Z
LAST-MODIFIED:20200203T230328Z
UID:4070-1581332400-1581336000@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Xiaolin Xu
DESCRIPTION:Location: ISEC 138 \nEnsuring Hardware Cybersecurity from a Cross-Layer Perspective \nAbstract: \nThe rapid development of the semiconductor industry has significantly increased the number\, complexity\, and applicability of commercial electronics over the past few decades. As a result\, the security and assurance of hardware play a critical role in the cyberscape of modern society\, including national defense\, healthcare\, transportation\, and finance. Hardware has been assumed to be trustworthy and reliable “by default.” However\, this assumption is no longer true\, with an increasing number of attacks reported on hardware. In practice\, the globalization of the semiconductor business poses grave risks from untrusted fabrication and distribution\, where Trojan insertion\, IP cloning\, and counterfeiting may occur. \nIn this talk\, I will present our research efforts dedicated to hardware-oriented cybersecurity from a cross-layer perspective. Specifically\, I will introduce two frameworks that we built to address these problems at the supply chain and embedded system layers. I will first present an identification technique based on the physical disorder of integrated circuitry that enables the authentication of electronic devices. Then\, I will present a hardware IP protection framework based on logic locking and circuit editing\, which can effectively mitigate the vulnerabilities from untrusted off-shore foundries and supply chains. At the end of the talk\, I will briefly present our scientific achievements in advancing hardware security at the system and architecture layers\, as well as propose a future research agenda for this emerging area. \nBio: \nDr. Xiaolin Xu is currently an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Chicago (UIC). 
Prior to joining UIC\, he spent two years as a Postdoctoral Fellow at the Florida Institute for Cybersecurity (FICS) research center at the University of Florida. He received his Ph.D. degree from the University of Massachusetts Amherst in 2016\, after receiving the B.S. and M.S. degrees in Electrical Engineering from the University of Electronic Science and Technology of China in 2008 and 2011\, respectively. His research interests span hardware security and trust\, FPGA\, IoT security\, VLSI\, computer architecture\, embedded systems\, and hardware-software co-design for modern computing systems. He is also interested in developing IoT devices and cloud-computing infrastructures with particular emphasis on security\, high performance\, and privacy protection.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-xiaolin-xu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200207T110000
DTEND;TZID=America/New_York:20200207T120000
DTSTAMP:20260502T220443
CREATED:20200203T194915Z
LAST-MODIFIED:20200203T195142Z
UID:4064-1581073200-1581076800@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Kris Dorsey
DESCRIPTION:Location: ISEC 136 \nIt’s a bit of a stretch: selective\, flexible mechanical sensors towards VR\, healthcare\, and robotics applications \nAbstract: \nIn this talk\, Kris Dorsey will discuss work related to mechanically “programming” soft sensors to respond to a particular mechanical deformation. Advances in 3D-printing\, soft polymer fabrication\, and other rapid fabrication processes have made the vision of conformal and stretchable mechanical sensors for wearable devices and soft robotics possible. One limitation of these sensors is their low selectivity between different modes of mechanical deformation\, such as strain\, torsion\, and bending.\nShe will present recent work in enhancing the selectivity of stretchable sensors. By using non-planar sensor morphology to bias the sensor towards a particular deformation mode\, the selectivity of the sensor can be enhanced. She will discuss projects including designing a sensor with electrically-tunable sensitivity and fabricating origami-patterned\, deformation-selective flexible sensors. \n  \nBio: \nKris Dorsey is an assistant professor of engineering in the Picker Engineering Program at Smith College. She was a President’s Postdoctoral Fellow at the University of California\, Berkeley and the University of California\, San Diego. Dr. Dorsey graduated from Carnegie Mellon University with a Ph.D. in Electrical and Computer Engineering and earned her Bachelor of Science in Electrical and Computer Engineering from Olin College. \nShe founded The MicroSMITHie Lab at Smith College to investigate micro- and miniature-scale sensor design and to prepare undergraduates for graduate study in engineering. Her current research interests include strain-stable\, hyperelastic components\, novel morphology soft sensors\, and sensors for soft robots and wearable devices. \nDr. 
Dorsey has co-authored several publications on hyperelastic strain sensors\, novel soft lithography processes\, and the stability of gas chemical sensors. In 2019\, she received the NSF CAREER award.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-kris-dorsey/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200207T110000
DTEND;TZID=America/New_York:20200207T110000
DTSTAMP:20260502T220443
CREATED:20200205T004700Z
LAST-MODIFIED:20200205T004700Z
UID:4080-1581073200-1581073200@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Fan Zhang
DESCRIPTION:Location: ISEC 140 \nPersistent Fault Attacks in Practice \nAbstract: \nPersistent fault analysis (PFA) was proposed at CHES 2018 as a novel fault analysis technique. It was shown to completely defeat standard redundancy-based countermeasures against fault analysis. The original PFA was demonstrated with rowhammer-based fault injections. However\, whether such an analysis can be applied to traditional microcontrollers\, together with its attack difficulty in practice\, has not been investigated. In this talk\, for the first time\, a persistent fault attack is conducted on an unprotected AES implementation on an ATmega163L microcontroller. Several critical challenges are addressed with our new improvements. This talk will introduce PFA at both the theoretical and practical levels. \nBio: \nDr. Fan Zhang graduated from the Department of Computer Science and Engineering\, University of Connecticut\, USA. He is currently an associate professor in the College of Computer Science\, Zhejiang University\, China. He was a visiting scholar at the National University of Singapore\, and he is currently a visiting professor at the Singapore University of Technology and Design. His major research interest is general cybersecurity\, which includes hardware security\, system security\, network security\, and more. His special expertise lies in the domain of side-channel attacks (SCA) and countermeasures\, fault attacks\, cryptography\, and computer architecture. He is the Program Chair of PROOFS\, a TPC member of DAC\, AsiaCCS\, AsianHOST\, ASHES\, COSADE\, FDTC\, and Inscrypt\, and an Associate Editor of IEEE Access and Cybersecurity. He has more than 60 publications in international conferences and journals such as CHES\, DATE\, COSADE\, FDTC\, TIFS\, and TPDS.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-fan-zhang/
LOCATION:140 ISEC\, 360 Huntington Ave\, 140 ISEC\, Boston\, MA\, 02115\, United States
GEO:42.3401758;-71.0892797
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=140 ISEC 360 Huntington Ave 140 ISEC Boston MA 02115 United States;X-APPLE-RADIUS=500;X-TITLE=360 Huntington Ave\, 140 ISEC:geo:-71.0892797,42.3401758
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200203T110000
DTEND;TZID=America/New_York:20200203T120000
DTSTAMP:20260502T220443
CREATED:20200128T193610Z
LAST-MODIFIED:20200203T195845Z
UID:4029-1580727600-1580731200@ece.northeastern.edu
SUMMARY:Electrical and Computer Engineering Seminar: Alper Ozgurluk
DESCRIPTION:138 ISEC \nHigh-Q Strong Coupling Capacitive-Gap Transduced RF Micromechanical Resonators \nAbstract: \nThis talk presents a hierarchical\, intuitive\, and technology-agnostic procedure for designing RF channel-select filters\, followed by an actual demonstration that consists of 96 mechanically coupled capacitive-gap-transduced polysilicon disk resonators\, centered at 224MHz with only 0.1% (9kHz) bandwidth\, all while attaining 2.7dB insertion loss and more than 50dB out-of-channel stopband rejection\, solidly confirming the validity of the design method. Two distinct methods then follow that aim to increase the resonator electromechanical coupling coefficient (kt2)\, which substantially improves the functionality of the demonstrated filter for future applications\, e.g.\, ones that require higher-order filters with sharper roll-off characteristics and less passband ripple. Specifically\, single-digit-nanometer electrode-to-resonator gaps have enabled 200-MHz radial-contour-mode polysilicon disk resonators with motional resistance Rx as low as 144 Ohm while still posting Q’s exceeding 10\,000\, all with only a 2.5V dc-bias. The demonstrated gap spacings down to 7.98nm are the smallest to date for upper-VHF micromechanical resonators and fully capitalize on the fourth-power dependence of motional resistance on gap spacing. The scale here is perhaps best conveyed with the recognition that this gap corresponds to only 16 SiO2 molecules! \nHigh device yield and ease of measurement debunk popular prognosticated pitfalls often associated with tiny gaps\, e.g.\, tunneling\, Casimir forces\, and low yield\, none of which appear. The tiny motional resistance\, together with kt2’s up to 1% at 4.7V dc-bias and kt2-Q products exceeding 100\, propel polysilicon capacitive-gap-transduced resonator technology to the forefront of MEMS resonator applications that put a premium on noise performance\, such as radar oscillators. 
To increase functionality even further\, the rest of this talk introduces a fabrication and post-processing method using CMOS-compatible ruthenium metal that allows integration of micromechanical devices\, such as the aforementioned RF filters\, atop CMOS. To this end\, the introduction of tensile stress via localized Joule heating has yielded some of the highest metal MEMS resonator Q’s measured to date\, as high as 48\,919 for a 12-MHz ruthenium micromechanical clamped-clamped beam\, defying the common belief that metal Q cannot compete with that of conventional micromachinable materials. The low-temperature ruthenium metal process\, with a maximum temperature of 450°C and paths to an even lower ceiling of 200°C\, further allows MEMS post-processing directly over finished foundry CMOS wafers\, thereby offering a promising route toward fully monolithic realization of CMOS-MEMS circuits\, such as those needed in communication transceivers. This\, together with its higher Q\, may eventually make ruthenium metal preferable to polysilicon in some applications. \nBio: \nAlper Ozgurluk received the B.S. degree in electrical and electronics engineering from Bilkent University\, Ankara\, Turkey\, in 2012 and the Ph.D. degree in electrical engineering and computer sciences from the University of California\, Berkeley\, CA\, USA\, in 2019. The first part of his Ph.D. research focused on the design\, fabrication\, and testing of medium-scale micromechanical circuits using capacitive-gap transduced disk resonators as building blocks to demonstrate RF channel filters for ultra-low-power radio applications. During his Ph.D.\, he also worked on design and fabrication methods to shrink the gaps of capacitive-gap resonators to single-digit nanometers\, transforming the performance of such devices. The last part of his Ph.D. focused on CMOS-compatible resonator materials and post-processing techniques that could provide the performance needed for CMOS-MEMS integration. 
In 2019\, he joined Apple Inc. as a Display Exploration Engineer.
URL:https://ece.northeastern.edu/event/electrical-and-computer-engineering-seminar-alper-ozgurluk/
END:VEVENT
END:VCALENDAR