BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical & Computer Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical & Computer Engineering
X-ORIGINAL-URL:https://ece.northeastern.edu
X-WR-CALDESC:Events for Department of Electrical & Computer Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201001T120000
DTEND;TZID=America/New_York:20201001T130000
DTSTAMP:20260507T085212Z
CREATED:20200923T180841Z
LAST-MODIFIED:20200923T180841Z
UID:4382-1601553600-1601557200@ece.northeastern.edu
SUMMARY:Virtual Heads Up Lunchtime Funtime
DESCRIPTION:Graduate Student Services is hosting a Heads Up Virtual “lunchtime funtime” event for Master’s students to get to know each other and compete in an exciting game on October 1st\, 12-1pm EDT!  Heads Up is a game in which one student in a group must guess 10 words within a category with the help of their group members.  To attend\, you will need to register via the Zoom link that will be sent to students shortly.
URL:https://ece.northeastern.edu/event/virtual-heads-up-lunchtime-funtime/
ORGANIZER;CN="Graduate School of Engineering":MAILTO:coe-gradadmissions@northeastern.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201006T120000
DTEND;TZID=America/New_York:20201006T120000
DTSTAMP:20260507T085212Z
CREATED:20201002T174710Z
LAST-MODIFIED:20201002T174710Z
UID:4401-1601985600-1601985600@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Mahsa Bayati
DESCRIPTION:PhD Defense: Efficient Data Access with Heterogeneous Computing using GPUs and Direct Non-volatile Storage. \nMahsa Bayati \nLocation: Zoom Link \nAbstract: The amount of data being collected that requires analysis is growing at an exponential rate. Along with this growth comes an increasing need for storage and computation. Researchers address these needs by (I) deploying distributed bigdata platforms equipped with cutting-edge storage devices\, and (II) building heterogeneous clusters with Central Processing Units (CPUs) and computational accelerators such as Graphics Processing Units (GPUs). The high performance of these mainstream systems is achieved by efficiently accessing data and computation resources and scheduling parallel and distributed tasks. \nThe performance of each job depends on the characteristics of both the application and the underlying storage and computational environments. However\, it is not trivial to maintain efficiency and provide high performance in these mainstream systems. First\, in bigdata platforms like Spark and Hadoop\, full utilization of Solid State Devices (SSDs)\, i.e.\, Non-Volatile Memory Express (NVMe) and Key-Value (KV) SSDs\, is challenging. Data communication between Spark tasks\, levels of parallelism\, and resource co-location significantly affect achieving high I/O throughput. Second\, in heterogeneous systems\, one of the main bottlenecks of GPU computation is the data transfer bandwidth to GPUs in I/O-intensive applications. The traditional GPU approach gets data from host memory\, which can limit data throughput and processing and thus degrade end-to-end performance. In this work\, we initially explore different attributes to exploit the full benefits of various SSDs in bigdata platforms. We then focus on mitigating the data transfer bottleneck in a heterogeneous bigdata framework. \nWe build a heterogeneous framework that facilitates GPU direct access to storage. 
Our framework aims to minimize the data transfer delay\, thus enhancing the performance of distributed and parallel tasks to obtain the full benefits of compute and storage resources. Our heterogeneous cluster is supplied with CPUs and GPUs as computing resources and non-volatile flash-based drives as storage resources. We also deploy the Spark bigdata platform to execute large workloads over our cluster. We then adopt a novel technique (i.e.\, Peer-to-Peer Direct Memory Access) to connect GPUs to non-volatile storage directly. Experimental results reveal that our heterogeneous Spark platform successfully bypasses the host memory and enables GPUs to communicate directly with the NVMe drive\, thus achieving higher data transfer throughput. The contributions of the dissertation are: (I) Realizing that bigdata processing applications need to consider framework features and application characteristics to fully utilize the high bandwidth of modern SSDs\, where compute and storage locality is essential to optimize the cost and performance. (II) Deploying our novel heterogeneous framework supporting GPU direct storage access improves data communication time by around 35%-50% and end-to-end performance by 30%.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-mahsa-bayati/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201012T130000
DTEND;TZID=America/New_York:20201012T140000
DTSTAMP:20260507T085212Z
CREATED:20201005T184512Z
LAST-MODIFIED:20201005T184512Z
UID:4406-1602507600-1602511200@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Muhamed Yildiz
DESCRIPTION:PhD Proposal Review: Interpretable Machine Learning for Retinopathy of Prematurity \nMuhamed Yildiz \nLocation: Zoom Link \nAbstract: Retinopathy of Prematurity (ROP)\, a leading cause of childhood blindness\, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease\, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels\, is the most important feature to determine treatment-requiring ROP. State-of-the-art ROP detection systems employ convolutional neural networks (CNNs) and achieve up to 0.947 and 0.982 area under the ROC curve (AUC) in the discrimination of levels of ROP. However\, due to their black-box nature\, clinicians are reluctant to trust diagnostic predictions of CNNs. \nFirst\, we aim to create an interpretable\, feature extraction-based pipeline\, namely\, I-ROP ASSIST\, that achieves CNN-like performance when diagnosing plus disease from retinal images. Our method segments retinal vessels and detects the vessel centerlines. Then\, our method extracts features relevant to ROP\, including tortuosity and dilation measures\, and uses these features for classification via logistic regression\, support vector machines\, and neural networks to assess a severity score for the input. For predicting levels of ROP on a dataset containing 5512 posterior retinal images\, we achieve 0.88 and 0.94 AUC\, respectively. Our system combining automatic retinal vessel segmentation\, tracing\, feature extraction\, and classification is able to diagnose plus disease in ROP with CNN-like performance. \nFurthermore\, we aim to address the interpretability problem of CNN-based ROP detection systems. Incorporating visual attention capabilities in CNNs enhances interpretability by highlighting regions in the images that CNNs utilize for prediction. 
Generic visual attention methods do not leverage structural domain information such as tortuosity and dilation of retinal blood vessels in ROP diagnosis. We propose the Structural Visual Guidance Attention Networks (SVGA-Net) method\, which leverages structural domain information to guide visual attention in CNNs. SVGA-Net achieves 0.979 and 0.987 AUC in predicting levels of ROP. Moreover\, SVGA-Net consistently results in higher AUC compared to visual attention CNNs without guidance\, baseline CNNs\, and CNNs with structured masks.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-muhamed-yildiz/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201012T140000
DTEND;TZID=America/New_York:20201012T150000
DTSTAMP:20260507T085212Z
CREATED:20201006T232350Z
LAST-MODIFIED:20201006T232350Z
UID:4412-1602511200-1602514800@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Sara Banian
DESCRIPTION:PhD Proposal Review: Content-Aware Design Assistance Frameworks for Graphic Design Layouts \nSara Banian \nLocation: Zoom Link \nAbstract: Layout is an important visual communication factor in graphic design that encompasses a page’s overall composition. During the different design stages\, designers express their requirements through images describing the interface’s visual layout\, hierarchical structure\, and content. They create wireframe layouts to meet user requirements and find relevant design examples to gain inspiration and explore design alternatives. This is not only an iterative process\, but also a time-consuming one. \nIn this proposal\, we aim to design and evaluate design assistance methodologies to augment the process of layout design with a particular focus on visual search and wireframe creation in the context of mobile User Interface (UI) design. For visual search\, we investigate how to find design examples that are relevant to the design requirements of a UI layout. Layout retrieval is different from pixel-level image retrieval\, as it requires processing both the spatial layout and the content of the data to retrieve similar images. To achieve this\, I explore the problem of user interface image retrieval from both the data and the model side\, by collecting a more highly annotated\, well-suited dataset and proposing an object-detection based image retrieval model. The model takes as input a user interface image and retrieves the visually similar design examples. It uses object detection to identify the user interface components\, performs semantic segmentation to produce a hierarchical structure\, and trains an attention-aware multi-modal embedding network that learns the structure and content of the given layout design for relevant image retrieval. Results show that the system is capable of retrieving relevant design examples through content analysis. 
Next\, I propose a generative framework to investigate how to generate layout wireframes according to user specifications and following common design practices and conventions. The generative framework aims at modeling the content of the UI layouts taking into account different layout variations and design features.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-sara-banian/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201015T160000
DTEND;TZID=America/New_York:20201015T170000
DTSTAMP:20260507T085212Z
CREATED:20201007T191555Z
LAST-MODIFIED:20201007T191555Z
UID:4427-1602777600-1602781200@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Maher Kachmar
DESCRIPTION:PhD Proposal Review: Active Resource Partitioning and Planning for Storage Systems using Time Series Forecasting and Machine Learning Techniques \nMaher Kachmar \nLocation: Zoom Link \nAbstract: In today’s enterprise storage systems\, supported data services such as snapshot delete or drive rebuild can result in tremendous performance overhead if executed inline along with heavy foreground IO\, often leading to missing Service Level Objectives (SLOs). Moreover\, static partitioning of storage system resources such as CPU cores or memory caches may lead to missing Service Level Agreements (SLAs) such as data reduction rate (DRR). However\, typical storage system applications such as Virtual Desktop Infrastructure (VDI) or web services follow a repetitive workload pattern that can be learned and/or forecasted. Learning these workload patterns allows us to address several storage system resource partitioning and planning challenges that cannot be overcome with traditional manual tuning and primitive feedback mechanisms. \nWe propose a priority-based background scheduler that learns a storage system's repetitive workload patterns and allows storage systems to maintain peak performance and meet SLOs while supporting a number of data services. When foreground IO demand intensifies\, system resources are dedicated to servicing foreground IO requests\, and any background processing that can be deferred is recorded to be processed in future idle cycles as long as our forecaster predicts that the storage pool has remaining capacity. The smart background scheduler adopts a resource partitioning model that allows both foreground and background IO to execute together as long as foreground IOs are not impacted\, harnessing any free cycles to clear background debt. 
Using traces from VDI and web services applications\, we show how our technique can outperform a static method that sets fixed limits on the deferred background debt\, reducing SLO violations from 54.6% (when using a fixed background debt watermark) to only 6.2% when the debt is dynamically adjusted by our smart background scheduler. \nThis thesis also proposes a smart capacity planning and recommendation tool that ensures the right number of drives are available in the storage pool in order to meet both capacity and performance constraints without over-provisioning storage. Aided by forecasting models that characterize workload patterns\, we can predict future storage pool utilization and drive wear. Similarly\, to meet SLOs\, the tool recommends expanding pool space in order to defer more background work through larger debt bins. We also propose a content-aware learning cache (CALC) that uses machine learning techniques to actively partition the storage system cache between a deduplication data digest cache\, content cache\, and address-based data cache to improve cache hit performance while maximizing data reduction rate (DRR).
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-maher-kachmar/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201021T090000
DTEND;TZID=America/New_York:20201021T100000
DTSTAMP:20260507T085212Z
CREATED:20201016T181139Z
LAST-MODIFIED:20201016T181139Z
UID:4521-1603270800-1603274400@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Wenqian Liu
DESCRIPTION:PhD Proposal Review: Explainable Efficient Models for Computer Vision Applications \nWenqian Liu \nLocation: Zoom Link \nAbstract: State-of-the-art deep-learning-based models\, such as Convolutional Neural Networks (CNNs) and generative models\, achieve impressive results\, but with their great performance comes great complexity and opacity\, huge parametric spaces\, and little explainability. The criticality of model explainability and output interpretability manifests clearly in real-time critical decision-making processes and human-centred applications\, such as in healthcare\, security\, and insurance. \nExplainability and interpretability are tackled in this thesis\, both as intrinsic qualities of the model architecture and as post-hoc improvements on existing models. \nIn the area of frame prediction in video sequences\, we introduce DYAN\, a novel network with very few parameters that is easy to train\, produces accurate\, high-quality frame predictions\, and is more compact than previous approaches. Another key aspect of DYAN is interpretability\, as its encoder-decoder architecture is designed following concepts from systems identification theory and exploits the dynamics-based invariants of the data. We also introduce KW-DYAN\, an extension of DYAN that tackles the issue of time lagging in video predictions by implementing a novel way of quantifying prediction timeliness and proposing a new recurrent network for adaptive temporal sequence prediction that employs a warping module to reduce dynamic changes and a Kalman filtering module to detect dynamic changes in video frames. The experimental results show reduced lagging on the Caltech and UCF datasets\, while also performing well in other commonly used metrics. 
\nIn the area of image classification\, categorization\, and scene understanding\, we observe that techniques such as gradient-based visual attention have driven many recent efforts to use visual attention maps as a means of visually explaining Convolutional Neural Networks (CNNs)\, with impressive results\, but they fail to extend as efficiently to explaining generative models\, e.g.\, Variational Autoencoders (VAEs). In this thesis we bridge this crucial gap and propose the first technique to visually explain VAEs by means of gradient-based attention\, with methods to generate visual attention maps from the learned latent space\, and also demonstrate that such attention explanations serve more than just explaining VAEs. We show how these attention maps can be used to localize anomalies in images\, achieving state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training\, helping bootstrap the VAEs into learning a disentangled latent space\, as demonstrated on the dSprites dataset.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-wenqian-liu/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201027T150000
DTEND;TZID=America/New_York:20201027T160000
DTSTAMP:20260507T085212Z
CREATED:20201021T182911Z
LAST-MODIFIED:20201021T182911Z
UID:4524-1603810800-1603814400@ece.northeastern.edu
SUMMARY:ECE PhD Proposal Review: Kunpeng Li
DESCRIPTION:PhD Proposal Review: Attention Mechanism in Deep Learning for Visual Recognition  \nKunpeng Li \nLocation: Zoom Link \nAbstract: Deep learning models have achieved great success in various tasks for visual recognition such as image classification\, semantic segmentation\, and visual semantic matching. Instead of just treating them as black boxes\, recently\, tremendous effort has been put into explaining how these models work and bridging the gap between deep neural networks and human cognition systems. Visual attention is one of the efficient ways to explain a network’s decision by highlighting the regions of images that are responsible for it. It is inspired by the attention mechanism of the human vision system\, which selectively focuses on the salient features in a visual scene. \nThis thesis focuses on visual attention in deep learning for visual recognition. For the first time\, we make gradient-based attention maps a natural and explicit component in the training pipeline\, such that they are end-to-end trainable. Then\, we can provide guidance on the attention maps and guide the network to focus on the correct things when learning concepts. Under mild assumptions\, our method can be understood as a plug-in to existing convolutional neural networks to improve their generalization performance. Besides\, the improved attention maps also help to provide better localization cues for the weakly-supervised semantic segmentation task. \nMoving a step toward higher-level visual understanding with natural language\, we study the effectiveness of building visual reasoning models on top of bottom-up attention regions\, so that the learned visual representations can better capture the semantic concepts in the corresponding text caption. Specifically\, we first build up connections between attention regions and perform reasoning with Graph Convolutional Networks to generate region features with semantic relationships. 
Then\, we propose to use the gate and memory mechanism to perform global semantic reasoning on these relationship-enhanced region features\, select the discriminative information\, and gradually generate the representation for the whole scene. Evaluations have been conducted on the MS-COCO and Flickr30K datasets for the image-text matching task.
URL:https://ece.northeastern.edu/event/ece-phd-proposal-review-kunpeng-li/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201030T110000
DTEND;TZID=America/New_York:20201030T120000
DTSTAMP:20260507T085212Z
CREATED:20201024T021519Z
LAST-MODIFIED:20201024T021519Z
UID:4534-1604055600-1604059200@ece.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Ran Liu
DESCRIPTION:PhD Dissertation Defense: Optimal Proactive Services with Uncertain Predictions \nRan Liu \nLocation: Zoom Link \nAbstract: With the evolution of technologies such as machine learning and data science\, proactive services with the aid of predictive information have been recognized as a promising method to exploit network bandwidth\, storage\, and computation resources to achieve improved user experiences\, especially delay performance.\nSpecifically\, services can be processed proactively when the system is lightly loaded\, with the results stored to meet user demand in the future.\nOur primary goal in the thesis is to investigate the fundamental performance improvement that can be achieved from proactive services under uncertain predictions. We aim to analyze the queueing behavior of proactive systems under certain proactive strategies and characterize the improvement in terms of the limiting fraction of proactive work and the limiting average delay. \nIn the first work\, we analytically investigate the problem of how to efficiently utilize uncertain predictive information to design proactive caching strategies with provably good access-delay characteristics.\nFirst\, we derive an upper bound for the average amount of proactive service per request that the system can support.\nThen we analyze the behavior of a family of threshold-based proactive strategies with a Markov chain\, which shows that the average amount of proactive service per request can be maximized by properly selecting the threshold.\nFinally\, we propose the UNIFORM strategy\, which is the threshold-based strategy with the optimal threshold\, and show that it outperforms the commonly used Earliest-Deadline-First (EDF) type proactive strategies in terms of delay.\nWe perform extensive numerical experiments to demonstrate the influence of thresholds on delay performance under the threshold-based strategies\, and specifically\, compare the EDF strategy and the UNIFORM strategy to verify 
our results. \nIn the second work\, we study a more general proactive service problem with a more general service model and derive closed-form expressions for the limiting average fraction of proactive work and the limiting average delay.\nIn this work\, we analytically investigate how to optimally take advantage of under-utilized network resources for proactive services with the aid of uncertain predictive information.\nSpecifically\, we first derive an upper bound on the fraction of services that can be completed proactively by a single-server system.\nThen we analyze a family of fixed-probability (FIXP) proactive strategies in two proactive systems\, namely the Genie-Aided system and the Realistic Proactive system.\nWe analyze the asymptotic behaviors of the FIXP strategies by modeling them as a Markov process and analyzing the corresponding embedded Markov chain.\nWe obtain optimal FIXP strategies in both systems and prove that the optimal FIXP strategies maximize the limiting fraction of proactive service among all proactive strategies and minimize average delay among FIXP strategies.\nWe perform extensive numerical experiments to demonstrate the influence of the FIXP parameter on the limiting fraction of proactive service and the limiting average delay in both proactive systems\, and verify our theoretical results in multiple scenarios.
URL:https://ece.northeastern.edu/event/ece-phd-dissertation-defense-ran-liu-2/
END:VEVENT
END:VCALENDAR