-
- [Research] Papers from Prof. Jinkyu Lee's lab (RTCL@SKKU) accepted for publication at ACM/IEEE DAC 2024 and IEEE RTAS 2024
- Title: Papers from Prof. Jinkyu Lee's Lab (Real-Time Computing Lab, RTCL@SKKU) presented at ACM/IEEE DAC 2024 and IEEE RTAS 2024
- Papers from the Real-Time Computing Lab (Advisor: Prof. Jinkyu Lee) were presented at ACM/IEEE DAC 2024 (the 61st Design Automation Conference) and IEEE RTAS 2024 (the 30th IEEE Real-Time and Embedded Technology and Applications Symposium). ACM/IEEE DAC is the top international conference in the design automation field (KIISE top rating, BK21+ IF3) and was held this year in San Francisco, USA, on June 23-27, 2024. IEEE RTAS is one of the top two international conferences in the real-time systems field (KIISE top rating, BK21+ IF2) and was held this year in Hong Kong on May 13-16, 2024, where a total of 29 papers were presented. The ACM/IEEE DAC 2024 paper addresses timing guarantees for executing AI workloads on small IoT devices such as MCUs; master's student Kang Seok-min (first author), Ph.D. student Lee Seong-tae (co-first author), and undergraduate student Koo Hyun-woo of the Real-Time Computing Lab participated under the guidance of Prof. Jinkyu Lee, in a joint study with Prof. Hoon Sung Chwa of DGIST. The IEEE RTAS 2024 paper addresses timing guarantees for executing AI workloads in memory-constrained environments and was led by Prof. Hoon Sung Chwa's team at DGIST, with Prof. Jinkyu Lee participating. ACM/IEEE DAC 2024 website: https://www.dac.com/ IEEE RTAS 2024 website: https://2024.rtas.org/ Real-Time Computing Lab website: https://rtclskku.github.io/website/
- Paper Title: RT-MDM: Real-Time Scheduling Framework for Multi-DNN on MCU Using External Memory
- Abstract: As the application scope of DNNs executed on microcontroller units (MCUs) extends to time-critical systems, it becomes important to ensure timing guarantees for increasing demand of DNN inferences. To this end, this paper proposes RT-MDM, the first Real-Time scheduling framework for Multiple DNN tasks executed on an MCU using external memory. Identifying execution-order dependencies among segmented DNN models and memory requirements for parallel execution subject to the dependencies, we propose (i) a segment-group-based memory management policy that achieves isolated memory usage within a segment group and sharded memory usage across different segment groups, and (ii) an intra-task scheduler specialized for the proposed policy. Implementing RT-MDM on an actual system and optimizing its parameters for DNN segmentation and segment-group mapping, we demonstrate the effectiveness of RT-MDM in accommodating more DNN tasks while providing their timing guarantees.
- Paper Title: RT-Swap: Addressing GPU Memory Bottlenecks for Real-Time Multi-DNN Inference
- Abstract: The increasing complexity and memory demands of Deep Neural Networks (DNNs) for real-time systems pose new significant challenges, one of which is the GPU memory capacity bottleneck, where the limited physical memory inside GPUs impedes the deployment of sophisticated DNN models. This paper presents, to the best of our knowledge, the first study of addressing the GPU memory bottleneck issues, while simultaneously ensuring the timely inference of multiple DNN tasks. We propose RT-Swap, a real-time memory management framework, that enables transparent and efficient swap scheduling of memory objects, employing the relatively larger CPU memory to extend the available GPU memory capacity, without compromising timing guarantees. We have implemented RT-Swap on top of representative machine-learning frameworks, demonstrating its effectiveness in making significantly more DNN task sets schedulable at least 72% over existing approaches even when the task sets demand up to 96.2% more memory than the GPU's physical capacity.
- Jinkyu Lee | jinkyu.lee@skku.edu | RTCL@SKKU | https://rtclskku.github.io/website/
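To make the swap idea concrete, here is a minimal PyTorch sketch of the general pattern RT-Swap builds on: extending scarce GPU memory with the larger CPU memory by moving a layer's parameters in just before use and out again afterwards. It is an illustration only, with made-up layer sizes; the actual RT-Swap framework hooks into the ML framework's memory allocator and schedules swaps so that inference deadlines are still met.

```python
import torch
import torch.nn as nn

# Toy illustration: keep all layers resident in (larger) CPU memory and move each
# one to the GPU only for the duration of its computation, freeing GPU memory for
# the next layer. RT-Swap adds transparent, deadline-aware scheduling of such swaps.
device = "cuda" if torch.cuda.is_available() else "cpu"
layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])  # resident on CPU

def forward_with_swapping(x: torch.Tensor) -> torch.Tensor:
    x = x.to(device)
    for layer in layers:
        layer.to(device)   # "swap in" this layer's parameters
        x = layer(x)
        layer.to("cpu")    # "swap out" so the next layer fits in GPU memory
    return x

print(forward_with_swapping(torch.randn(16, 4096)).shape)  # torch.Size([16, 4096])
```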
-
- Posted: 2024-07-01
- Views: 2498
-
- [Research] Prof. Simon S. Woo (DASH Lab) won the PAKDD 2024 Best Paper Running-Up Award (2nd Place)
- DASH Lab won the Best Paper Running-Up Award (2nd Best Paper) at PAKDD 2024 in Taiwan. Binh M. Le and Simon S. Woo's paper, “SEE: Spherical Embedding Expansion for Improving Deep Metric Learning,” received the Best Paper Running-Up Award (2nd best paper) at PAKDD 2024 (BK CS IF=1), held in Taipei in May 2024. Here is the background information about the award: “This year, PAKDD received 720 excellent submissions, and the selection process was competitive, rigorous, and thorough with over 500 PC and 100 SPC members. An award committee was formed by a chair and four committee members from different countries. There are only one Best Paper Award, two Best Paper Running-Up Awards, and one Best Student Paper Award.” Paper Link: https://link.springer.com/chapter/10.1007/978-981-97-2253-2_11
-
- Posted: 2024-06-07
- Views: 2744
-
- [Research] A paper from the Intelligent Embedded Systems Laboratory (Advisor: Prof. Dongkun Shin) accepted for publication at AAAI 2024
- A paper from the Intelligent Embedded Systems Laboratory (Advisor: Prof. Dongkun Shin) has been accepted for publication at the AAAI Conference on Artificial Intelligence 2024 (AAAI-24), a premier academic conference in the field of artificial intelligence.
Paper #1: Proxyformer: Nystrom-Based Linear Transformer with Trainable Proxy Tokens (Sangho Lee, M.S. student in Artificial Intelligence, and Hayun Lee, Ph.D. student in Electrical, Electronic, and Computer Engineering)
The paper "Proxyformer: Nystrom-Based Linear Transformer with Trainable Proxy Tokens" focuses on the complexity of the self-attention operation. To address the quadratic complexity in the input sequence length n of the existing self-attention operation, the paper proposes an extended Nyström attention method that integrates the Nyström method with neural memory. First, by introducing learnable proxy tokens that serve as the landmarks of the Nyström method, the complexity of the attention operation is reduced from quadratic to linear, and landmarks that take the input sequence into account are created effectively. Second, by applying contrastive learning, the model learns to restore the attention map effectively using a minimal number of landmarks. Third, a dropout method suited to the decomposed attention matrix is developed, enabling the normalization of the proxy tokens to be learned effectively. The proposed Proxyformer approximates the attention map well with a minimal number of proxy tokens, outperforms existing techniques on the LRA benchmark, and achieves 3.8 times higher throughput with only 0.08 times the memory usage for 4096-length input sequences compared to standard self-attention (a minimal sketch of the proxy-token attention idea is shown after the abstract below).
[Paper #1 Information] Proxyformer: Nystrom-Based Linear Transformer with Trainable Proxy Tokens. Sangho Lee, Hayun Lee, Dongkun Shin. Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2024.
Abstract: Transformer-based models have demonstrated remarkable performance in various domains, including natural language processing, image processing and generative modeling. The most significant contributor to the successful performance of Transformer models is the self-attention mechanism, which allows for a comprehensive understanding of the interactions between tokens in the input sequence. However, there is a well-known scalability issue, the quadratic dependency of self-attention operations on the input sequence length n, making the handling of lengthy sequences challenging. To address this limitation, there has been a surge of research on efficient transformers, aiming to alleviate the quadratic dependency on the input sequence length. Among these, the Nyströmformer, which utilizes the Nyström method to decompose the attention matrix, achieves superior performance in both accuracy and throughput. However, its landmark selection exhibits redundancy, and the model incurs computational overhead when calculating the pseudo-inverse matrix. We propose a novel Nyström method-based transformer, called Proxyformer. Unlike the traditional approach of selecting landmarks from input tokens, the Proxyformer utilizes trainable neural memory, called proxy tokens, for landmarks. By integrating contrastive learning, input injection, and a specialized dropout for the decomposed matrix, Proxyformer achieves top-tier performance for long sequence tasks in the Long Range Arena benchmark.
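For readers unfamiliar with Nyström-style attention, the sketch below shows the core idea of using a small set of trainable proxy tokens as landmarks so that the attention cost becomes linear in the sequence length n. It assumes plain softmax kernels and a Moore-Penrose pseudo-inverse, and omits the paper's contrastive learning, input injection, and specialized dropout; it is an illustrative approximation, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNystromAttention(nn.Module):
    """Nystrom-style attention whose landmarks are m trainable proxy tokens (illustrative sketch)."""
    def __init__(self, dim: int, num_proxies: int = 32):
        super().__init__()
        self.scale = dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.proxies = nn.Parameter(torch.randn(num_proxies, dim))  # learnable landmarks

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        p = self.proxies                                  # (m, dim)
        # Three small kernels replace the full n x n attention map.
        k1 = F.softmax(q @ p.T * self.scale, dim=-1)                              # (b, n, m)
        k2 = F.softmax(p @ p.T * self.scale, dim=-1)                              # (m, m)
        k3 = F.softmax(p.unsqueeze(0) @ k.transpose(1, 2) * self.scale, dim=-1)   # (b, m, n)
        # Approximate softmax(QK^T) V with cost linear in n.
        return k1 @ torch.linalg.pinv(k2) @ (k3 @ v)

attn = ProxyNystromAttention(dim=64, num_proxies=16)
print(attn(torch.randn(2, 4096, 64)).shape)  # torch.Size([2, 4096, 64])
```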
-
- Posted: 2023-12-26
- Views: 3573
-
- [Research] Four papers from Professor Jae-pil Heo's lab accepted for publication at AAAI 2024
- Four papers from the Visual Computing Laboratory (Advisor: Prof. Jae-pil Heo) have been accepted for publication at the AAAI Conference on Artificial Intelligence 2024 (AAAI-24), a premier academic conference in the field of artificial intelligence.
Paper #1: “Towards Squeezing-Averse Virtual Try-On via Sequential Deformation” (Ph.D. student Shim Sang-heon and master's student Jung, Department of Artificial Intelligence)
Paper #2: “Noise-free Optimization in Early Training Steps for Image Super-Resolution” (Lee Min-gyu, Ph.D. student in Artificial Intelligence)
Paper #3: “VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting” (Kang Seung-gu, master's student in Artificial Intelligence; Ph.D. students in Artificial Intelligence; and Kim Eui-yeon, master's student in Artificial Intelligence)
Paper #4: “Task-disruptive Background Suppression for Few-Shot Segmentation” (Park Soo-ho, Ph.D. student in Software/Mechanical Engineering; Lee Soo-bin, Ph.D. student in Artificial Intelligence; Hyun Jin-ik, Ph.D. student in Artificial Intelligence; and Sung Hyun-seok)
The paper "Towards Squeezing-Averse Virtual Try-On via Sequential Deformation" addresses visual quality degradation in high-resolution virtual try-on image generation. Specifically, the texture of clothes is squeezed around the sleeve, as shown in the upper row of Fig. 1(a). The main cause is a gradient collision between the total variation (TV) loss and the adversarial loss, two loss functions commonly used in this field: the TV loss aims to separate the boundary between the sleeve and torso in the warped clothing mask, while the adversarial loss aims to join the two. These opposing goals feed incorrect gradients back into the cascaded appearance flow estimation, resulting in sleeve-squeezing artifacts. To address this, the paper approaches the problem from the perspective of inter-layer connections in the network. Specifically, it diagnoses that sleeve squeezing occurs because the conventional cascaded appearance flows are connected with a residual structure and are heavily influenced by the adversarial loss, and it mitigates this by introducing a sequential connection structure between the cascaded appearance flows in the last layer of the network. Meanwhile, the lower row of Fig. 1(a) shows a different type of squeezing artifact around the waist. For this, the study proposes to first warp the clothing into a tucked-out shirt style and then partially remove texture from the initial warping result, and it implements the computation required for this. Experiments confirm that the proposed technique successfully resolves both types of artifacts.
The paper "Noise-free Optimization in Early Training Steps for Image Super-Resolution" addresses the limitations of existing training methodologies and knowledge distillation in image super-resolution. Specifically, a high-resolution image is decomposed and analyzed into two key elements: an optimal centroid and latent noise. Through this analysis, the authors confirm that latent noise in the training data induces instability in early training. To address this, they propose a more stable training technique that removes latent noise during training using the Mixup technique and a pre-trained network. The proposed technique brings consistent performance improvement across multiple models in fidelity-oriented single-image super-resolution.
The paper "VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting" addresses the problem of counting objects specified by text in images. It raises two issues with the two-stage methods of previous studies: heavy computation and the possibility of error propagation. To solve these problems, the authors propose a one-stage baseline, VLBase, and VLCounter, which extends it with three main techniques. First, instead of fine-tuning CLIP, a large pre-trained model, Visual Prompt Tuning (VPT) is introduced, and textual information is added to the learnable tokens of VPT so that image features of the target object are emphasized. Second, the similarity map is fine-tuned to emphasize only the important parts of the object region rather than the whole, which increases object-centric activation. Third, to improve the generalization ability of the model and localize objects accurately, the image encoder features are integrated into the decoder and multiplied by the preceding similarity map so that the model focuses on the object region (see the sketch below). The proposed technique not only significantly exceeds the performance of existing methods but also roughly doubles training and inference speed with a lightweight model.
The paper "Task-disruptive Background Suppression for Few-shot Segmentation" addresses how to efficiently handle the background of support images in the few-shot segmentation problem, which uses a small number of images (support) and masks to find objects in new images (query).
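As a loose illustration of the text-aware similarity map described for VLCounter, the sketch below computes a patch-text cosine-similarity map and uses it to reweight image features before decoding. The CLIP visual and text encoders are replaced by placeholder tensors, and the gating is a simplification of the paper's design, not the authors' code.

```python
import torch
import torch.nn.functional as F

def text_aware_similarity_gating(patch_feats: torch.Tensor, text_feat: torch.Tensor):
    """patch_feats: (b, hw, d) image-patch embeddings; text_feat: (b, d) text embedding.
    Returns a (b, hw) similarity map and patch features reweighted by it."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    sim = (patch_feats * text_feat.unsqueeze(1)).sum(-1)   # cosine similarity per patch
    sim = sim.clamp(min=0)                                  # keep only object-like activations
    gated = patch_feats * sim.unsqueeze(-1)                 # emphasize text-relevant patches
    return sim, gated

# Toy usage with random stand-ins for CLIP visual/text embeddings.
sim_map, gated_feats = text_aware_similarity_gating(torch.randn(2, 196, 512), torch.randn(2, 512))
print(sim_map.shape, gated_feats.shape)  # torch.Size([2, 196]) torch.Size([2, 196, 512])
```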
-
- Posted: 2023-12-26
- Views: 3411
-
- [Research] Three papers from Professor Honguk Woo's lab (CSI Lab) accepted for publication at AAAI 2024
- Three papers from the CSI Lab (Advisor: Prof. Honguk Woo) have been accepted by the 38th Annual AAAI Conference on Artificial Intelligence (AAAI 2024). The papers will be presented in Vancouver, Canada, in February 2024.
1. The paper "SemTra: A Semantic Skill Translator for Cross-Domain Zero-shot Policy Adaptation" was written by Shin Sang-woo (master's program), Yoo Min-jong (Ph.D. program), and Lee Jung-woo (undergraduate program). This study concerns zero-shot adaptation, which enables embodied agents such as robots to respond quickly to changes in their surroundings without further learning. It presents the SemTra framework, which transforms multimodal data such as vision, sensor readings, and user commands into semantically interpretable skills (semantic skills), optimizes these skills for the target environment, and executes them as sequences of actions. SemTra translates implicit behavior patterns into executable skills (continuous behavior patterns) using pre-trained language models, and showed high performance when tested in autonomous-agent and robot environments such as Meta-World, Franka Kitchen, RLBench, and CARLA.
2. The paper "Risk-Conditioned Reinforcement Learning: A Generalized Approach for Adaptation to Varying Risk Measures" was written by Yoo Kwang-pyo (doctoral program) and Park Jin-woo (master's program), researchers in the Department of Software. This study proposes risk-conditioned reinforcement learning, which can be used in applications that require consequential decision-making under risk, such as finance, robotics, and autonomous driving. In particular, to cope with dynamically changing risk preferences using a single trained reinforcement learning model, the authors implement, for the first time, a reinforcement learning model based on weighted value-at-risk (WV@R) that provides a single representation of heterogeneous risk measures, enabling flexible reinforcement-learning-based decision-making in risk-management-focused applications (a toy numerical sketch of the weighted value-at-risk idea is given below).
3. The paper "Robust Policy Learning via Offline Skill Diffusion" was written by Kim Woo-kyung (doctoral program) and Yoo Min-jong (doctoral program), researchers in the Department of Software. This work presents DuSkill, an offline skill diffusion framework that uses a diffusion model to generate a variety of embodied-agent skills extended from the finite skills in a dataset. The DuSkill framework enhances the diversity of offline-learned skills, accelerates policy learning for multi-task and heterogeneous environment domains, and improves the robustness of the learned policies.
The CSI Lab conducts research on network and cloud system optimization and on embodied agents such as autonomous robots and drones, using machine learning, reinforcement learning, and self-supervised learning. The research in these AAAI 2024 papers is supported by the Human-Centered Artificial Intelligence Core Technology Development Program (IITP), the National Research Foundation of Korea (NRF) Individual Basic Research Program, and the Graduate School of Artificial Intelligence.
Honguk Woo | hwoo@skku.edu | CSI Lab | https://sites.google.com/view/csi-agent-group
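The announcement does not spell out the exact form of WV@R, so the sketch below only illustrates the underlying idea under the assumption that a weighted value-at-risk is a weighted combination of lower-tail return quantiles; the function and its parameters are illustrative, not the paper's definition.

```python
import numpy as np

def weighted_value_at_risk(returns: np.ndarray, alphas, weights) -> float:
    """Weighted combination of lower-tail quantiles (VaR) of a return sample.
    Different (alphas, weights) encode different risk preferences: a single alpha
    gives plain VaR, while many small alphas approximate CVaR-like tail measures."""
    quantiles = np.quantile(returns, alphas)        # VaR_alpha for each alpha
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights / weights.sum(), quantiles))

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=2.0, size=10_000)   # sampled episodic returns
print(weighted_value_at_risk(returns, alphas=[0.05, 0.10, 0.25], weights=[0.5, 0.3, 0.2]))
```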
-
- Posted: 2023-12-26
- Views: 2463
-
- [Research] A paper from Prof. Jinkyu Lee’s Lab. (RTCL@SKKU) published in IEEE RTSS 2023
- A paper from RTCL@SKKU (Advisor: Jinkyu Lee) has been published in IEEE RTSS 2023. IEEE RTSS is the premier conference in real-time systems, in which around 30 papers are usually published every year. This year, IEEE RTSS 2023 was held in Taipei, Taiwan. IEEE RTSS 2023 Website http://2023.rtss.org/ Real-Time Computing Lab. Website https://rtclskku.github.io/website/ - (Paper Title) RT-Blockchain: Achieving Time-Predictable Transactions - (Abstract) Although blockchain technology is being increasingly utilized across various fields, the challenge of providing timing guarantees for transactions remains unmet, which is an obstacle in implementing blockchain solutions for time-sensitive applications such as high-frequency trading and real-time payments. In this paper, we propose the first solution to achieve a timing guarantee on blockchain. To this end, we raise and address two issues for timely transactions on a blockchain: (a) architectural support, and (b) real-time scheduling principles specialized for blockchain. For (a), we modify an existing blockchain network, offering an interface to preferentially select the transactions with the earliest deadlines. We then extend the blockchain network to provide the flexibility of the number of generated blocks at a single block time. Under such architectural supports, we achieve (b) with three steps. First, to resolve a discrepancy between a periodic request of a transaction-generating node and the corresponding arrival on a block-generating node, we translate the former into the latter, which eases the modeling of the transaction load imposed on the blockchain network. Second, we derive a schedulability condition of the modeled transaction load, which guarantees no missed deadline for all transactions under a work-conserving deadline-based scheduling policy. Last, we develop a lazy scheduling policy and its condition, which reduces the number of generated blocks without compromising the degree of timing guarantees for the work-conserving policy. By implementing RT-blockchain on top of an existing open-source blockchain project, we demonstrate the effectiveness of the proposed scheduling principles with architectural supports in not only ensuring timely transactions but also reducing the number of generating blocks. Jinkyu Lee | jinkyu.lee@skku.edu | RTCL@SKKU | https://rtclskku.github.io/website/
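The architectural interface described above, preferentially selecting the transactions with the earliest deadlines, boils down to earliest-deadline-first ordering of the pending pool. The sketch below shows that selection step only, with made-up transaction and block-capacity values; the paper's schedulability analysis and lazy block-generation policy are not modeled.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Transaction:
    deadline: float                  # absolute deadline used as the priority key
    tx_id: str = field(compare=False)

def fill_block(pending: list[Transaction], block_capacity: int) -> list[Transaction]:
    """Work-conserving, deadline-based selection: take the earliest-deadline
    transactions first until the block is full (illustrative sketch only)."""
    heapq.heapify(pending)
    return [heapq.heappop(pending) for _ in range(min(block_capacity, len(pending)))]

pending = [Transaction(12.0, "tx-a"), Transaction(3.5, "tx-b"), Transaction(7.2, "tx-c")]
print([t.tx_id for t in fill_block(pending, block_capacity=2)])  # ['tx-b', 'tx-c']
```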
-
- Posted: 2023-12-11
- Views: 2763
-
- [Research] DASH Lab: Three papers accepted for publication at CIKM 2023, and the First International Workshop on Anomaly and Novelty Detection in Satellite and Drone Systems hosted
- DASH Lab's three papers have been accepted for CIKM (Conference on Information and Knowledge Management) 2023, one of the top-tier international academic conferences in the artificial intelligence and information retrieval fields. The papers will be presented in October. The authors are doctoral candidates Eunju Park and Binh M. Le in computer science and engineering, along with master's students Beomsang Cho in computer science and engineering and Sangyoung Lee, Seungyeon Baek, and Jiwon Kim in artificial intelligence. The papers are as follows: 1. Machine unlearning research; 2. Research on deepfakes in collaboration with CSIRO's Data61 in Australia; 3. Research on datasets for online ID fraud detection. In addition, the 1st International Workshop on Anomaly and Novelty Detection in Satellite and Drone Systems was hosted at CIKM 2023. The organizing committee consists of Simon S. Woo from Sungkyunkwan University, Shahroz Tariq from CSIRO's Data61, Youjin Shin from Catholic University, and Daewon Chung from the Korea Aerospace Research Institute. This workshop is centered around anomaly detection in the time-series and vision data of satellite and drone systems. 1. Sanyong Lee and Simon Woo, “UNDO: Effective and Accurate Unlearning Method for Deep Neural Networks”, Proceedings of the 32nd ACM International Conference on Information & Knowledge Management, 2023. Machine learning has evolved through extensive data usage, including personal and private information. Regulations like GDPR highlight the "Right to be forgotten" for user and data privacy. Research in machine unlearning aims to remove specific data from pre-trained models. We introduce a novel two-step unlearning method, UNDO. First, we selectively disrupt the decision boundary of forgetting data at the coarse-grained level. However, this can also inadvertently affect the decision boundary of other remaining data, lowering the overall performance of the classification task. Hence, we subsequently repair and refine the decision boundary for each class at the fine-grained level by introducing a loss to maintain the overall performance while completely removing the class. Our approach is validated through experiments on two datasets, outperforming other methods in effectiveness and efficiency. (A rough sketch of this two-step recipe is given at the end of this post.) 2. Beomsang Cho, Binh M. Le, Jiwon Kim, Simon S. Woo, Shahroz Tariq, Alsharif Abuadbba, and Kristen Moore, “Toward Understanding of Deepfake Videos in the Wild”, Proceedings of the 32nd ACM International Conference on Information & Knowledge Management, 2023. Deepfakes have become a growing concern in recent years, prompting researchers to develop benchmark datasets and detection algorithms to tackle the issue. However, existing datasets suffer from significant drawbacks that hamper their effectiveness. Notably, these datasets fail to encompass the latest deepfake videos produced by state-of-the-art methods that are being shared across various platforms. This limitation impedes the ability to keep pace with the rapid evolution of generative AI techniques employed in real-world deepfake production. Our contributions in this IRB-approved study are to bridge this knowledge gap from current real-world deepfakes by providing in-depth analysis. We first present the largest, most diverse, and most recent deepfake dataset (RWDF-23) collected from the wild to date, consisting of 2,000 deepfake videos collected from 4 platforms (Reddit, YouTube, TikTok, and Bilibili), spanning 4 different languages and created in 21 countries.
By expanding the dataset's scope beyond the previous research, we capture a broader range of real-world deepfake content, reflecting the ever-evolving landscape of online platforms. Also, we conduct a comprehensive analysis encompassing various aspects of deepfakes, including creators, manipulation strategies, purposes, and real-world content production methods. This allows us to gain valuable insights into the nuances and characteristics of deepfakes in different contexts. Lastly, in addition to the video content, we also collect viewer comments and interactions, enabling us to explore the engagements of internet users with deepfake content. By considering this rich contextual information, we aim to provide a holistic understanding of the evolving deepfake phenomenon and its impact on online platforms. 3. Eun-Ju Park, Seung-Yeon Back, Jeongho Kim, and Simon S. Woo, “KID34K: A Dataset for Online Identity Card Fraud Detection”, Proceedings of the 32nd ACM International Conference on Information & Knowledge Management, 2023. Though digital financial systems have provided users with convenient and accessible services, such as supporting banking or payment services anywhere, it is necessary to have robust security to protect against identity misuse. Thus, online digital identity (ID) verification plays a crucial role in securing financial services on mobile platforms. One of the most widely employed techniques for digital ID verification is that mobile applications request users to take and upload a picture of their own ID cards. However, this approach has vulnerabilities where someone takes pictures of the ID cards belonging to another person displayed on a screen, or printed on paper, to be verified as the ID card owner. To mitigate the risks associated with fraudulent ID card verification, we present a novel dataset for classifying cases where the ID card images that users upload to the verification system are genuine or digitally represented. Our dataset consists of replicas designed to resemble real ID cards, making it publicly available while avoiding privacy issues. Through extensive experiments, we demonstrate that our dataset is effective for detecting digitally represented ID card images, not only in our replica dataset but also in the dataset consisting of real ID cards. 4. The 1st International Workshop on Anomaly and Novelty Detection in Satellite and Drone Systems (ANSD '23). The workshop on Anomaly and Novelty Detection in Drones and Satellite data at CIKM 2023 aims to bring together researchers, practitioners, and industry experts to discuss the latest advancements and challenges in detecting anomalies and novelties in drone and satellite data. With the increasing availability of such data, the workshop seeks to explore the potential of machine learning and data mining techniques to enable the timely and accurate detection of unexpected events or changes. The workshop will include presentations of research papers, keynote talks, panel discussions, and poster sessions, with a focus on promoting interdisciplinary collaboration and fostering new ideas for tackling real-world problems. Should you have questions, please contact Professor Simon S. Woo (swoo@g.skku.edu) at DASH Lab (https://dash.skku.edu).
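As a rough sketch of the two-step recipe described for UNDO, the code below first disrupts the decision boundary of the forgetting data by training it toward random labels (coarse step) and then repairs the remaining classes by ordinary training on the retained data (fine step). The losses and loaders here are generic stand-ins, not the authors' exact method.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def two_step_unlearning(model: nn.Module,
                        forget_loader: DataLoader,
                        retain_loader: DataLoader,
                        num_classes: int,
                        lr: float = 1e-4,
                        epochs: int = 1) -> nn.Module:
    """Generic two-step class unlearning in the spirit of UNDO's description:
    (1) coarse step: disrupt the decision boundary of the forgetting data by
        training it toward random labels; (2) fine step: repair the boundary
        of the remaining classes by ordinary training on the retained data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):                                  # step 1: disrupt
        for x, _ in forget_loader:
            random_y = torch.randint(0, num_classes, (x.size(0),))
            loss = ce(model(x), random_y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    for _ in range(epochs):                                  # step 2: repair
        for x, y in retain_loader:
            loss = ce(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```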
-
- Posted: 2023-09-18
- Views: 3071
-
- [Research] Professor Eom Young-ik's laboratory (Distributed Computing Laboratory, DCLab) has a paper accepted for publication at SOSP 2023
- The paper "MEMTIS: Efficient Memory Tiering with Dynamic Page Classification and Page Size Determination" by Professor Eom Young-ik's Distributed Computing Laboratory and Dr. Lee Tae-hyung has been accepted for publication at the 29th ACM Symposium on Operating Systems Principles (SOSP 2023). SOSP is the world's leading conference for researchers, developers, and programmers in computer systems (BK21+ recognized international conference for computer science, IF=4). This paper proposes how to effectively build the large memory systems required by modern data center and cloud computing environments. Professor Eom Young-ik's research team proposed MEMTIS, a new tiered memory system utilizing DRAM, non-volatile memory (NVM), and CXL memory devices, which are next-generation hardware. Based on its own high-performance memory page management techniques, MEMTIS delivers up to 169% higher performance than state-of-the-art tiered memory systems. This study was conducted as an international joint study between Professor Eom Young-ik's research team and Professor Min Chang-woo's research team at Virginia Tech in the United States. With this SOSP paper, the Distributed Computing Laboratory became the first domestic laboratory to publish two or more SOSP papers (FragPicker at SOSP 2021 and MEMTIS at SOSP 2023). In addition, this is the third top-tier conference paper published by Professor Eom Young-ik's research team this year alone, following ASPLOS and MobiCom. [SOSP 2023] The 29th ACM Symposium on Operating Systems Principles, October 23-26, 2023 https://sosp2023.mpi-sws.org/ [About the paper] MEMTIS: Efficient Memory Tiering with Dynamic Page Classification and Page Size Determination. Taehyung Lee, Sumit Kumar Monga, Changwoo Min, Young Ik Eom. 29th Symposium on Operating Systems Principles (SOSP 2023). Abstract: The evergrowing memory demand fueled by datacenter workloads is the driving force behind new memory technology innovations (e.g., NVM, CXL). Tiered memory system is a promising solution which harnesses such multiple memory types with varying capacity, latency, and cost characteristics in an effort to reduce server hardware costs while fulfilling memory demand. Prior works on memory tiering make suboptimal (often pathological) page placement decisions because they rely on various heuristics and static thresholds without considering overall memory access distribution. Also, deciding the appropriate page size for an application is difficult as huge pages are not always beneficial as a result of skewed accesses within them. We present Memtis, a tiered memory system that adopts an informed decision-making for page placement and page size determination. Memtis leverages access distribution of allocated pages to optimally approximate the hot data set to the fast tier capacity. Moreover, Memtis dynamically determines the page size that allows applications to use huge pages while avoiding their drawbacks by detecting inefficient use of fast tier memory and splintering them if necessary. Our evaluation shows that Memtis outperforms state-of-the-art tiering systems by up to 169.0% and their best by up to 33.6%. Distributed Computing Lab: http://dclab.skku.ac.kr/xe/
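The core placement decision in Memtis, approximating the hot data set to the fast-tier capacity from the access distribution, can be pictured with the toy function below. Page numbers, counts, and the exact selection rule are illustrative; the real system works with low-overhead sampled access histograms and also handles huge-page splintering.

```python
def approximate_hot_set(access_counts: dict[int, int], fast_tier_pages: int) -> set[int]:
    """Toy version of 'fit the hottest pages into the fast tier': rank pages by
    observed access count and take as many of the hottest pages as the fast tier
    can hold. Memtis does this continuously from sampled access histograms rather
    than by exact sorting of per-page counters."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:fast_tier_pages])

# Page number -> sampled access count; the fast tier holds 2 pages in this toy example.
counts = {0x1000: 120, 0x2000: 3, 0x3000: 57, 0x4000: 9}
print(sorted(approximate_hot_set(counts, fast_tier_pages=2)))  # [4096, 12288]
```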
-
- Posted: 2023-08-28
- Views: 2199
-
- [Research] System Security Laboratory (Professor Hojoon Lee): paper accepted for publication at ACM CCS 2023
- The paper "Capacity: Cryptographically-Enforced In-process Capabilities for Modern ARM Architectures" by Dinh Kha (Ph.D. candidate), Cho Kyu-won (Ph.D. candidate), and Noh Tae-hyun (Master's candidate), under the guidance of Professor Lee Ho-jun (https://sslab.skku.edu), has been accepted for publication at the ACM Conference on Computer and Communications Security (CCS) 2023, one of the four major security conferences. The paper will be presented in November. Today's software poses a significant challenge in eliminating vulnerabilities due to its large and complex code base, as well as continuous changes, which often lead to numerous security incidents. In particular, the monolithic nature of various software components residing in a single address space makes the entire program vulnerable even with a single security flaw. To address this issue, the research on In-Process Isolation (IPI) has been widely conducted, aiming to mitigate the risks of vulnerabilities in different domains by isolating programs into multiple domains. The proposed technology, Capacity, extends the existing access control capabilities of operating systems using ARM's new hardware features, namely Pointer Authentication and Memory Tagging Extension, to achieve capability-based access control. Capacity implements a Capability system by cryptographically signing memory pointers and file descriptors, which are reference types for process resources, using keys unique to each domain and verifying their use in all instances. By adhering to the capability philosophy, robust mechanisms are employed to maintain the security of signed references, ensuring high security levels. The practicality and performance of Capacity have been validated through its application to real-world programs such as NGINX and OpenSSH.
-
- Posted: 2023-08-08
- Views: 3087
-
- [Research] Prof. Jee-hyong Lee's Research Lab (IISLab) publishes a paper at ICCV 2023
- [Abstract] Deep learning models need to detect out-of-distribution (OOD) data in the inference stage because they are trained to estimate the train distribution and infer the data sampled from the distribution. Many methods have been proposed, but they have some limitations, such as requiring additional data, input processing, or high computational cost. Moreover, most methods have hyperparameters to be set by users, which have a significant impact on the detection rate. We propose a simple and effective OOD detection method by combining the feature norm and the Mahalanobis distance obtained from classification models trained with the cosine-based softmax loss. Our method is practical because it does not use additional data for training, is about three times faster when inferencing than the methods using the input processing, and is easy to apply because it does not have any hyperparameters for OOD detection. We confirm that our method is superior to or at least comparable to state-of-the-art OOD detection methods through the experiments.
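The abstract does not give the exact scoring rule, so the sketch below only illustrates one plausible way to combine the two signals it mentions: the feature norm and the Mahalanobis distance to class means under a tied covariance. The combination (norm minus minimum class distance) is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def fit_mahalanobis(train_feats: np.ndarray, train_labels: np.ndarray):
    """Class means and a shared (tied) precision matrix estimated from in-distribution features."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(0) for c in classes])
    centered = train_feats - means[np.searchsorted(classes, train_labels)]
    cov = centered.T @ centered / len(train_feats)
    return means, np.linalg.pinv(cov)

def ood_score(feat: np.ndarray, means: np.ndarray, prec: np.ndarray) -> float:
    """Higher score = more in-distribution (illustrative combination):
    large feature norm and small Mahalanobis distance to the closest class mean."""
    diffs = means - feat
    mahal = np.min(np.einsum("cd,de,ce->c", diffs, prec, diffs))
    return float(np.linalg.norm(feat) - mahal)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))               # stand-in for penultimate-layer features
labels = rng.integers(0, 10, size=500)
means, prec = fit_mahalanobis(feats, labels)
print(ood_score(feats[0], means, prec))          # toy score for one sample
```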
-
- Posted: 2023-07-25
- Views: 3442