Hyunseung Choo

Professor

College of Software

Department of Computer Science and Engineering

Sungkyunkwan University

 

Mail :

Tel : +82-31-290-7145

Fax : +82-31-299-4134

Room : 27304

BS Mathematics, Sungkyunkwan University, 1988

MS Computer Science, University of Texas at Dallas, 1990

Ph.D. Computer Science and Engineering, University of Texas at Arlington, 1996

 

Research Areas

 

Software Defined Networks

Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking.

SDN requires some method for the control plane to communicate with the data plane. One such mechanism is OpenFlow, the first standard communication interface defined between the control and forwarding layers of an SDN architecture. OpenFlow allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). A protocol like OpenFlow is needed to move network control out of the networking switches and into logically centralized control software. OpenFlow can be compared to the instruction set of a CPU: the protocol specifies basic primitives that an external software application can use to program the forwarding plane of network devices, just as an instruction set programs a computer system. The OpenFlow protocol is implemented on both sides of the interface between network infrastructure devices and the SDN control software. OpenFlow uses the concept of flows to identify network traffic based on pre-defined match rules that can be statically or dynamically programmed by the SDN control software.
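To make the flow abstraction concrete, here is a minimal sketch of an OpenFlow controller application using the open-source Ryu framework; the framework choice and the table-miss rule it installs are illustrative assumptions, not a description of any specific system. When a switch connects, the controller installs a lowest-priority flow entry that sends unmatched packets to the controller:

```python
# Minimal Ryu app: install a table-miss flow entry when a switch connects.
# Assumes OpenFlow 1.3 and a Ryu installation (pip install ryu).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()                      # match every packet
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # Priority 0: this entry fires only when no other flow rule matches.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run with ryu-manager and point an OpenFlow 1.3 switch at the controller; higher-priority match rules can then be programmed dynamically in the same way.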

Network Function Virtualization

Network-function virtualization (NFV) is a network architecture concept that proposes using the technologies of IT virtualization to virtualize entire classes of network node functions into building blocks that may be connected, or chained, to create communication services.

NFV relies upon, but differs from, traditional server-virtualization techniques, such as those used in enterprise IT. A virtualized network function, or VNF, may consist of one or more virtual machines running different software and processes, on top of standard high-volume servers, switches and storage, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function.
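As an illustration of chaining, the sketch below models VNFs as composable packet-processing functions. This is a toy abstraction of the idea (real VNFs run as virtual machines or containers managed by an orchestration stack), and all names, fields, and addresses are assumptions for the example:

```python
# A minimal sketch of service function chaining: each VNF is modeled as a
# function from packet to packet, and a chain applies them in order.
from typing import Callable, Dict, List

Packet = Dict[str, object]           # hypothetical packet representation
VNF = Callable[[Packet], Packet]

def firewall(pkt: Packet) -> Packet:
    # Mark packets to a blocked port as dropped.
    pkt["dropped"] = pkt.get("dst_port") == 23
    return pkt

def nat(pkt: Packet) -> Packet:
    # Rewrite a private source address to a public one.
    if str(pkt.get("src_ip", "")).startswith("10."):
        pkt["src_ip"] = "203.0.113.1"
    return pkt

def chain(vnfs: List[VNF], pkt: Packet) -> Packet:
    # Pass the packet through each VNF in order, stopping on drop.
    for vnf in vnfs:
        pkt = vnf(pkt)
        if pkt.get("dropped"):
            break
    return pkt

print(chain([firewall, nat], {"src_ip": "10.0.0.5", "dst_port": 80}))
```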

The NFV framework consists of three main components:

  1. Virtualized network functions (VNFs) are software implementations of network functions that can be deployed on a network function virtualization infrastructure (NFVI).
  2. Network function virtualization infrastructure (NFVI) is the totality of all hardware and software components that build the environment in which VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is regarded as part of the NFV infrastructure.
  3. Network functions virtualization management and orchestration architectural framework (NFV-MANO Architectural Framework) is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs.

Wireless Ad-hoc Sensor Networks

A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on a preexisting infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data for other nodes, so the determination of which nodes forward data is made dynamically based on network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data. A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions such as temperature, sound, and pressure, and cooperatively pass their data through the network to a main location. More modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring.
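The sketch below illustrates the flooding-based forwarding mentioned above on a toy topology: each node re-broadcasts a message to its neighbors and uses a (source, sequence number) identifier to suppress duplicates. The topology and message format are assumptions for illustration:

```python
# A minimal sketch of flooding in an ad hoc network.
from collections import deque

topology = {                      # adjacency list: node -> neighbors
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source: str, seq: int, payload: str) -> None:
    seen = set()                          # message IDs already forwarded
    queue = deque([(source, None)])       # (current node, previous hop)
    while queue:
        node, prev = queue.popleft()
        if (node, source, seq) in seen:
            continue                      # duplicate: do not re-forward
        seen.add((node, source, seq))
        print(f"{node} received '{payload}' via {prev or 'origin'}")
        for neighbor in topology[node]:
            if neighbor != prev:          # do not echo back to the sender
                queue.append((neighbor, node))

flood("A", seq=1, payload="temperature=21C")
```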

IoT / WoT / M2M

The Internet of Things (IoT) refers to uniquely identifiable objects (things) and their virtual representations in an Internet-like structure. The term was first used by Kevin Ashton in 1999, and the concept first became popular through the Auto-ID Center and related market analysts' publications. Radio-frequency identification (RFID) is often seen as a prerequisite for the Internet of Things: if all objects and people in daily life were equipped with radio tags, they could be identified and inventoried by computers. However, unique identification of things may also be achieved through other means such as barcodes or 2D codes. The Web of Things is a vision inspired by the Internet of Things in which everyday devices and objects, i.e., objects that contain an embedded device or computer, are connected by fully integrating them into the Web. Examples of smart devices and objects are wireless sensor networks, ambient devices, household appliances, and RFID-tagged objects. Machine-to-machine (M2M) refers to technologies that allow both wireless and wired systems to communicate with other devices of the same ability. M2M uses a device (such as a sensor or meter) to capture an event (such as a temperature or an inventory level), which is relayed through a network (wireless, wired, or hybrid) to an application that translates the captured event into meaningful information (for example, that items need to be restocked). Such communication was originally accomplished by having a remote network of machines relay information back to a central hub for analysis, from which it would be rerouted into a system such as a personal computer.
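The M2M pipeline described above (a device captures an event, a network relays it, an application translates it into meaningful information) can be sketched as follows; the device name, threshold, and JSON relay are illustrative assumptions:

```python
# A minimal sketch of an M2M event pipeline: sensor -> relay -> application.
import json
import random

def sensor_read() -> dict:
    # Device side: capture an event (here, an inventory level).
    return {"device": "shelf-42", "inventory_level": random.randint(0, 20)}

def relay(event: dict) -> str:
    # Network side: serialize and forward (here, just JSON over a stub).
    return json.dumps(event)

def application(message: str) -> str:
    # Application side: turn the raw event into meaningful information.
    event = json.loads(message)
    if event["inventory_level"] < 5:
        return f"{event['device']}: items need to be restocked"
    return f"{event['device']}: stock OK"

print(application(relay(sensor_read())))
```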

Proxy Mobile IPv6 / Virtual IP Mobility

Proxy Mobile IPv6 (PMIPv6), a network-based mobility management protocol, enables IP mobility within a local domain without any modifications to the host's TCP/IP protocol stack. With PMIPv6 the host can change its point of attachment to the Internet without changing its IP address. This functionality is implemented by the network, which is responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. However, when mobility involves different network interfaces, the host must be modified to maintain the same IP address across interfaces. The concept of a virtual IP address helps address this problem. A virtual IP address is an IP address assigned to multiple applications residing on a single host. In general, an IP address depends on the attached Network Interface Card (NIC), and only one primary IP address is assigned per card. Virtual IP addressing removes these physical constraints: it enables hosting multiple applications or virtual machines on a server with a single NIC, and multiple NICs can also be used to host a single application or virtual machine. It has two main advantages over physical addressing, availability and mobility; in other words, virtual IP addressing provides application-level transparency. These advantages are commonly exploited for Virtual Private Networks, Quality of Service, and link failover.
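A minimal sketch of the application-level transparency described above: a server binds to a virtual IP rather than a NIC-bound address, so it is unaffected by changes in the underlying interface. The address below is a hypothetical example from the documentation range, and the Linux-only IP_FREEBIND option is an assumption about the deployment:

```python
# A minimal sketch: bind a server socket to a virtual IP address.
import socket

VIRTUAL_IP = "192.0.2.100"   # assumed virtual IP configured on the host

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# On Linux, IP_FREEBIND lets the bind succeed even before the virtual IP
# is configured on any interface; without it, the address must already
# be assigned (e.g., as an alias on a NIC) for bind() to succeed.
if hasattr(socket, "IP_FREEBIND"):
    srv.setsockopt(socket.IPPROTO_IP, socket.IP_FREEBIND, 1)
srv.bind((VIRTUAL_IP, 8080))
srv.listen()
print(f"serving on virtual IP {VIRTUAL_IP}:8080")
```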

Autonomic Network Systems

Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. Started by IBM in 2001, this initiative ultimately aims to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. Such a system makes decisions on its own using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (ACs) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on “self-regulating” autonomic components have recently been proposed, and a very similar trend has characterized significant research in the area of multi-agent systems. However, most of these approaches are typically conceived with centralized or cluster-based server architectures in mind and mostly address the need to reduce management costs rather than the need to enable complex software systems or provide innovative services. Some autonomic systems involve mobile agents interacting via loosely coupled communication mechanisms.
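The control loop described above can be sketched as a single autonomic component with a sensor, a planner driven by a high-level policy, and an effector. The scaled "system" (a toy work queue) and all names and thresholds are assumptions for illustration:

```python
# A minimal sketch of an autonomic control loop (sense -> plan -> effect).
class AutonomicComponent:
    def __init__(self, target_latency_ms: float):
        self.policy = target_latency_ms   # high-level policy (knowledge)
        self.workers = 1

    def sense(self, system: dict) -> float:
        # Self-monitoring: estimate current latency from queue length.
        return system["queue_len"] / self.workers * 10.0

    def plan(self, latency: float) -> int:
        # Compare the observation against the policy, decide an adjustment.
        if latency > self.policy:
            return +1                     # scale up
        if latency < self.policy / 2 and self.workers > 1:
            return -1                     # scale down
        return 0

    def effect(self, delta: int) -> None:
        # Self-adjustment: apply the planned change.
        self.workers += delta

    def loop(self, system: dict) -> None:
        latency = self.sense(system)
        self.effect(self.plan(latency))
        print(f"latency={latency:.0f}ms workers={self.workers}")

ac = AutonomicComponent(target_latency_ms=100)
for load in (5, 40, 40, 10, 2):          # varying, "unpredictable" load
    ac.loop({"queue_len": load})
```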

Embedded Systems and Networking

From a broad perspective, an embedded system is a device that includes a computer; the end user is usually unaware that a computer is present in the device. The computer is used primarily to provide flexibility and to simplify the system design. Examples include robots, PDAs, mobile phones, and tablets. Classical examples of embedded systems in the networking domain are network routers, switches, hubs, electronic wristwatches, and smartphones.

Research in this field includes developing heterogeneous device search and recognition mechanisms so that information can be transferred within a confined area using existing standards and protocols, and developing internetworking of various devices connected to heterogeneous networks using an M2M home gateway. Another prominent area studies network system software core technologies and applications supporting mobile/wireless technologies. Realizing these requires secure, stable, and seamless mobility support in next-generation Internet environments, and securing core technologies and software through organic and systematic convergence of component technologies.

Human-Computer Interaction and Its Software

Human–computer interaction (HCI) researches the design and use of computer technology, focusing particularly on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. HCI studies the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. In doing so, much of the research in the field seeks to ‘improve’ human-computer interaction by improving the ‘usability’ of computer interfaces. How ‘usability’ is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be, a desirable property of computer interfaces are increasingly debated.

Achievements

 

International Journals: 251
International Conferences: 160
International Patents: 17
Domestic Journals: 45
Domestic Conferences: 357
Domestic Patents: 226
Domestic Books: 9
Domestic Projects: 49

 

Edited Volumes

M. L. Gavrilova, O. Gervasi, V. Kumar, A. Laganà, H. Choo, Y. Mun, D. Taniar, C. J. K. Tan, “Computational Science and Its Applications – ICCSA 2006” (Part I, II, III, IV, V), Lecture Notes in Computer Science, Vols. 3980, 3981, 3982, 3983, 3984, Springer-Verlag

Professional Activities

Vice Dean, College of Information and Communication Engineering, Sungkyunkwan University

Director, Korean Society for Internet Information (KSII)

Director, Korea Information Processing Society (KIPS)

Editor of KSII Transactions on Internet and Information Systems

Director, The HCI Society of Korea

Director, Convergence Research Institute

Editor of The Journal of Supercomputing

Director, Open Standards and Internet Association (OSIA)

Member, Special Committee on Multi-Ministry Joint Technology Cooperation, Ministry of Science, ICT and Future Planning

Director, Korean Institute of Information Scientists and Engineers (KIISE)

Working Group Chair, SDN/NFV Forum, Korea Association of Network Industries

Editor of International Journal of Distributed Sensor Networks

Director, Grand-ITRC, Ministry of Science, ICT and Future Planning

University Cooperation Committee Member, Korea Information Processing Society (KIPS)

 

 

Awards and Honors

 

2016.12  Commendation, Ministry of Science, ICT and Future Planning
2016.07  HCI International 2016 Best Paper Award
2016.04  Best Paper Award, KIPS Spring Conference
2015.06  Best Paper Award, International Conference on Computational Science and Its Applications (ICCSA) 2015
2014.06  Letter of Appreciation, National Research Foundation of Korea (Review Board Member, Electronics, Information and Convergence Research Division, Basic Research Bureau, 2010-2014)
2013.01  Convergence Research Institute selected as an outstanding on-campus research institute of Sungkyunkwan University for 2012
2011.12  Outstanding Reviewer, 2011 Basic Research Program, National Research Foundation of Korea
2010.12  Selected among the 100 Outstanding National R&D Achievements, Ministry of Education, Science and Technology
2010.09  Grand Prize, Graduate Division, Hanium Job Expo 2010 Project Competition (National IT Industry Promotion Agency)
2010.04  Outstanding Technology Development Project, Sungkyunkwan University Industry-Academia Cooperation Hub University Program
2008.12  Best Paper Grand Prize, Korea Information Processing Society (KIPS)
2008.11  Service Award, KIPS
2008.05  Service Award, Korean Society for Internet Information (KSII)
2008.02  Recognition of Service Award in Appreciation for Contributions to ACM
2008.01  Listed in The Marquis Who’s Who Publications Board
2007.12  Commendation, Institute for Information Technology Advancement (IITA)
2007.06  Service Award, KSII
2006.06  Best Professor Award for Research, School of Information and Communication Engineering, Sungkyunkwan University
2005.12  Selected as an outstanding BK21 group (team leader), Deputy Prime Minister's Award
2005.05  Best Professor Award for Research, School of Information and Communication Engineering, Sungkyunkwan University
2004.05  Best Professor Awards for Research and for Teaching, School of Information and Communication Engineering, Sungkyunkwan University
2003.05  Best Paper Award, KSII Spring Conference
2002.07  Best Paper Presentation, IASTED WOC 2002
2001.05  Best Paper Award, KSII Spring Conference

Books

2012.01  Computer Networks, McGraw-Hill
2006.02  Mastering MATLAB 7, Daekwang Seorim
2005.05  Networking, Sigma Press
2005.02  Introduction to Modern Computers, 8th ed., Hongrung Science Publishing
2005.02  Computer Document Writing, Hongrung Science Publishing
2004.02  Discrete Structures, 5th ed., SciTech
2001.02  Discrete Structures, 4th ed., SciTech
2000.08  Introduction to Information Engineering, Sungkyunkwan University Press