Software Defined Networks

Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking.

SDN requires some method for the control plane to communicate with the data plane. One such mechanism is OpenFlow, which is the first standard communication interface defined between the control and forwarding layers of an SDN architecture. OpenFlow allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). A protocol like OpenFlow is needed to move network control out of the networking switches to logically centralized control software. OpenFlow can be compared to the instruction set of a CPU: the protocol specifies basic primitives that can be used by an external software application to program the forwarding plane of network devices, just as the instruction set of a CPU would program a computer system. The OpenFlow protocol is implemented on both sides of the interface between network infrastructure devices and the SDN control software. OpenFlow uses the concept of flows to identify network traffic based on pre-defined match rules that can be statically or dynamically programmed by the SDN control software.
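
As a rough illustration of the flow abstraction that OpenFlow exposes, the Python sketch below models a switch-side flow table whose prioritized match rules are programmed by an external controller. The FlowEntry/FlowTable classes, field names and example rule are simplifications for illustration only, not the actual OpenFlow protocol or any controller API.

    # Minimal sketch of the flow-table abstraction programmed by an SDN controller.
    # Field names and classes are illustrative, not the real OpenFlow wire protocol.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FlowEntry:
        priority: int      # higher value wins when several entries match
        match: dict        # e.g. {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}
        actions: list      # e.g. ["output:2"] or ["drop"]

    class FlowTable:
        """Data-plane side: holds entries programmed by the (remote) controller."""
        def __init__(self):
            self.entries: list[FlowEntry] = []

        def add_flow(self, entry: FlowEntry) -> None:
            # In a real switch this would be triggered by an OFPT_FLOW_MOD message.
            self.entries.append(entry)
            self.entries.sort(key=lambda e: e.priority, reverse=True)

        def lookup(self, packet: dict) -> Optional[FlowEntry]:
            # Return the highest-priority entry whose match fields are all satisfied.
            for entry in self.entries:
                if all(packet.get(k) == v for k, v in entry.match.items()):
                    return entry
            return None   # no match: a real switch would consult the controller

    # Usage: the controller pre-programs a rule; the switch then forwards on its own.
    table = FlowTable()
    table.add_flow(FlowEntry(priority=10, match={"in_port": 1}, actions=["output:2"]))
    entry = table.lookup({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"})
    print(entry.actions if entry else "send to controller")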

Network Functions Virtualization

Network-function virtualization (NFV) is a network architecture concept that proposes using the technologies of IT virtualization to virtualize entire classes of network node functions into building blocks that may be connected, or chained, to create communication services.

NFV relies upon, but differs from, traditional server-virtualization techniques, such as those used in enterprise IT. A virtualized network function, or VNF, may consist of one or more virtual machines running different software and processes, on top of standard high-volume servers, switches and storage, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function.

The NFV framework consists of three main components:

  1. Virtualized network functions (VNFs) are software implementations of network functions that can be deployed on a network function virtualization infrastructure (NFVI).
  2. Network function virtualization infrastructure (NFVI) is the totality of all hardware and software components that build the environment in which VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is regarded as part of the NFV infrastructure.
  3. Network functions virtualization management and orchestration architectural framework (NFV-MANO Architectural Framework) is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs.
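
As a loose illustration of how VNFs can be chained into a communication service, the Python sketch below treats each network function as an ordinary software function applied in sequence on shared infrastructure. The firewall/NAT functions and the toy packet format are assumptions made for the example, not part of any NFV standard.

    # Illustrative sketch of VNF chaining: each VNF is plain software running on
    # shared infrastructure, and a service is a chain of such functions.
    from typing import Callable, Optional

    Packet = dict                                  # toy packet: a dict of header fields
    VNF = Callable[[Packet], Optional[Packet]]     # a VNF transforms or drops a packet

    def firewall(pkt: Packet) -> Optional[Packet]:
        # Drop traffic to a blocked port; pass everything else through.
        return None if pkt.get("dst_port") == 23 else pkt

    def nat(pkt: Packet) -> Optional[Packet]:
        # Rewrite the private source address to a (made-up) public one.
        pkt["src_ip"] = "203.0.113.10"
        return pkt

    def service_chain(pkt: Packet, chain: list) -> Optional[Packet]:
        """Push the packet through each VNF in order; stop if any VNF drops it."""
        for vnf in chain:
            pkt = vnf(pkt)
            if pkt is None:
                return None
        return pkt

    # Usage: the same building blocks can be re-chained to create different services.
    print(service_chain({"src_ip": "10.0.0.5", "dst_port": 80}, [firewall, nat]))
    print(service_chain({"src_ip": "10.0.0.5", "dst_port": 23}, [firewall, nat]))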

Wireless Ad-hoc Sensor Networks

A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on a preexisting infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data for other nodes, so the determination of which nodes forward data is made dynamically based on the network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data.

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location. More modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.
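
As a rough illustration of the flooding approach mentioned above, the Python sketch below floods a message through a small hypothetical topology, with a seen-set suppressing duplicate retransmissions. The node names and adjacency list are made up for the example.

    # Toy sketch of flooding in an ad hoc / sensor network: every node re-forwards
    # to its neighbours, and a seen-set suppresses duplicates (breadth-first flood).
    from collections import deque

    def flood(adjacency: dict, source: str) -> set:
        """Return the set of nodes the message reaches when flooded from source."""
        reached = {source}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbour in adjacency.get(node, []):
                if neighbour not in reached:      # duplicate suppression
                    reached.add(neighbour)
                    queue.append(neighbour)
        return reached

    # Usage: sensor "s3" has no direct link to the sink; intermediate nodes relay for it.
    topology = {
        "s1": ["s2", "sink"],
        "s2": ["s1", "s3"],
        "s3": ["s2"],
        "sink": ["s1"],
    }
    print("sink reached:", "sink" in flood(topology, "s3"))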

IoT / WoT / M2M

The Internet of Things refers to uniquely identifiable objects (things) and their virtual representations in an Internet-like structure. The term Internet of Things was first used by Kevin Ashton in 1999. The concept first became popular through the Auto-ID Center and related market analysts' publications. Radio-frequency identification (RFID) is often seen as a prerequisite for the Internet of Things: if all objects and people in daily life were equipped with radio tags, they could be identified and inventoried by computers. However, unique identification of things may also be achieved through other means, such as barcodes or 2D codes.

The Web of Things is a vision inspired by the Internet of Things in which everyday devices and objects, i.e. objects that contain an embedded device or computer, are connected by fully integrating them into the Web. Examples of smart devices and objects are wireless sensor networks, ambient devices, household appliances, RFID-tagged objects, etc.

Machine-to-machine (M2M) refers to technologies that allow both wireless and wired systems to communicate with other devices of the same ability. M2M uses a device (such as a sensor or meter) to capture an event (such as a temperature or an inventory level), which is relayed through a network (wireless, wired or hybrid) to an application (software program) that translates the captured event into meaningful information (for example, that items need to be restocked). Such communication was originally accomplished by having a remote network of machines relay information back to a central hub for analysis, which would then be rerouted into a system like a personal computer.
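
As a small illustration of the M2M pattern just described (capture, relay, translate), the Python sketch below simulates an inventory sensor whose reading is relayed and turned into a restocking decision by an application. The function names, restocking threshold and JSON-based relay are assumptions made for the example.

    # Hedged sketch of the M2M pipeline: device captures an event, a network relays
    # it, and an application translates it into meaningful information.
    import json
    import random

    def read_inventory_sensor(shelf_id: str) -> dict:
        # Device side: capture an event (here, a simulated stock-level reading).
        return {"shelf": shelf_id, "stock_level": random.randint(0, 20)}

    def relay(event: dict) -> str:
        # Network side: in practice this could be a wireless, wired or hybrid link;
        # here, serialisation simply stands in for transmission.
        return json.dumps(event)

    def application(payload: str, restock_below: int = 5) -> str:
        # Application side: translate the raw event into actionable information.
        event = json.loads(payload)
        if event["stock_level"] < restock_below:
            return f"Restock shelf {event['shelf']} (only {event['stock_level']} items left)"
        return f"Shelf {event['shelf']} OK"

    # Usage: one end-to-end pass through the capture -> relay -> translate pipeline.
    print(application(relay(read_inventory_sensor("A7"))))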

Proxy Mobile IPv6 / Virtual IP Mobility

Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management protocol that enables IP mobility within a local domain without any modifications to the host's TCP/IP protocol stack. With PMIP the host can change its point of attachment to the Internet without changing its IP address. This functionality is implemented by the network, which is responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. However, when mobility involves different network interfaces, the host needs modifications in order to maintain the same IP address across those interfaces.

To address this problem, the concept of a virtual IP address can be exploited. A virtual IP address is an IP address assigned to multiple applications residing on a single host. In general, an IP address is tied to the attached Network Interface Card (NIC), and only one IP address can be assigned per card. Virtual IP addressing removes these physical constraints: it enables hosting multiple applications or virtual machines on a server with a single NIC, and multiple NICs can also be used to host a single application or virtual machine. Virtual addressing has two main advantages over physical addressing, availability and mobility; in other words, it provides application-level transparency. These advantages are typically exploited for Virtual Private Networks, Quality of Service and link failover.
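
As a conceptual sketch of how a virtual IP address decouples applications from physical interfaces, the Python code below keeps one virtual address constant while selecting whichever (hypothetical) NIC is currently healthy. The interface names, addresses and health flags are made up; no real interfaces or sockets are configured.

    # Conceptual sketch of virtual-IP link failover: the application keeps using one
    # virtual IP while the host remaps it to whichever physical interface is up.
    from dataclasses import dataclass

    @dataclass
    class Nic:
        name: str
        address: str
        is_up: bool

    VIRTUAL_IP = "10.10.0.100"   # address the application binds to, independent of any NIC

    def active_nic(nics: list) -> Nic:
        """Map the virtual IP onto the first healthy physical interface."""
        for nic in nics:
            if nic.is_up:
                return nic
        raise RuntimeError("no healthy interface to carry the virtual IP")

    # Usage: the primary link has failed, so the virtual IP moves to the backup NIC
    # without the application ever seeing a different address.
    nics = [Nic("eth0", "192.0.2.10", is_up=False),
            Nic("eth1", "198.51.100.10", is_up=True)]
    carrier = active_nic(nics)
    print(f"virtual IP {VIRTUAL_IP} is currently carried by {carrier.name} ({carrier.address})")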

Autonomic Network Systems

Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding their intrinsic complexity from operators and users. Started by IBM in 2001, this initiative ultimately aims to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. The system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (ACs) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on “self-regulating” autonomic components have recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. However, most of these approaches are typically conceived with centralized or cluster-based server architectures in mind and mostly address the need to reduce management costs rather than the need to enable complex software systems or to provide innovative services. Some autonomic systems involve mobile agents interacting via loosely coupled communication mechanisms.
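
As a rough sketch of the sensor/planner/effector loop described above, the Python code below runs a single local control loop that monitors a simulated load metric, plans an action against a high-level policy, and adjusts the number of instances accordingly. The cpu_load metric, the policy thresholds and the scaling actions are illustrative assumptions, not any particular autonomic framework.

    # Sketch of one autonomic-component control loop: a sensor monitors, a planner
    # compares against a high-level policy, and an effector adjusts the system.
    import random

    POLICY = {"cpu_load_max": 0.8, "cpu_load_min": 0.2}   # high-level policy, not low-level commands

    def sense(state: dict) -> float:
        # Sensor: self-monitoring of the managed resource (simulated here).
        return state["cpu_load"]

    def plan(load: float) -> str:
        # Planner: decide an adaptation based on the policy and current knowledge.
        if load > POLICY["cpu_load_max"]:
            return "scale_out"
        if load < POLICY["cpu_load_min"]:
            return "scale_in"
        return "no_op"

    def effect(state: dict, action: str) -> None:
        # Effector: self-adjustment of the managed resource.
        if action == "scale_out":
            state["instances"] += 1
        elif action == "scale_in" and state["instances"] > 1:
            state["instances"] -= 1

    # Usage: a few iterations of the local loop with simulated measurements.
    state = {"cpu_load": 0.0, "instances": 1}
    for step in range(5):
        state["cpu_load"] = random.random()     # stand-in for a real measurement
        action = plan(sense(state))
        effect(state, action)
        print(f"step {step}: load={state['cpu_load']:.2f} action={action} instances={state['instances']}")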

Embedded Systems and Networking

From a broader perspective, an embedded system is a device that includes a computer. The end user of an embedded device is usually unaware that a computer is present in it. The computer is used primarily to provide flexibility and to simplify the system design. A few examples are robots, PDAs, mobile phones and tablets. Classical examples of embedded systems in the networking domain are network routers, switches, hubs, electronic wristwatches, smartphones, etc.
The research areas in this field include developing heterogeneous device search and recognition mechanisms so that information can be transferred within a confined area using existing standards/protocols. They also include developing the internetworking of various devices, connected to heterogeneous networks, through an M2M home gateway. Another prominent research area is the study of network system SW core technologies and applications to support mobile/wireless technologies. To realize them, it is necessary to have secure, stable and seamless mobility support in the next-generation Internet environment and to secure core technologies and SW through organic and systematic convergence of component technologies.

Human-Computer Interaction and Its Software

Human–computer interaction (HCI) researches the design and use of computer technology, focusing particularly on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. Human–computer interaction studies the ways in which humans make, or do not make, use of computational artifacts, systems and infrastructures. In doing so, much of the research in the field seeks to 'improve' human-computer interaction by improving the 'usability' of computer interfaces. How 'usability' is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be, a desirable property of computer interfaces is increasingly debated.