Our Approach

CONIX systems can be decomposed into four main layers described below.

Policy and Programming
To simplify programming in highly dynamic and distributed environments, coordination, policy, and programming mechanisms must allow designers to specify high-level goals and behavioral properties. Static checking and runtime system mechanisms then interpret these goals and automatically render them onto the underlying network fabric.
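As an illustration only, the sketch below shows what this goal-driven style could look like: a designer states a latency bound and a replication requirement, a stand-in static check rejects infeasible goals, and a stand-in runtime maps the goal onto nodes. The Goal and Node fields, the check, and the placement rule are hypothetical, not part of the CONIX design.

from dataclasses import dataclass

# Hypothetical goal specification: the designer states *what* the application
# needs, not *how* to place it on the network.
@dataclass
class Goal:
    name: str
    max_latency_ms: float      # end-to-end latency bound
    min_replicas: int          # availability requirement

@dataclass
class Node:
    name: str
    latency_ms: float          # measured latency from the data source
    available: bool

def check(goal: Goal, nodes: list[Node]) -> None:
    """A stand-in for static checking: reject goals no deployment can meet."""
    feasible = [n for n in nodes
                if n.available and n.latency_ms <= goal.max_latency_ms]
    if len(feasible) < goal.min_replicas:
        raise ValueError(f"goal '{goal.name}' is not satisfiable on this fabric")

def render(goal: Goal, nodes: list[Node]) -> list[str]:
    """A stand-in for the runtime: map the goal onto concrete nodes."""
    feasible = sorted(
        (n for n in nodes if n.available and n.latency_ms <= goal.max_latency_ms),
        key=lambda n: n.latency_ms,
    )
    return [n.name for n in feasible[: goal.min_replicas]]

if __name__ == "__main__":
    fabric = [Node("edge-a", 4.0, True), Node("edge-b", 9.0, True),
              Node("cloud", 40.0, True)]
    goal = Goal("tracking", max_latency_ms=10.0, min_replicas=2)
    check(goal, fabric)
    print(render(goal, fabric))   # e.g. ['edge-a', 'edge-b']

In a real system the render step would be carried out by the network fabric's own placement machinery rather than a local function.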

Connectivity between Resources
The CONIX architecture requires abstractions, protocols, and interfaces that allow resources to be synthesized and mapped in response to dynamic environments. Flexible, highly agile in-network computing will enable each virtualized resource and its interconnections to be positioned to meet latency, congestion, and availability goals.

Virtualized Resources
Digital representations of the underlying physical environment must capture the mixture of computational, storage, sensing, actuation, and communication resources, which vary in both time and location. We envision all resources (cloud, network, and edge) being captured and manipulated much as virtualized servers and storage are managed in modern data centers today.
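A minimal sketch of such a digital representation follows, under the assumption that every resource, whether a cloud VM, a network switch, or an edge sensor, can be described by one uniform, time- and location-tagged record. The Resource and Inventory types and their fields are hypothetical.

from dataclasses import dataclass, field
from enum import Enum
import time

class Kind(Enum):
    COMPUTE = "compute"
    STORAGE = "storage"
    SENSING = "sensing"
    ACTUATION = "actuation"
    COMMUNICATION = "communication"

@dataclass
class Resource:
    resource_id: str
    kind: Kind
    location: tuple[float, float]          # (lat, lon); varies for mobile nodes
    capacity: dict[str, float]             # e.g. {"cores": 4, "mbps": 100}
    last_seen: float = field(default_factory=time.time)

class Inventory:
    """A minimal registry, manipulated much like a data-center resource pool."""
    def __init__(self) -> None:
        self._resources: dict[str, Resource] = {}

    def register(self, r: Resource) -> None:
        self._resources[r.resource_id] = r

    def find(self, kind: Kind, min_capacity: dict[str, float]) -> list[Resource]:
        return [
            r for r in self._resources.values()
            if r.kind == kind
            and all(r.capacity.get(k, 0.0) >= v for k, v in min_capacity.items())
        ]

if __name__ == "__main__":
    inv = Inventory()
    inv.register(Resource("cam-7", Kind.SENSING, (40.44, -79.94), {"fps": 30}))
    inv.register(Resource("gw-1", Kind.COMPUTE, (40.44, -79.95), {"cores": 4}))
    print([r.resource_id for r in inv.find(Kind.COMPUTE, {"cores": 2})])  # ['gw-1']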

Physical/Platforms
Perception-Cognition-Action loops exist at extreme and diverse spatio-temporal scales, spanning from (local) mixed-reality applications all the way to (wide-area) smart and connected urban spaces. The next generation of platforms supporting these applications will require mechanisms for hardware-software co-design, including cognitive architectures and flexible accelerators at the edge.


CONIX Pillars

The following elements represent the core building blocks that span each of our research themes.

Machine Learning for Resilience

Systems need the ability to learn about their operating environments as well as their own physical attributes, and to adapt over timescales that may range from sub-second to years. Devices will continuously self-optimize their parameters at runtime in a data-driven manner.
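The sketch below illustrates one simple form of such runtime self-optimization: an epsilon-greedy tuner that picks among candidate sampling periods based on observed reward. The candidate settings and the reward signal are invented for illustration; a deployed system could use any online learning method.

import random

class EpsilonGreedyTuner:
    """Choose among candidate settings, favoring those with higher observed reward."""
    def __init__(self, candidates, epsilon=0.1):
        self.candidates = list(candidates)
        self.epsilon = epsilon
        self.counts = {c: 0 for c in self.candidates}
        self.values = {c: 0.0 for c in self.candidates}   # running mean reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.candidates)                       # explore
        return max(self.candidates, key=lambda c: self.values[c])       # exploit

    def update(self, candidate, reward):
        self.counts[candidate] += 1
        n = self.counts[candidate]
        self.values[candidate] += (reward - self.values[candidate]) / n

if __name__ == "__main__":
    tuner = EpsilonGreedyTuner(candidates=[10, 50, 200])    # sampling periods, ms
    for _ in range(1000):
        period = tuner.choose()
        reward = 1.0 / period - 0.001 * random.random()     # stand-in for measured utility
        tuner.update(period, reward)
    print(max(tuner.values, key=tuner.values.get))          # converges to the best period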

Safe, Secure, Smart and Scalable Programming

Programming models and runtime environments need to support safe, network-scale applications with performance and security guarantees. Future systems will need to detect and respond to threats autonomously, at machine time-scales.

Spatio-Temporal Awareness

In-network computing needs to allow flexible positioning of data and computation to meet latency, congestion, and availability goals. It is enabled by software-defined data planes, which complement current software-defined networks that focus largely on the control plane and miss important data and time dependencies.
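As a hedged illustration, the sketch below scores candidate in-network locations against latency, congestion, and availability and picks a placement within a latency bound. The nodes, weights, and scoring rule are illustrative assumptions, not a CONIX algorithm.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    rtt_ms: float      # latency to the data source
    load: float        # 0.0 (idle) .. 1.0 (saturated); a proxy for congestion
    uptime: float      # observed availability, 0.0 .. 1.0

def place(candidates, max_rtt_ms, w_latency=1.0, w_load=1.0, w_uptime=2.0):
    """Return the best candidate within the latency bound, or None if none fits."""
    feasible = [c for c in candidates if c.rtt_ms <= max_rtt_ms]
    if not feasible:
        return None
    def score(c):
        # Trade off latency and congestion against availability.
        return (-w_latency * c.rtt_ms / max_rtt_ms
                - w_load * c.load
                + w_uptime * c.uptime)
    return max(feasible, key=score)

if __name__ == "__main__":
    nodes = [Candidate("switch-1", 2.0, 0.90, 0.99),
             Candidate("edge-gw", 6.0, 0.20, 0.95),
             Candidate("regional-dc", 25.0, 0.10, 0.999)]
    best = place(nodes, max_rtt_ms=10.0)
    # Prints 'edge-gw': within the bound and far less congested than the nearer switch.
    print(best.name if best else "no feasible placement")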

In-Network Coordination and Control

Network mechanisms need to allow sense-learn-decide-actuate control loops to execute within specified time bounds, with the necessary inter-loop synchronization, across arbitrary spatial scales.
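A minimal, hypothetical sense-learn-decide-actuate loop with a per-iteration deadline is sketched below; the stage functions are stubs, and the period and deadline values are invented. The point is the time-bound check at a fixed cadence, not any particular control law.

import time

PERIOD_S = 0.050          # loop runs every 50 ms
DEADLINE_S = 0.040        # each iteration must finish within 40 ms

def sense():            return 21.5                        # e.g. read a temperature sensor
def learn(reading):     return {"estimate": reading}       # update a model of the environment
def decide(state):      return state["estimate"] < 22.0    # pick an action
def actuate(turn_on):   pass                               # drive a heater relay, etc.

def run(iterations=5):
    next_tick = time.monotonic()
    for _ in range(iterations):
        start = time.monotonic()
        actuate(decide(learn(sense())))
        elapsed = time.monotonic() - start
        if elapsed > DEADLINE_S:
            # A real deployment would degrade gracefully or hand off to a
            # nearer compute node rather than just logging the miss.
            print(f"deadline miss: {elapsed * 1000:.1f} ms")
        next_tick += PERIOD_S
        time.sleep(max(0.0, next_tick - time.monotonic()))

if __name__ == "__main__":
    run()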

Cognitive Architecture and Accelerators for the Edge

Intelligence must be pushed further into edge devices via cognitive architectures and accelerators for learning algorithms and satisfiability modulo theories (SMT) solvers. These enable predictability guarantees in the face of operational uncertainty, better latency and resource usage, and safety and security assurances.
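To make the SMT connection concrete, the following is a hedged example of a predictability query, assuming the z3-solver Python bindings are available; the stage names and worst-case execution-time bounds are invented. It asks whether any admissible combination of stage times can break a 2 ms loop deadline; an unsat answer is the guarantee.

from z3 import Ints, Solver, unsat

perception, cognition, action = Ints("perception cognition action")
s = Solver()
# Stage execution times (microseconds), each within its assumed WCET range.
s.add(perception >= 300, perception <= 800)
s.add(cognition >= 500, cognition <= 900)
s.add(action >= 100, action <= 200)
# Ask whether any admissible combination can BREAK the 2 ms loop deadline.
s.add(perception + cognition + action > 2000)

if s.check() == unsat:
    print("guarantee holds: no admissible schedule exceeds 2 ms")
else:
    print("counterexample:", s.model())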