This project aims to create a programmable network device, called FROG (Flexible and pROGrammable network device).
It was originally intended to execute mainly network applications, but it can now run any application that is compatible with its execution environment. Currently, standard virtual machines (implementing any service, not only network applications) and Docker containers are supported.
Multiple tenants are supported, such as a corporate ICT manager who defines a set of network applications that operate on the traffic of all enterprise employees, or even end-users who can upload their preferred services (e.g., firewall, network monitor, parental control, traffic anonymizer), which then start operating on the user's traffic. When multiple tenants are enabled, the overall service experienced by each user is the composition of the services set by each individual tenant.
In its most recent incarnation, FROG 3.0 supports any type of virtualized application, spanning from network-related services such as firewalls, network monitors, and DHCP/DNS servers, to generic applications such as a BitTorrent client, a storage server, and more. It is based on open-source software such as OpenStack and OpenDaylight, although with non-trivial modifications, including an overarching orchestrator.
This third prototype was started to address feedback coming mostly from the world of network operators. They pointed out that a box whose software was entirely written from scratch (as FROG 2.0 was) was surely very optimized, but not very compatible with a network operator's environment. In fact, a telecom operator would rather use standard hardware (i.e., standard, high-volume servers) and standard software technologies (e.g., virtual machines, OpenStack) to implement the FROG service.
We started from scratch again, this time porting our service model (users connecting to the FROG node are given the possibility to install and operate their own network applications) to "standard" technologies, such as OpenStack.
Currently, user applications come in the form of standard virtual machines (a port to Docker is planned), and a standard softswitch (currently Open vSwitch and xDPd) is configured dynamically to steer traffic among the different virtual machines under that user's control.
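To make the steering idea concrete, the sketch below builds a chain of OpenFlow rules (in `ovs-ofctl`-style match/action syntax) that forces a user's traffic through their virtual machines in sequence. This is only an illustration of the concept, not FROG's actual orchestration code; the assumption that each VM attaches to the switch with an rx/tx port pair, and all port numbers and the `steering_flows` helper itself, are hypothetical.

```python
def steering_flows(in_port, vm_ports, out_port):
    """Build ovs-ofctl-style flow rules that chain traffic through VMs.

    in_port:  switch port where the user's traffic enters (assumed)
    vm_ports: ordered list of (rx, tx) port pairs, one pair per VM (assumed)
    out_port: switch port where traffic leaves toward the network (assumed)

    Each rule matches on the port the packet arrives from and outputs it
    to the next hop in the chain: in -> VM1 -> VM2 -> ... -> out.
    """
    flows = []
    src = in_port
    for rx, tx in vm_ports:
        # Send traffic arriving from the previous hop into this VM.
        flows.append(f"in_port={src},actions=output:{rx}")
        # Whatever the VM re-emits becomes the next hop's input.
        src = tx
    # After the last VM, forward traffic out of the node.
    flows.append(f"in_port={src},actions=output:{out_port}")
    return flows


# Example: traffic enters on port 1, crosses two VMs, exits on port 6.
for rule in steering_flows(1, [(2, 3), (4, 5)], 6):
    print(rule)
```

In a real deployment, each generated string would be installed on the softswitch (e.g., via `ovs-ofctl add-flow <bridge> "<rule>"`), and re-installed whenever the user adds, removes, or reorders a VM in the chain.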
This second prototype was built entirely from scratch. It abandoned OpenFlow, which was replaced by a custom-built softswitch. User applications (Java and native C applications were supported) were installed in a custom execution environment resembling a virtual machine.
Performance was extremely high, and we were finally able to use this prototype in our daily work in our lab.
This second version of the prototype was demonstrated in Oct 2013, although it became rather stable only at the beginning of 2014.
The first version was demonstrated in Oct 2012.
It was a proof of concept using OpenFlow to redirect all the traffic of the network node to an external controller (Beacon), which hosted the applications, written in Java.
Performance was very poor, but the prototype was able to demonstrate the potential of the idea.