This research lies on the border of AI and Systems research. The goal is to explore both fundamental ideas for collective behavior and applications to real distributed systems and the modelling of natural phenomena. There are many opportunities for graduate research and undergraduate projects. The purpose of this page is to list ideas for research into robust collective behavior --- how to program it and how to understand it.
Biologically-inspired Programming Paradigms
During embryogenesis, cells are capable of tolerating an astounding range of variation in cell size, cell number, cell death and growth - frog embryos, for example, when cut in half at an early stage generate two complete tadpoles, each with half the volume and half the number of cells. Even after development, many organisms are able to regenerate entire structures - some starfish can even regenerate the rest of the body from a single arm. What are the local and global organisational principles that allow these systems to adapt and reorganise? How do we design artificial systems that exhibit similar kinds of robustness?
Project ideas: (0) Investigate multi-agent algorithms modelled after robustness mechanisms in cellular systems: apoptosis, cell motility, wound repair, the immune system, regeneration, quorum sensing (1) Develop new simulation models for self-assembly, for example "intelligent bricks" that self-assemble into a specified shape given environmental cues (artificial slime mold), a programmable "cilia" surface that moves things around, or a reconfigurable structure that can self-repair by reorganising its remaining parts (2) Develop high-level programming languages that express goals in terms of functions and cues from the environment.
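As a concrete illustration of idea (0), here is a minimal sketch of a quorum-sensing rule. All names, thresholds, and the grid layout are my own illustrative assumptions: each agent counts the neighbors it can sense and activates only when a local quorum is present, so the decision survives agents being added or removed.

```python
class Agent:
    def __init__(self, pos):
        self.pos = pos          # (x, y) grid position
        self.active = False

def neighbors(agent, agents, radius=1):
    # agents within a small square neighborhood (Chebyshev distance <= radius)
    return [a for a in agents if a is not agent
            and abs(a.pos[0] - agent.pos[0]) <= radius
            and abs(a.pos[1] - agent.pos[1]) <= radius]

def quorum_step(agents, threshold=3):
    # purely local rule: activate only when enough neighbors are sensed,
    # so the decision tolerates agents appearing or disappearing at any time
    for a in agents:
        a.active = len(neighbors(a, agents)) >= threshold

# a dense 3x3 cluster reaches quorum; an isolated agent does not
cluster = [Agent((x, y)) for x in range(3) for y in range(3)]
loner = Agent((10, 10))
quorum_step(cluster + [loner])
print([a.active for a in cluster], loner.active)
```

Because no agent stores who its neighbors "should" be, the same rule keeps working as agents die or are added, which is the flavor of robustness the cellular analogies suggest.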
Related things to read:
Programming Paradigms for Sensor Networks and Smart Materials
As we miniaturize computation and sensing, it becomes feasible to embed computation in materials and the environment. What happens when we can spray computation on the walls and embed sensors into ceiling tiles? How do we engineer self-organising, self-maintaining, self-configuring systems, and how do we interact with such a system? What will allow us to automatically compile high-level goals, like "Regulate temperature per floor, and if any floor exceeds its usual temperature variation then send a message to office 554 and start the alarm, and oh, by the way is conference room 223 free?", into robust agent programs?
Project ideas: (0) Design high-level languages and global-to-local compilation for different application domains of embedded sensors (1) Design distributed algorithms for localization, neighborhood computation, spatial detection, etc, that do not rely on precise placement or timing, and can adapt to agents dying and being replaced throughout the process.
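One classic primitive behind idea (1) is the hop-count gradient from amorphous computing: a seed agent holds value 0, and every other agent repeatedly takes the minimum of its neighbors' values plus one. A minimal sketch, with an illustrative network and names of my own choosing:

```python
import math

def hop_gradient(adjacency, seed):
    """adjacency: dict mapping each node to a list of its neighbors."""
    dist = {n: math.inf for n in adjacency}
    dist[seed] = 0
    changed = True
    while changed:  # repeat purely local updates until values settle
        changed = False
        for n in adjacency:
            if n == seed:
                continue
            best = min((dist[m] for m in adjacency[n]), default=math.inf) + 1
            if best < dist[n]:
                dist[n] = best
                changed = True
    return dist

# a small line network: A - B - C - D
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(hop_gradient(net, "A"))  # → {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

The update uses only neighbor values, so it does not depend on precise placement or timing; if agents die and new ones join, rerunning the same relaxation repairs the gradient.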
Related things to read:
Programming Paradigms for Reconfigurable Robots and Swarm Robots
Many modular reconfigurable robots are currently being designed, where the goal is for these robots to be able to morph from one shape to another as needed. How do we automatically compile complex shapes into rules for self-assembly that obey the specific constraints of these systems? Current research in amorphous computing has shown that it is possible to design programming languages for self-assembly that compile complex global shapes into simple, local, and robust agent rules. How do we apply these ideas to existing reconfigurable robots?
Project ideas: (1) Apply programmable self-assembly ideas, and global-to-local compilation, to the reconfiguration of an existing modular robot (2) Investigate methods for self-repair and scale-independent shape formation (3) Design new programming languages that incorporate environmental cues or functional descriptions of shape.
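To make the scale independence of idea (2) concrete, here is a toy sketch (the rule and all names are illustrative, not taken from any particular system) in which agents arranged in a chain compare hop counts from the two ends. The same local comparison yields the same proportions at any chain length, which is the flavor of global-to-local compilation described above.

```python
def pattern(n):
    # each agent's hop count from the left end and from the right end;
    # in a real system these would come from local gradient relaxation
    left = list(range(n))
    right = list(range(n - 1, -1, -1))
    # local rule: be "head" if closer to the left end than the right
    return ["head" if l < r else "tail" for l, r in zip(left, right)]

print(pattern(6))   # 3 "head" then 3 "tail"
print(pattern(12))  # same half/half proportions at twice the size
```

Because each agent only compares two locally computable values, nothing in the rule mentions the chain's length, so the pattern rescales automatically when modules are added or removed.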
Social insects, such as ants and bees, cooperate to achieve very complex tasks, while constantly adapting to changes in the composition of their members and to large and small events in the environment. Swarms of ant-like robots are being designed that could be used to monitor areas, clean spaces, or carry out complex tasks. What are appropriate languages for describing the global goals that we would like these robots to achieve? What are the underlying general primitives, and robot capabilities, that will make it possible for these multi-agent systems to function despite failures? Can we design languages that are expressive enough, yet still free the programmer from worrying about the details of low-level coordination and of robots failing or being replaced?
Project ideas: (1) Design high-level programming languages, and low-level primitives, for robot teams for collecting and delivering objects, constructing structures, or other application domains (2) Investigate robust multi-agent control, modelled after bees and other less investigated social insects (3) Develop methods for analyzing the behavior of multi-agent robot algorithms.
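A small sketch in the spirit of ideas (1) and (3): a collection task where robots claim objects opportunistically each round rather than via fixed assignment, so a robot dying mid-task never strands an object. The failure model and all names here are illustrative assumptions, not a proposed design.

```python
import random

def collect(objects, n_robots, fail_prob=0.2, seed=0):
    rng = random.Random(seed)
    remaining = set(objects)
    robots = list(range(n_robots))
    steps = 0
    while remaining and robots:
        steps += 1
        for r in list(robots):
            if rng.random() < fail_prob:
                robots.remove(r)   # robot dies; its work returns to the pool
                continue
            if remaining:
                # grab an unclaimed object this round (a stand-in for
                # moving to and picking up the nearest one)
                remaining.discard(next(iter(remaining)))
        # no per-robot assignment state is kept between rounds, so
        # failures never leave an object permanently claimed-but-unserved
    return len(remaining), steps

left, steps = collect(range(20), n_robots=5)
print(left, steps)
```

Running the same task with `fail_prob` swept from 0 upward is one simple way to start analyzing how gracefully a coordination scheme degrades, in the spirit of idea (3).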
Related things to read:
Computational Models of Biological Systems
It is fascinating how cells with identical DNA, starting from a mostly homogeneous egg, can cooperate to form complex structures, such as ourselves. Embryogenesis involves a complex cascade of decisions that leads to the final pattern being developed. At the same time this process can show incredible robustness in the face of variations in volume, cell numbers, cell death and growth, and timing. Understanding how properties like robustness and scale-independence are achieved involves more than understanding whether each individual decision is robust (or not); it requires understanding how the system achieves robustness as a whole.
I am interested in building computational models and simulations of multicellular development, that link our growing understanding of cell signalling and molecular components with high-level observations of regional specification and morphogenesis. I am especially interested in how size regulation and robustness are achieved by complex cascades of decisions during early embryogenesis, and whether we can use observations of system-level scaling (experiments and simulation) to place conditions on current models at the cell and molecular level. As more of the components and intercellular signalling are unraveled each day, it is becoming possible, and necessary, to develop ways of expressing and questioning how the system works together. Recent examples have shown how modelling a system, and measuring whether the system follows that model, can reveal gaping holes in our understanding of systems that were thought to be completely understood. One advantage of artificial models is that, when they fail to explain experimental results, we can understand exactly how they failed, and which levels of the cascade need to be revisited. Research in this area is meant to be interdisciplinary and will involve close collaboration between computation and biology.
Project ideas: (1) Investigate a specific example of scale-independent structure formation, size regulation, or robust timing during the development of organisms such as Drosophila and Xenopus. Design computational models and conduct experiments to verify or challenge the model (2) Develop holistic models/simulations of processes during development, that capture a synthetic view of the current model and allow for comparisons of different competing explanatory models. What are the right model languages for describing development? (3) Develop models of current theories of pattern formation, and test their ability to explain observed system-level phenomena, like scaling or cut-and-paste manipulation experiments.
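To make the scaling question in idea (3) concrete, here is an illustrative threshold-readout sketch (not taken from any published model): an exponential morphogen gradient read against fixed concentration thresholds places a fate boundary at a fixed absolute distance from the source, so the pattern does not rescale with tissue size. A model this simple already exposes that failure mode explicitly.

```python
import math

def fate(x, decay=0.2, hi=0.5, lo=0.2):
    c = math.exp(-decay * x)   # morphogen concentration at position x
    if c > hi:
        return "head"
    if c > lo:
        return "trunk"
    return "tail"

def boundary(length):
    # first position whose fate is no longer "head"
    return next(x for x in range(length) if fate(x) != "head")

# doubling the tissue length does not move the boundary: the readout
# depends only on absolute distance from the source, not on proportion
print(boundary(20), boundary(40))  # → 4 4
```

Comparing such fixed-threshold readouts against scaling observed in cut-in-half or size-manipulation experiments is exactly the kind of system-level test the section describes: when the model's boundary fails to shift proportionally, some level of the cascade must be revisited.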
Programming Paradigms for Synthetic Biology
Synthetic Biology is working towards being able to program cells, by genetically engineering plasmids from pre-characterized genetic building blocks, or biobricks. So far it has been possible to create simple circuits, like inverters and flip flops and simple intercellular signalling, and the library is growing. At the same time, one of the goals is to be able to program fields of cells that coordinate their behavior, for example laying down materials in a particular arrangement or aggregating into particular formations. Many of the ideas in Amorphous Computing research were developed with these systems in mind, but there is still a large gap to be filled. For example, what is an appropriate low-level language for expressing DNA programs? Is it possible to design a language that is readable, and incidentally executable in a simulation environment?
Project ideas: (1) Design a language that can be easily translated to biobricks and tested by simulation (2) Design state machines for simple patterns: polka dots, bulls-eye, and determine what biobricks will be needed to translate these patterns into DNA that can be assembled on a plasmid. (3) Collaborate with the Biobricks group to implement simple multicellular patterns.
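As a sketch of what "executable in a simulation environment" might look like for one of the simple circuits mentioned above, here is a minimal model of a genetic inverter: a repressor input suppresses output production through a Hill function, with first-order decay. All parameters are my own illustrative choices, not measured biobrick values.

```python
def simulate_inverter(input_level, steps=2000, dt=0.01,
                      beta=1.0, K=0.5, n=2, gamma=1.0):
    """Forward-Euler integration of d(out)/dt = production - decay."""
    out = 0.0
    for _ in range(steps):
        # Hill-function repression: high input shuts off production
        production = beta / (1.0 + (input_level / K) ** n)
        out += dt * (production - gamma * out)  # synthesis minus decay
    return out

low_input_output = simulate_inverter(0.0)   # no repressor: output high
high_input_output = simulate_inverter(5.0)  # strong repressor: output low
print(low_input_output > high_input_output)  # → True
```

A language of the kind idea (1) proposes could compile a circuit description down to a system of such equations, so the same source text is both human-readable and directly testable in simulation before any DNA is assembled.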