Exploring the Role and Possible Utility of Criticality in Evolutionary Games on Ising-Embodied Neural Networks

Sat, 02 Jun 2018

At the heart of the scientific process is the yearning for universal characteristics that transcend historical contingency and generalize the unique patterns observed in nature. In many cases these universal characteristics bleed across the different branches of the tree of knowledge and almost serendipitously relate disparate systems to one another. In the last century, a growing consciousness of one such emerging universality has sparked an obsession with the ideas behind criticality and phase transitions. One domain where criticality pops up quite often is the dynamic domain of life (and complexity), and more recently the organization of the brain and society. What is becoming clear about criticality is that it isn't just one possible mode of organization; it seems to be an attractor in mode-space. For whatever reason, criticality, the self-organization of criticality, and evolution are deeply intertwined, and the goal of this project is to explore this intersection of ideas.

The present post was written by Sina Khajehabdollahi, and is the product of a collaboration with Olaf Witkowski, funded partly by the ELSI Origins Network. This research will be published in the Proceedings of the 2018 Conference on Artificial Life (ALIFE 2018), which will take place in Tokyo, Japan, July 23-27, 2018.

Criticality and Phase Transitions

Modes of Matter and their Collective Organizational Structures

Our understanding of the phases of matter starts from ancient beginnings and has its origins rooted in the phenomenological experience of the world rather than scientific inquiry. Earth, Water, Air, Fire, Aether: from these everything is made, or so it was (and sometimes still is) thought. We have since moved on to more 'fundamental' concepts like quarks and electrons, waves and plasmas; modes of matter and the ways that they can organize.

Is this a recursive process? Is ‘matter’ simply a sufficiently self-sufficient ‘mode of organization’? The brain as a phase of neural matter? The human as a phase of multi-cell life, multi-cell life as a phase of single-cell life, single-cell life a phase of molecular matter, etc.

Matter: A self-sufficient mode of organization with ‘reasonably’ defined boundaries.

The phases of nature are most apparent when they are changing, for example in the condensation of vapor, the freezing of water, or the sublimation and melting of ice. Emphasis must be put on the notion that phase transitions generally describe sudden changes and reorganization with respect to 'not-so-sudden' changes in some other variable. In other words, phase transitions tend to describe how small quantitative changes can sometimes lead to large qualitative ones.

Associated with phase transitions are critical points (or perhaps critical regimes, in the case of biological systems), where these critical points are analogous to a Goldilocks zone between two opposing 'ways of organization'. In an abstract/general sense, a critical point is the point where order and disorder meet, though there may certainly be transitions between more than two modes of order. By tuning some choice of 'control parameter' and measuring your mode of 'order', a phase diagram can be plotted. In the case of Figure 1, this is done for the control parameters temperature and pressure for relatively 'simple' statistical physics systems.

Figure 1 Typical Phase Diagram of Matter and Transition Points

However, such a plot is not readily available when discussing complex systems like humans or human society and its idiosyncratic modes of interaction, partly because of a lack of computational power (which is constantly improving), but more fundamentally because of our constrained knowledge of complex/chaotic systems and the mathematics that describes them.

Finally, the importance of criticality arises from this 'best-of-both-worlds' scenario, given that the 'both worlds' are distinct and useful modes of order/disorder. For example, in discussions of the human brain the dichotomy is often between segregation and integration: the brain is a modularly hierarchical structure of interconnected neurons which can take advantage of 'division-of-labor' strategies while also being capable of integrating the divided labor into a larger whole. There seems to be an appropriate balance of these forces: between efficiency and redundancy, between determinism and randomness, between excitation and inhibition, etc. The most appropriate 'order parameter' is not obvious, and there are many possible choices, along with many more choices for 'control parameters'.

We need a sufficiently simple model to play with, and to that end we look to the Ising model.

Ising Model

Phenomenological model of phase transitions

The Ising model is probably the simplest way to model a many-body system of interacting elements. In its simplest form (at least the simplest that exhibits a phase transition), binary elements that can only take the values ±1 are organized on a 2D lattice grid, interacting only with their nearest neighbours. The model can serve as an analogy for solids or gases with local interactions, as originally intended, but it is now also used more generally for modelling neurons in the brain, or humans and agents in socio-economic contexts.

Figure 2 Visualization of a 2D Ising Model at Criticality

The success of this simple model lies in its ability to exhibit a phase transition by changing only one control parameter, the temperature of the heat bath. As the temperature increases, larger and larger energetic fluctuations are allowed, where the energy of a system in a given configuration is:

$$ E = - \sum_{\left \langle i, j \right \rangle} J_{i,j} s_i s_j - \sum_i h_i s_i $$

where the summation \( \left \langle i, j \right \rangle \) is over all nearest-neighbour pairs, \( J_{i,j} \) is the interaction strength between nodes \(i\) and \(j\), and \(h_i\) is the locally applied field/bias. The model tends to minimize its energy by favouring 'spin-flips' that lower \(E\). Unfavourable spin-flips are still allowed, with probability given by a Boltzmann factor:

$$ p \sim e^{-\frac{\Delta E}{T}} $$

As the control parameter temperature is varied, the system acts more or less randomly. A critical temperature exists at which the qualitative organization of the model changes discontinuously. At the critical point, the system exhibits long-range correlations and maximizes information transfer between nodes, properties deemed extremely useful, if not necessary, for intelligent systems.
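To make the dynamics concrete, here is a minimal sketch of the standard Metropolis update for a square-lattice Ising model (numpy and a uniform coupling J are assumed; the lattice size and temperature are illustrative, not the values used in our simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, J=1.0, h=0.0, T=2.27):
    """One Metropolis sweep over an L x L lattice of +/-1 spins."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum over the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Energy change from flipping spin (i, j): dE = 2 * s_ij * (J * nn + h).
        dE = 2.0 * spins[i, j] * (J * nn + h)
        # Favourable flips are always accepted; unfavourable ones with
        # probability exp(-dE / T), the Boltzmann factor above.
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

spins = rng.choice(np.array([-1, 1]), size=(64, 64))
for _ in range(200):
    metropolis_sweep(spins, T=2.27)  # T_c of the 2D square lattice is ~2.269 J
```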

Figure 3 Statistical measures of an N=83 Ising model with mean=0, normally distributed connectivity matrix. Note the critical point near β=0.2 as seen in the discontinuity in the Magnetic Susceptibility as well as peaks/qualitative changes in the other measures.

Figure 3 visualizes some basic statistical measures of an example Ising system as a function of \( \beta = 1/T \), simulated using Monte Carlo Metropolis methods. In this case, instead of a 2D lattice grid, a fully-connected graph with normally distributed (mean 0) random edge weights is generated. The transition point of this model is most easily discerned from the discontinuity in the magnetic susceptibility plot, though the transition is quite noticeable in the other variables as well.

Ising-embodied Neural Networks

It’s Alive! Making the Ising model do things!

Now, the point of this project wasn't simply to play with any old Ising model; the goal was to embody these Ising models in a context and bring them to life, so that we can see how (and if) criticality plays a role in adaptation and survival. To that end, we introduce the Ising organism: an object class that has its own unique connectivity matrix and acts as an independent neural-network organism, as visualized in Figure 4.

Figure 4 Basic architecture of each organism’s Ising-embodied neural network.

The sensor neurons of these organisms are sensitive to the angle to the closest food, the distance to the closest food, and a directional proximity signal with respect to other organisms. These sensor (input) neurons are connected to a layer of hidden neurons, which are in turn connected both to themselves and to motor (output) neurons. The motor neurons control the linear/radial acceleration and deceleration, effectively identical to the steering wheel and gas/brakes of a car.

A community of 50 organisms with this architecture (but each with its own set of unique weights) is generated and placed in a 2D environment that spawns food (Figure 5). A generation is defined as an arbitrary amount of time (for example, 4000 frames) during which all organisms can explore their environment by moving around. When an organism (green circle) gets close enough to a parcel of food (purple dot), the food disappears and a counter adds to the score of that organism. Food parcels respawn instantly upon being eaten, so there is always an infinite supply.
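The game loop itself is simple; the sketch below only captures its structure (a RandomWalker stands in for the real Ising organism, whose motor neurons would set the velocity, and the eating radius is an illustrative value, not the one used in our simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
EAT_RADIUS = 0.02   # illustrative eating distance
FRAMES = 4000       # one generation, as defined above

class RandomWalker:
    """Stand-in for the Ising organism: here the agent just drifts randomly."""
    def __init__(self):
        self.pos = rng.random(2)
    def move(self):
        self.pos = (self.pos + 0.01 * rng.normal(size=2)) % 1.0  # toroidal world

organisms = [RandomWalker() for _ in range(50)]
food = rng.random((100, 2))
scores = np.zeros(len(organisms))

for _ in range(FRAMES):
    for k, org in enumerate(organisms):
        org.move()
        d = np.linalg.norm(food - org.pos, axis=1)
        nearest = d.argmin()
        if d[nearest] < EAT_RADIUS:        # close enough to eat:
            scores[k] += 1                 # the organism scores a point
            food[nearest] = rng.random(2)  # and the food respawns instantly
```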

From here, we can begin to explore ways in which these Ising-embodied organisms can adapt to their environment.

Figure 5 Sample frame of a community of 50 Ising-embodied neural network organisms.

Critical Learning

Strategies for approaching criticality

Initially, this project was motivated by "Criticality as It Could Be: organizational invariance as self-organized criticality in embodied agents", a project by Miguel Aguilera and Manuel G. Bedia. In their work they demonstrate that criticality can be learned by an arbitrary Ising-embodied neural network simply by applying a gradient descent rule to the edge weights of the network. The gradient descent rule attempts to learn a distribution of correlation values assigned a priori (Figure 6), and governs the adaptation of the organism:

$$ h_i \leftarrow h_i + \mu (m_i^* - m_i^m) \\ J_{ij} \leftarrow J_{ij} + \mu (c_{ij}^* - c_{ij}^m) $$

The local fields and interaction strengths (edge weights) are updated with respect to the measured mean activations \( m_i^m \) and the reference activations \( m_i^* \) from the known critical system, and similarly for the measured correlation values \( c_{i,j}^m \) versus the reference correlation values \( c_{i,j}^* \) from the known critical system. \( \mu = 0.01 \) is the learning rate. We introduce a concept analogous to that of the generation: the semester, an arbitrary number of time points (again, 4000 frames for example) during which the organisms generate correlation/mean-activation statistics, at the end of which the gradient descent rule is applied. In short, the gradient descent rule is applied once per semester.

By learning the correlation distributions of a known critical Ising system, an arbitrary network can also learn to be critical without fine-tuning any control parameter like temperature.
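In code, the per-semester update is just the rule above applied element-wise; a minimal sketch (the array names are illustrative, and the semester statistics are assumed to be accumulated elsewhere):

```python
import numpy as np

def critical_learning_step(h, J, m_ref, c_ref, m_meas, c_meas, mu=0.01):
    """Nudge fields and couplings towards the reference (critical) statistics."""
    h = h + mu * (m_ref - m_meas)   # match mean activations m_i
    J = J + mu * (c_ref - c_meas)   # match pairwise correlations c_ij
    return h, J
```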

Figure 6 Left: The connectivity matrix used to generate critical Ising correlations. This matrix is taken from the Human Connectome Project and is a rough map of brain interconnectivity. Right: The correlation distribution of the connectivity matrix pictured.

Aguilera et al. embody these networks in simple games (a car in a valley, double-pendulum balancing) and demonstrate that once the systems converge towards criticality, the agents maximize their exploration of phase space and position themselves at the precipice of behavioural modes. However, whatever success these agents demonstrated in critical learning, exploration and playfulness, they lacked the ability to actually play these games to win. Arguably, these goals may ultimately be perpendicular; however, in trying to contextualize criticality within life-like systems, we must push forward and see if we can apply critical learning and evolutionary selection simultaneously.

Genetic Algorithm and Evolutionary Selection

Playing games, competing for high scores, mating, mutating and duplicating

Parallel to the critical learning algorithm, we introduce a genetic algorithm whose goal is simply to reward those organisms that have eaten the most food. Using a combination of elitist selection and mating, successful organisms are rewarded by duplication and the chance to mate and generate offspring. Mutations accompany this process and allow for the stochastic exploration of the organisms' genotype space.

Using the previously defined concept of a generation, the genetic algorithm is applied once at the end of each generation.

Visualizing Community Evolution

Observing the evolution/learning process

So we now have a community of Ising-embodied organisms playing a foraging game in a simple, shared 2D environment, in which we introduce the concepts of generations (the time span on which the genetic algorithm operates) and semesters (the time span on which the critical learning algorithm operates).

Let's look at these two algorithms separately, and then together. When combining the two algorithms, however, we face a dilemma: the two adaptation algorithms tend to destroy the learning done by each other. In other words, the genetic algorithm does not conserve what is learned by the critical learning algorithm, and vice versa. Perhaps, though, if we combine these two algorithms in some appropriate ratio of time scales, we can achieve an adaptation scheme that is relatively continuous and non-destructive to its previous adaptations. The choice of a ratio of 6 semesters to 1 generation is made so that the learning algorithm has time to kick in before the GA can select successful foragers.

First, let's look at how the genotypes evolve across the generations. We use tSNE projections in Figure 7 to visualize the relationships between the connectivity matrices of the different organisms. As a general dimensionality-reduction technique, tSNE projections help us visualize the groupings of high-dimensional data, in this case all the edge weights in each individual neural network. We then cluster these networks across all generations to observe how they adapt through the GA, critical learning, or both methods combined.
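Such a projection can be produced with scikit-learn; a minimal sketch (the random matrix is only a placeholder for the real stacked connectivity matrices):

```python
import numpy as np
from sklearn.manifold import TSNE

# One row per organism per generation, each row a flattened connectivity
# matrix: shape (n_organisms * n_generations, n_edges).
weights = np.random.rand(500, 100)  # placeholder data
embedding = TSNE(n_components=2, perplexity=30).fit_transform(weights)
# Scatter embedding[:, 0] vs embedding[:, 1], coloured by generation and
# sized by fitness, to produce a plot like Figure 7.
```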

Figure 7 tSNE Projections of the Connectivity Matrix of all 50 organisms across 2000 generations. Perplexity = 30. The color coding corresponds to the generation age and the marker size corresponds to the fitness level.

It's useful to know how the fitness of these organisms changes across the generations, so we plot that in Figure 8 to get a feel for how well each algorithm did at playing our game.

Figure 8 Average fitness across 2000 generations. Solid lines are community averages, dark shaded areas represent bounds of one standard deviation, light shaded areas show the minimum and maximum bounds.

Finally, to compare our simulations to critical systems, we measure the specific heat of each organism as a function of β. If the evolution algorithm works as intended, we should see the peaks of the specific heat converge to β = 1, the β at which the simulation is natively run.
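The specific heat itself can be estimated from energy fluctuations via the fluctuation-dissipation relation \( C = \beta^2 (\langle E^2 \rangle - \langle E \rangle^2) \); a minimal sketch (the synthetic samples only illustrate the interface):

```python
import numpy as np

def specific_heat(energies, beta):
    """Fluctuation-dissipation estimate: C = beta**2 * Var(E)."""
    return beta**2 * np.var(np.asarray(energies, dtype=float))

# Example with synthetic Monte Carlo energy samples (std 5 -> C ~ 25 at beta=1):
rng = np.random.default_rng(0)
print(specific_heat(rng.normal(-100.0, 5.0, size=10_000), beta=1.0))
```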

Figure 9 Specific Heat vs. β for the 3 different adaptation paradigms.


From the fitness functions in Figure 8, it is clear that the GA by itself is the most successful in increasing the fitness of the organisms. This is not altogether surprising, as we expected that the two algorithms would undo each other's work if they were not appropriately made compatible. Looking at the tSNE projections of the GA-only paradigm, we note that its evolution is marked by major individual milestones (the large clusters), where successful organisms duplicate and seed the next generation of mutated organisms. However, even this strategy plateaus quite rapidly, within approximately 250 generations. The specific heat of this paradigm both starts and ends centered near β = 1, so it is not clear whether the slight shift in the curves is within error bounds or a real pattern; we would have to run many iterations of this experiment to know for sure. Another helpful experiment would start the community of organisms farther away from criticality (for example, by changing the local temperatures (betas) of each organism); then we might be better able to see whether evolution within this foraging game drives the system towards criticality.

For the critical-learning-only paradigm, the tSNE projections are straightforward. Here the learning is not contingent on the fitness of the organisms at all, and therefore all learning is done 'locally'. No organisms die (because of the lack of a GA), and so there is continuity in the genotypes. This paradigm seems to sharpen the specific heat plots near β = 1, making for a more dramatic/steeper transition; this contrasts with the evolution algorithm, which did the opposite. The fitness function of this paradigm is quite awful, again as expected, since this algorithm is not learning how to play the foraging game; it is only intended to learn 'criticality'.

Finally, when combining the two paradigms together (in a ratio of 6 semesters per 1 generation), a slightly more complicated relationship forms. In the tSNE projections we observe stronger continuity between the generations; perhaps the critical learning algorithm here is pushing all organisms in a universal direction, which clusters their genotypes closer together than in the evolution-only paradigm. Unfortunately, however, this pattern does not result in any interesting increase in the fitness function of the community, which instead behaves erratically or stochastically when compared to the other two paradigms. In a sense this gives hope relative to the critical-learning-only paradigm, since the erratic behaviour pushes fitness higher rather than lower. Finally, no deep insight is visible in the specific heat plots of this paradigm either, as our initial condition and the generation-2000 results overlap, showing no indication of a trend away from criticality. The failure of this experiment was in not ensuring that the system did not start out critical!

It seems our fears have come true in a sense: the two algorithms run perpendicular to each other, each undoing the work of the last. Furthermore, our experiments had the unfortunate feature that their initial conditions were very close to criticality to begin with, which made it difficult to make obvious observations. However, our preliminary results seem to indicate that the evolution algorithm and the critical learning algorithm work in opposing directions: the GA tends to pull the system towards subcriticality, whereas the critical learning algorithm pulls the system towards supercriticality (pushing the peaks of the specific heat curves either left or right in Figure 9). There needs to be a way for both of these forces to contribute to the connectivity of the network without undoing each other's previous learning.

Summary

TL;DR

This project aimed to incorporate the ideas of criticality into biological, evolutionary simulations of Ising-embodied neural networks playing a foraging game. We ran 3 experiments testing 2 different adaptation algorithms (critical learning and the GA), as well as their combination in a ratio of 6 learning semesters to 1 GA generation. Our experimental results highlighted some of the shortcomings of our setup, namely that our initial conditions were too close to criticality to distinguish any flows in parameter space. In the next experiment, communities will spawn with varying distributions of local temperatures, which will force the specific heat peaks away from β = 1. This way, as the community evolves, it will be much clearer whether or not there is a flow towards β = 1; such a flow could indicate the utility of criticality. Nonetheless, in this process a set of tools was constructed for future experiments to further explore the interaction and self-organization of communities into criticality, contextualized within a GA paradigm.

Finally, it was observed that critical learning and the GA tend to work in opposite directions, which can be a good thing, as sometimes half the battle is finding the right forms of 'order' and 'disorder'. However, we were not able to reconcile these differences, and instead the two algorithms tended to deconstruct each other at every turn. Future experiments in this paradigm could introduce two layers of connectivity, one for the GA and another for learning, which overlap but evolve semi-independently. This reconciliation may allow for a more stable form of interaction and a more diverse set of behaviours and genotypes. The idea here is that, through the playful/exploratory nature of criticality, plateaus in the evolutionary process can be overcome faster, thereby decreasing the time per generation needed to improve the community's fitness.

Information and regulation at the origins of life

Tue, 13 Feb 2018

An ongoing debate in artificial life involves the definition and characterisation of living systems. This definition is often taken for granted in some fields, listing either a set of functions (e.g. reproduction, self-maintenance, metabolism) or a set of properties (e.g. DNA) that attempt to fully describe living systems. It is however common nowadays to question most of these attempts, finding possible counterexamples to these arguments (e.g. mules don't reproduce, and some viruses, if alive at all, do not contain any DNA), and showing how elusive the definition of life really is. Recent efforts in addressing the origins of life propose to include information theory in order to describe the blurry line between non-living and living architectures.

My work here at ELSI is largely based on this last idea, with a strong drive to include concepts derived from dynamical systems and control theory alongside information theoretical measures. Control theory is the study of regulatory processes mainly deployed in engineering systems, the simplest examples being a thermostat, built for instance to regulate the temperature inside a building, or a cruise controller on a car, maintaining a constant speed despite changes in the environment like wind or the slope of a road. Biological systems show an extraordinary ability to regulate their internal states, from temperature to pH, to different chemical levels. These processes are usually addressed under the umbrella term “homeostasis”, representing the ability of living systems to finely tune conditions for their persistence. (“Homeorhesis” might be more correct in this context, with the former referring to static equilibria and the latter to stable trajectories. For simplicity we will however use the perhaps more familiar term, homeostasis.)

Is there a way to use this remarkable, and in some ways perhaps unique, capacity for adaptation to formally characterise the origins of life? Homeostasis is most definitely a requirement for living systems, but is it a sufficient condition? Probably not. Replication and other crucial functions of biological systems are, in my opinion, not easily described in terms of regulation alone, and artificial systems can clearly be built to show some similar properties (e.g. the thermostat mentioned previously). I believe it is however vital to consider its role at the origins of life, as perhaps suggested in the formulation of autopoiesis and related theories.

In order to define homeostasis more rigorously, I explicitly refer to frameworks from control theory and one of their most influential applications to the study of the natural sciences in the last century, cybernetics. In particular, a few key results: the "law of requisite variety", later incorporated into the "good regulator theorem" (GRT), and the "internal model principle" (IMP). All of these results point at a common concept for control and homeostasis: regulation implies the presence of a predictive model within the system being regulated. In other words, to maintain certain properties within bounds (i.e. to be a "good regulator"), a system must be able to generate an output (control signal) capable of counteracting the effects of the input disturbances that may affect the system itself. For instance, consider a thermostat trying to regulate the temperature of a room to be around 20°C. If the temperature is 12°C, the thermostat must be able to increase the temperature by 8°C; if it can't, then it won't be a good regulator for this system. While this last statement may sound trivially true, it really isn't, since it allows us to say that predictive (or generative) models are present in a system of interest (e.g. one could write down a model of a good thermostat with the necessary information to tune the temperature).
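As a toy illustration of the point, here is a minimal bang-bang thermostat (all numbers are illustrative): regulation only succeeds when the controller's output can counteract the disturbance it faces, which is exactly the sense in which the regulator must "contain a model" of that disturbance.

```python
def thermostat_step(temp, setpoint=20.0, heater_power=1.0, leak=0.5):
    """One time step: heat if below the setpoint while the room leaks heat.

    Regulation succeeds only while heater_power can outpace the leak;
    a heater weaker than the disturbance cannot be a good regulator.
    """
    heating = heater_power if temp < setpoint else 0.0
    return temp + heating - leak

temp = 12.0
for _ in range(30):
    temp = thermostat_step(temp)
print(round(temp, 1))  # settles into a small oscillation around 20 degrees
```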

The metaphorical relationship between a thermostat and homeostasis in living systems.

Living systems could metaphorically be seen as very complicated thermostats. They respond to most disturbances while avoiding decay, and they regulate their temperature alongside several other variables including pH, oxygen intake and various chemical levels. Models of the origins of life should, in my opinion, be able to characterise the abundance of regulatory mechanisms in living systems starting from simpler chemical reactions. Here at EON/ELSI I began investigating models of reaction-diffusion systems. These models try to capture the spatial and temporal changes in concentration of one or more interacting chemicals. The one I focused on, the Gray-Scott system, represents one of the most studied autocatalytic models (an autocatalytic model is one where the product of a chemical reaction is also a catalyst for the same reaction). This model has previously been proposed as a testing ground for theories of the origins of life, as shown in work by Nathaniel Virgo (now here at EON/ELSI) and colleagues.

Gray-Scott model of a reaction diffusion system. P is taken to be an inert product.
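For reference, the standard Gray-Scott equations are ∂u/∂t = Du ∇²u − uv² + F(1−u) and ∂v/∂t = Dv ∇²v + uv² − (F+k)v, where u is the substrate and v the autocatalyst. A minimal numpy sketch of the update follows (the parameter values are common demo choices rather than the exact ones from my simulations; different (F, k) regimes give the solitons and u-skates discussed below):

```python
import numpy as np

def laplacian(Z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    uvv = u * v * v                       # autocatalysis: u + 2v -> 3v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

# Homogeneous substrate with a square of autocatalyst dropped in the middle.
n = 100
u, v = np.ones((n, n)), np.zeros((n, n))
u[45:55, 45:55], v[45:55, 45:55] = 0.5, 0.25
for _ in range(5000):
    u, v = gray_scott_step(u, v)
```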

The Gray-Scott system shows a wide variety of patterns emerging through the reaction of as few as two chemicals. Different parameters in the model also allow different behaviours to emerge: moving patterns as well as ones that don't move, or blobs that divide in a mitosis-like fashion (for an example, refer to this video). In my simulations I focused on two specific patterns: a moving one commonly addressed as the "u-skate", given its u-shape and its movement in a straight line if left in isolation, and a stable one also known as a type of "soliton" (a non-moving pattern that doesn't divide/replicate). Work in this area usually focuses on the emergence of complex behaviour via the interactions of several patterns; my focus here is however on the analysis of the properties of a single one. For simplicity, then, I set up the initial conditions (i.e. dropping a specific quantity of a chemical) necessary for the formation of only one shape per simulation. The goal of these simulations was to identify significant changes in some information quantities between the pattern of interest and its environment. Part of my intention was also to avoid pre-specifying conditions and rules in order to recognise the pattern itself, looking for ways in which information measures alone could define whether a shape is formed and whether information is stored within it in a meaningful way. During this study I focused on two questions:

  1. Can one of these patterns show better predictions of its future states, if compared to its “environment” (i.e. chemicals not forming patterns)? Is it a relevant way of describing the formation of complex structures that could lead to life?
  2. Can one of these patterns be shown to encode information from its environment? Is there a flow of information from the environment to the pattern showing how information is aggregated in the pattern itself?
On the left side, the non-moving spot emerging after some chemical is dropped in the middle of a 50×50 units grid. On the right side, the moving pattern emerging after some chemical is dropped in the middle of a 100×100 units grid.

For the analysis, I focused on Predictive Information (PI) and Transfer Entropy (TE) measures. The former was used to investigate whether a blob's ability to predict its future states is qualitatively different from that of its surroundings; in other words, is self-prediction a good indicator of the emergence of complex structures, maybe life? The latter was used to see whether directed information exchange emerges between a blob and its environment; in other words, is there a relevant information flow emerging when chemicals organise into more robust structures?
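Strictly speaking, PI is the mutual information between a process's entire past and its entire future; the sketch below estimates a simplified single-lag version with a naive histogram estimator (the lag and bin count are illustrative):

```python
import numpy as np

def predictive_information(x, lag=10, bins=8):
    """Mutual information (bits) between x(t) and x(t + lag)."""
    past, future = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(past, future, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of the past
    py = joint.sum(axis=0, keepdims=True)   # marginal of the future
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum()

# A slowly varying signal carries more bits about its future than white noise.
rng = np.random.default_rng(0)
slow = np.cumsum(rng.normal(size=5000))
print(predictive_information(slow), predictive_information(rng.normal(size=5000)))
```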

Preliminary results show that PI appears to correlate well with the dynamics of pattern formation, indicating perhaps how information is stored in the pattern itself during its formation. Over a very long simulation time, however, PI shows no difference between solitons and their environments, suggesting that once the chemical system has reached a stable enough state, this quantity does not meaningfully capture differences between a stable blob and its surroundings. In the case of the u-skate, I tracked the movement of the shape (thus momentarily dropping one of my goals, the automatic recognition of patterns from information measures alone) and measured PI in a moving frame in an attempt to capture the dynamics of this pattern. The results are at the moment being analysed, with waves of PI that seem to propagate from the pattern at its formation (probably due to its movement, generated by changes in chemical gradients around the shape). In the long run, for the u-skate too we can see that PI seems not to capture differences between the pattern and its surroundings, with similar levels of PI for different parts of the system. In both the soliton and the u-skate, the most promising results at the moment seem to emerge from a perturbation analysis that I started recently, with small amounts of chemicals dropped in several areas of the grid (including on the shape itself) at quite a high frequency. In this case, preliminary evidence might suggest that the shapes maintain high PI compared to the environment in spite of perturbations. My speculation at the moment is that PI might be capturing the robustness of the shape to perturbations: the patterns are more robust, and therefore better at predicting their future states, than chemicals not organised into patterns.

Example of the average (over 10000 time units) Predictive Information (PI) for the u-skate moving pattern. PI is measured in bits (colorbar on the right side). We restricted the measure to a portion of the initial grid (100×100 units), focusing on a 40×40 units partition tracking the moving shape. Lag for PI measure: 10 time units. The grid was perturbed with units of chemicals dropped at random locations centred around the pattern.

The analysis using Transfer Entropy is still in the works, with issues mostly due to the fact that it is not computationally feasible to measure TE between all possible combinations of time series, even on a discretised grid. We are at the moment considering ways to coarse-grain the system in a meaningful way.

To summarise my work (mostly in progress):

  1. How can we define the emergence of life from chemical systems?
  2. Can information theory in conjunction with control theory be used to quantify and explain information contents in simple chemical systems that are relevant for the origins of life?
  3. What are (if any) the relevant informational features of a living system?

In an attempt to answer these questions, I set up some simulations with a model of reaction-diffusion equations of two chemicals, the Gray-Scott system. This system is known to show the emergence of many different patterns with quite diverse behaviours, and it has been suggested before as a possible test ground for theories of the origins of life (1.).

One of the processes that I would consider general to living systems is homeostasis, the ability to maintain some quantities within boundaries (e.g. temperature, oxygen level). In control theory it is well known that regulation processes (like homeostasis) require the presence of an internal (generative or predictive) model storing information about the environmental disturbances affecting the system (2.). In my first attempts to investigate this idea, I focused on two information measures that might help quantify information in simple patterns of the Gray-Scott system, information that might relate to the presence of such a predictive model. The two measures are Predictive Information (i.e. how much the past of a variable tells you about its own future) and Transfer Entropy (i.e. how much the past of a variable tells you about another variable's future). The results are still very preliminary, but there are small hints suggesting that information measures like PI might correlate with a general concept of robustness to perturbations, which I believe to be fundamental for living systems.
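For completeness, here is a minimal histogram-based TE estimator of the kind this analysis relies on (a single-step history and the bin count are simplifying, illustrative choices; real estimates need more careful binning and bias correction):

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """TE x->y (bits): what x(t) adds about y(t+1) beyond y(t) itself."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    joint = np.zeros((bins, bins, bins))   # counts over (y_next, y_now, x_now)
    for a, b, c in zip(yd[1:], yd[:-1], xd[:-1]):
        joint[a, b, c] += 1
    p = joint / joint.sum()
    p_yy = p.sum(axis=2)        # p(y_next, y_now)
    p_y = p.sum(axis=(0, 2))    # p(y_now)
    p_yx = p.sum(axis=0)        # p(y_now, x_now)
    return sum(p[a, b, c] * np.log2(p[a, b, c] * p_y[b] / (p_yy[a, b] * p_yx[b, c]))
               for a, b, c in np.argwhere(p > 0))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)  # x drives y with one step of lag
print(transfer_entropy(x, y), transfer_entropy(y, x))  # the first is much larger
```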

About the author: Manuel Baltieri

Manuel is a PhD candidate at the University of Sussex, Brighton, UK. His research interests include the relationship between information theory and control theory in biological and physical systems. His research project currently focuses on theories of Bayesian inference applied to biology and neuroscience, such as active inference and predictive coding, and their connections to embodied theories of cognition. In his work, Manuel uses a combination of analytical and computational tools derived from probability/information theory, control theory and dynamical systems theory.

Twitter: @manuelbaltieri

Guided Self Organization in Complex Adaptive Systems

Mon, 12 Feb 2018

The following pertains to a new line of research on the topic of guided self-organization that was discussed with researchers at ELSI during my visit in the period December 5, 2017 – January 5, 2018.

This research proposes a novel computational method for enabling Guided Self-Organisation (GSO) in artificial complex adaptive systems. Complex adaptive systems are networks comprising many components (nodes), where nodes are highly connected and there is no central control, but where sophisticated global self-organising behaviour is exhibited as a result of local interactions between nodes. Self-organisation is a ubiquitous emergent phenomenon in many natural complex systems: for example, collective gathering and construction behaviour in some social insects, cortical map formation during neuronal development in animal brains, and swarming behaviour to ward off predators. Examples of artificial complex adaptive systems include the Internet, major urban-area traffic flow, and swarm robotic systems. GSO is the study of how to manipulate the complex system's nodes and their interactions so that new patterns and structures emerge, guiding the system towards a desired state.

The potential impact of this line of research is that human designers of artificial complex adaptive systems could devise computational methods that adapt the behaviour and interactions of individual nodes such that the complex system self-regulates and appropriate (user-desired) global behaviours emerge in response to external (the system's environment) and internal (system node) changes. That is, GSO (described by new computational methods) would ensure that an artificial complex system self-regulates given external pressures or component failures, meaning the system reconfigures itself as required and continues to function properly. Thus, if GSO were to be successfully implemented as a governing computational method that could self-organise the global behaviour of any artificial complex system, then such artificial complex systems would be far more resilient and robust to damage and changes in their environment. For example, automated self-organisation of problem-solving behaviour to cope with hub failures and network traffic congestion would be highly desirable to increase the robustness and resilience of large-scale computer networks.

Currently there is no theoretical or practical formalisation of GSO principles, though such a formalisation (for example, a computational method or mathematical theory) would have many applications in a diverse range of disciplines.

The research problem is thus how the behaviours of individual system components and their interactions (for example, computer network nodes) must be adapted in order that the system (for example, a large-scale computer network) produces a desired solution (for example, maintaining optimal traffic flow given disruptions to specific nodes and network connectivity).

Other envisaged applications include applying GSO to adapt the collective traffic flow of autonomous vehicles that must efficiently navigate urban road networks, the self-assembly and adaptation of complex nano-structures used in engineering new materials, and swarm robotic behaviours that emerge to effectively and efficiently solve complex physical tasks such as collective construction.

Studying the origins of learning and memory using neuroevolution

Fri, 02 Feb 2018

Memory and learning are two of the cornerstones of our cognition. Without learning, organisms would be restricted to the behaviors described by their genetic code. This can pose a problem when the environment they are born into is not entirely predictable. With learning, organisms can acquire new skills and increase their chances of survival and reproduction by fully exploiting their surroundings.

The current trend in research on learning and memory focuses on synapses. Since the discovery that experiences modify synapses, every memory present in the brain has been studied within that framework. But if we look at the origins of life and the advantages provided by learning in the everyday life of an organism, it is not implausible that other mechanisms for learning were present before the appearance of plastic synapses. For instance, an early example of learning without synapses can be seen in the slime mold Physarum polycephalum. Without having a brain, this organism can measure the duration between two stimuli, memorise it and predict the next appearance. This ability originates from the complex internal dynamics of the chemicals composing it.

In my work, I am interested in finding out whether learning mechanisms similar to the ones found in Physarum polycephalum can exist in the brain, albeit using the interactions between neurons instead of chemicals. To that end, in my previous research, I used artificial neural networks to model the brain, and tuned their parameters using evolutionary algorithms so that they could complete tasks requiring specific learning abilities.

My research during my visit at ELSI focuses on the origins of time perception and symbolic memories. In my previous work, I evolved neural networks without synaptic plasticity that were capable of memorising symbolic information. The memory was stored inside a fixed attractor, or in its attractor basin. In another study, I applied the same methodology to the origins of time perception and discovered that the memory could also be stored in different trajectories within the dynamical landscape of the network (see Figure 1). These two phenomena require learning and memory, but the dynamics that evolved in the two studies differed greatly. This made me wonder what kind of mechanism would evolve if a task required both symbolic memory and time perception to be completed. Maybe both mechanisms would exist in different neural modules, or one would evolve faster and the second would rely on the neural systems evolved for the first, or a single mechanism might implement everything. I set up an evolutionary robotics experiment to find out which of these it could be. My goal is that at the end of this experiment I will have a better understanding of how early mechanisms for learning and memory evolved, and how they might have interacted.
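The basic neuroevolution loop underlying this kind of experiment is simple; a minimal sketch follows (the quadratic toy fitness stands in for actually running a network on the memory/timing task, and all hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, n_params, pop_size=50, generations=200, sigma=0.1):
    """Minimal elitist neuroevolution over flat network-parameter vectors."""
    pop = rng.normal(size=(pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 5:]]        # keep top 20%
        parents = elite[rng.integers(0, len(elite), pop_size)]  # resample elite
        pop = parents + sigma * rng.normal(size=parents.shape)  # mutate
    return pop[np.argmax([fitness(p) for p in pop])]

# Toy usage: in the real experiment, 'fitness' runs the evolved network
# on a task requiring memory and/or time perception.
best = evolve(lambda p: -np.sum(p**2), n_params=20)
```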

Figure 1: Each colored line represents one trajectory in the internal dynamics of a complex neural network when asked to measure and remember the duration of one stimulus. Five durations could be memorised (1s to 5s).
The language of exoplanet ranking metrics needs to change (in Nature Astronomy)

Fri, 03 Feb 2017

Elizabeth Tasker, Joshua Tan, Kevin Heng, Stephen Kane, David Spiegel & the ELSI Origins Network Planetary Diversity Workshop

We have found many Earth-sized worlds, but we have no way of determining whether their surfaces are Earth-like. This makes it impossible to quantitatively compare habitability, and pretending we can risks damaging the field.

See more at: http://www.nature.com/articles/s41550-017-0042

Article of Interest: Continents and The Rise of Atmospheric Oxygen

Fri, 10 Jun 2016

A recent article published in Nature Geoscience discusses how the increase in atmospheric oxygen was likely linked to continent composition and growth. Other factors, not discussed in detail in the paper, include photosynthetic life, nutrient concentration, water, etc. For the origin of life on Earth, oxygen plays a quintessential role, but researchers have not been able to fully understand the evolution of Earth's atmosphere from oxygen-free conditions to its present state. Lee et al. demonstrate that the Great Oxidation Event (~2.5–2.0 billion years ago) occurred due to a change in the composition of Earth's crust from basaltic to felsic magmatism, essentially decreasing the amount of available oxygen sinks, such as Fe2+ and S2–. The second oxidation event (the Neoproterozoic Oxygenation Event) was likely affected by CO2 accumulation on continents. After providing detailed evidence for the effects of continents on the two oxidation events, Lee et al. model the CO2 and O2 concentrations over time. This paper provides a new perspective on the rise of atmospheric oxygen, which is necessary to further understand the origin of life.

Links to Official Press Releases:

University of Tokyo: English | Japanese (includes ~3 min video!)

Rice University: English

Authors of Paper:

Department of Earth Science, Rice University, Houston, Texas, USA
Cin-Ty A. Lee, Laurence Y. Yeung & Adrian Lenardic

Department of Geology and Geophysics, Yale University, New Haven, Connecticut, USA
N. Ryan McKenzie

Department of Earth Sciences, University of Hong Kong, Pokfulam, Hong Kong, China
N. Ryan McKenzie

Atmosphere and Ocean Research Institute, University of Tokyo, Japan
Yusuke Yokoyama & Kazumi Ozaki

“The deep sea, the origin of life, and astrobiology” – A short article appeared in the Institute Letter

Tue, 17 May 2016
A deep-sea hydrothermal vent photographed through the DSV Alvin porthole by Donato Giovannelli during a dive at 2500 m

Recently, a short article I wrote on Earth's last frontier, the deep sea, appeared in the Institute for Advanced Study's Institute Letter. In the article I briefly speak about the discovery of deep-sea hydrothermal vents, and how deep-sea exploration has changed our view of life and habitability. You can read the article at the following link: https://www.ias.edu/ideas/2016/giovannelli-last-frontier. I strongly believe that deep-sea exploration, and a better understanding of the largest ecosystem on our planet, could help us shed light on the emergence and evolution of life on our planet.

Enjoy!

ECOLI: Early Career Origin of Life Initiative

Tue, 08 Mar 2016

If you attended the exciting “Re-Conceptualizing the Origins of Life” conference last year, you'll probably remember a bunch of us talking about organizing an early-career workshop. Well, the time has come to put that idea into action! Now introducing ECOLI: the “Early Career Origin of Life Initiative”. Because at least four of us are from Arizona State University, we thought it would be easier (logistically) to host this workshop in Arizona. Originally, we had thought that a 2-week workshop would provide ample time to establish lasting collaborations and friendships, but others expressed that a 1-week workshop would be more agreeable with other responsibilities.

Speaking of Arizona, it would need to be a winter workshop… not a summer one! We are looking to target January 2017… unless there is another group who would rather host the workshop at their local institution. If that’s the case, we could aim for a summer.

Several of us are still very much interested in putting a workshop like this together. It would be absolutely fantastic to build a community among the early career scientists! We all work on very exciting things, and this will give us a chance to learn each other’s perspectives and invent new routes of investigation. But since this is a grassroots effort, there are still some things that need to be agreed upon! These are:

  • Should we do this in the Phoenix-ish, Arizona area? (If so, then it will be a winter workshop)
  • If not, then is anyone else willing to take the initiative on hosting?
  • Should it be one or two weeks long?

Please, feel free to jump in on the discussion! The more voices that can actively participate, the sooner this can become a reality!

Information ecology and the evolution of complexity

Wed, 02 Mar 2016
Evolved tree from an unrelated simulation (Chapter 9) of complexity-increasing evolution. However, the simulation will never produce anything but trees that compete by being taller – there is no possibility of innovation or novel mechanism.

'Open-ended increase in complexity' is something we often want to find or make in ALife. The idea that we could come up with some simulation that, if we just threw more time and computing power at it, would constantly reveal new things to us is a bit of a holy grail. It also seems to be something that life on Earth has, at first glance, managed to do (although there have been bursting periods and stagnant periods). In general, if you had visited Earth 2 billion years ago, there would have been a large number of biological innovations yet to come about, despite the fact that there had been life on Earth for quite some time up to that point. 2 billion years is a very long time to go (in terms of generations of cells) without realizing that you've done everything there is to do.

It seems to be pretty difficult to get, though. Evolution alone isn't enough; you can find all sorts of simulations with evolution where it happily goes along, discovering new things, until it finds that it has discovered the best thing ever: the best possible replicator, the most competitive genome, the optimal solution to the problems posed by the simulation. Once it finds that, the system just sits there doing that thing for the rest of time. Alternatively, there are simulations where a fluctuating environment or competitive cycle causes the organisms to constantly change what they're doing. But when you look carefully, you find that the system is repeating itself, or just doing things randomly (so that it is, in some sense, statistically stationary). Or, more subtly, it might even go off and make something arbitrarily elaborate and complex which remains qualitatively identical: it makes all sorts of elaborate trees, but it never makes something that isn't a tree.

When someone first starts doing ALife and is first faced with this conundrum, there’s a sequence of ideas that almost everyone goes through. ‘Maybe we just need to run it longer?’, ‘Maybe the system just needs to be bigger?’, ‘What if you add a fluctuating environment?’, etc. These are all pretty easy to try out, and so one quickly explores these possibilities and builds an intuition about what each of these things is doing (hopefully). But there’s a much harder one to answer which keeps coming up: ‘What if, to get a lot of complexity in life, we need a lot of complexity in the laws of physics/the world?’.

This is a particularly tricky thing, because it's one of those hypotheses that, if it's really the answer to the question, shuts down the purpose of the whole endeavor. If we can only get out what we put in, why bother at all? It also looks counter-intuitive given what we see in the real world: chemistry has a lot of complexity and structure, but it's a natural result of quantum mechanics, which has much less complexity 'baked in'. There are many more chemical compounds than chemical elements, and this sort of combinatoric character feels like it should get you something. We also see things like programming languages, where a program can be much more complex than the compiler used to generate it.

Even if the idea feels unsatisfying, it would be good to give it a fair shake and see whether or not there's anything to it. Maybe, rather than just getting out what we put in, there's a sort of leverage where we get out something proportional to but greater than what we put in, which would give us an idea of how to design the kinds of feedbacks we would need to chain together into real open-endedness. Or there could be some other subtlety revealed by careful examination. And if we found that complexity in the environment doesn't actually get you anything, it would suggest that we can put this particular direction to rest and look elsewhere for the time being.

It's not enough to just pick a favorite system, throw it into a complex environment, and see what happens: if you got a null result, you wouldn't be able to say that it wasn't just a bad choice, and it wouldn't say anything in general. Instead, what I'm going to try to do here is come up with a system where I can guarantee that an increase in environmental complexity results in an increase in the complexity of the resulting evolutionary system; the hope being that if it fails, it suggests at the least that there is some reason why it wasn't possible to build, which might point at a deeper understanding. Towards this end, perhaps a place to start is to look at deep learning, where we know that large amounts of rich data do make an observable difference in the way neural networks learn. In such cases, the neural networks discover statistical and structural regularities in the data that begin to let them actually generalize to widely different problems: a network pre-trained to recognize what sport is being played in a video is actually faster at learning to recognize objects than one trained from scratch.

If I want to make an evolutionary analogue of this, I think the important thing is to recognize that once a neural network correctly classifies a given piece of data, it can be shown that data again without having its weights change. So it naturally insulates the parts of it that have already learned things from the expanding fronts where there are still new things to learn. An ecological version of this is the concept of niches: if I have some organisms adapted to one niche, they don't compete with what's going on in a different niche.

With a chain of predictors, you can figure out what to measure to predict a particular target.

So what if we treat each part of a set of real data as corresponding to a separate niche? In that case, we'd at least find that as the data becomes more complex, the number of niches increases, and so the complexity of the resultant ecology would increase. That's very schematic, but it's enough to start considering how to actually design such a system.

If I have different independently provisioned and limited food sources, such that an organism must decide which food source to eat from, then that immediately gives me a number of independent niches equal to the number of food sources. So given some data set, I can treat each measured feature as a food source, and then allow the organisms to try to predict that feature using some other features. The result would be that each feature is like a separate evolutionary simulation in total isolation from the others, where it's just trying to find the best way to predict that one target. However, for this to really qualify as complexity and not just 'large amounts of stuff', I want these things to all interact with each other somehow.

The way I'll try to do that is to make it so that any organism which reads a certain feature to use in its predictions must pay some resources into that feature's bin. So the population capacity of a given feature is based on how useful that feature ends up being for predicting other things about the data down the line: features which are highly predictive will end up being worth more if you can in turn predict them. The result would conceivably be some kind of explanatory network, where for any given feature you can determine different ways to infer that feature given other sparse measurements. For example, if the network discovered that features A and B predict feature C, and D and E predict B, then you can figure out that if you knew D and A and wanted to predict C, you could measure either B or E and propagate the result through the network.
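A tiny sketch of that propagation step (using the A–E example above: knowing A and D and measuring E makes B, and then C, inferable):

```python
# Predictive relationships discovered by the ecology: target <- required inputs.
predictors = {"C": {"A", "B"}, "B": {"D", "E"}}

def inferable(known, predictors):
    """Close the set of known features under the predictive relationships."""
    known, changed = set(known), True
    while changed:
        changed = False
        for target, inputs in predictors.items():
            if target not in known and inputs <= known:
                known.add(target)
                changed = True
    return known

print(inferable({"A", "D", "E"}, predictors))  # {'A', 'D', 'E', 'B', 'C'}
```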

In terms of the specific implementation, I associated every feature of the data set with a small food supply. Organisms can pull from that food supply based on how their predictions of that feature compare with their competitors', so each feature is its own ecological niche. That means that, at least for the simple one-organism-one-prediction case, we should expect to see a number of stable species more or less equal to the number of features in the data set (if organisms could generate new features, we'd have a model with niche creation, but in this simple version they cannot). When organisms have eaten a certain amount of food, they replicate. This is balanced against all organisms having a constant death rate. For the predictor, each organism has a small neural network which is trained during its lifetime. However, the inputs, outputs, and hidden layer of the neural network are all specified genetically. Mutation occurs during replication, and alters the parameters of the network.
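The food-sharing step might look something like the following sketch. This is only an illustration of the niche mechanics, not the exact implementation (which is in the source code linked below); the inverse-error payout rule is an assumption made for the example.

```python
import numpy as np

def feed(niche_supply, errors):
    """Split each feature's food among the organisms predicting it.

    errors[k, f] is organism k's prediction error on feature f
    (np.inf if it does not predict that feature at all).
    Returns the food gained by each organism this round.
    """
    quality = 1.0 / (1.0 + errors)   # lower error, bigger share of the niche
    share = quality / quality.sum(axis=0, keepdims=True)
    return (share * niche_supply).sum(axis=1)

errors = np.array([[0.1, np.inf],   # organism 0 predicts feature 0 only
                   [0.5, 0.2]])     # organism 1 predicts both features
print(feed(np.array([1.0, 1.0]), errors))  # organism 0 ~0.58, organism 1 ~1.42
```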

I haven’t gone into a full description of the model or the algorithm, and to be honest there are a lot of knobs to twist and things I had to tweak to get something without obvious ‘cheating’ behavior (for example, an organism figuring out how to use a feature to predict that same feature). The full layout would be a bit much for this blog post, but I’ve made some source code available (it uses Python, with Lasagne/Theano for the neural nets), along with example data from the survivors of the Titanic.

Discovered network of predictive relationships between features in the Titanic survivors data. Note the discovery of a trivial relationship (Embarked can only be C, Q, or S, so if it’s one of them, it’s never any of the others).

This setup does appear to work, at least in the sense that the evolutionary dynamics do increase the average predictive power of the collective population over time. You get some interesting networks that might suggest all sorts of potential relationships between features of the data, so this could also be useful for feature engineering. It’s unclear whether this is really ‘complex’ though, or if it’s just a lot of different independent things. The lack of any form of niche creation seems to severely limit how interesting the results can get. Presumably we’d want organisms to be able to use a hidden neuron to ‘summarize’ some collection of features about the data, and then perhaps have some other organism come along and make use of that hidden neuron itself, rather than the base features, in order to save on costs, creating something like a food web or a series of informational trophic levels. These new features wouldn’t have an influx of food associated with them, but could gain associated food if they end up being useful to other organisms. So the number of niches would start at the number of features, but could grow beyond it if doing so lets the population explain the data more efficiently.
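To be clear, no such niche creation exists in the current code. As a sketch of what registering a derived feature might look like (all names hypothetical): an organism publishes a hidden activation as a new column of the data, and organisms that read it pay the publisher rather than a base feature’s supply.

```python
import numpy as np

def publish_feature(X, owner_id, hidden_fn, derived_owners, read_cost=0.1):
    """Append a derived feature column computed by `hidden_fn` to the
    feature matrix X, recording its owner so that food paid by readers
    can be routed to the publishing organism instead of to a base
    feature's supply. Entirely schematic; nothing like this exists in
    the model described in this post."""
    col = np.asarray(hidden_fn(X)).reshape(-1, 1)
    X = np.hstack([X, col])
    derived_owners[X.shape[1] - 1] = (owner_id, read_cost)
    return X, derived_owners

# e.g. organism 7 exposes a tanh 'summary' of features 0 and 1
X = np.random.default_rng(3).normal(size=(10, 4))
X, owners = publish_feature(
    X, owner_id=7,
    hidden_fn=lambda X: np.tanh(X[:, 0] + X[:, 1]),
    derived_owners={})
print(X.shape, owners)   # (10, 5) {4: (7, 0.1)}
```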

This might start to get at the original question: does something like this just mirror the complexity of the data set, or is there some way to leverage that initial complexity to generate something with greater complexity (as well as greater explanatory efficiency) than before? Certainly such a thing would not become open-ended, but it might offer a good way to provide environmental complexity that actually gets used by the resulting ecology, rather than simply being integrated over or ignored.

Modeling the emergence of signal-based collective behavior
https://eon.elsi.jp/modeling-the-emergence-of-signal-based-collective-behavior/ Tue, 23 Feb 2016

Collective behavior is found everywhere in nature, from birds flocking, fish schooling, ants foraging, and fireflies synchronizing, to humans self-organizing into societies. Not only are all of these examples of entities entering a higher level of organization, but, most interestingly, they reach this state without any leader or central control. Instead, emergent swarming patterns are based on simple, local, individual decision making. Identifying the minimal features of biologically inspired interacting agents that lead to the emergence of such behavior is fundamental to our understanding of collective behavior, across disciplines ranging from physics and chemistry to biology and artificial intelligence.

The literature of the last 20 years is saturated with mathematical models of swarming behavior that rely on a third-person controller following static rules of interaction, or on fixed leaders imposed within the group. Few people seem to take a minimalistic modelling approach, simulating what emerges from agents exchanging nothing but simple signals with each other. I decided we could do better, and coded up a computer simulation of agents navigating a 3D world, guided by artificial neural networks. In the virtual environment, each agent must find a resource it cannot detect directly in order to survive. Instead, each agent senses the signals produced by other agents in its neighborhood, processes them through its own artificial neural network, and can modify its velocity and its own signal accordingly.
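The model’s actual details are in the paper linked below; purely as a schematic of the sense, process, act loop described here, a toy version with random (rather than evolved) weights and made-up sizes might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy loop: sense neighbors' signals, run them through a small network
# (random here, evolved in the actual model), and output a velocity
# change plus a new signal. All sizes are illustrative.
N, DIM, RADIUS, DT = 50, 3, 2.0, 0.1
pos = rng.uniform(-5, 5, size=(N, DIM))
vel = rng.normal(scale=0.1, size=(N, DIM))
sig = rng.uniform(size=N)
W1 = rng.normal(scale=0.5, size=(2, 8))        # in: [mean signal, density]
W2 = rng.normal(scale=0.5, size=(8, DIM + 1))  # out: [dv (3), new signal]

def step(pos, vel, sig):
    new_sig = np.empty_like(sig)
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < RADIUS) & (d > 0)          # exclude self
        mean_sig = sig[nbrs].mean() if nbrs.any() else 0.0
        x = np.array([mean_sig, nbrs.sum() / N])
        out = np.tanh(np.tanh(x @ W1) @ W2)
        vel[i] = vel[i] + DT * out[:DIM]       # steer
        new_sig[i] = 0.5 * (out[DIM] + 1.0)    # emit a signal in [0, 1]
    return pos + DT * vel, vel, new_sig

for _ in range(100):
    pos, vel, sig = step(pos, vel, sig)
```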

After a few dozen generations, the simulation produces agents which collectively move together like a natural swarm. But the most surprising result was how these dynamics allowed agents to be more efficient at reaching and sticking to resource areas, while still being unable to detect them directly. Once the collective behavior emerges and spreads to the whole population, it naturally leads to genetic drift in the agents’ genotypes. I would like to suggest that this model presents dynamics favorable to the study of relaxed selection, which would be a natural next step for the simulations to develop into. This piece of work on emergent collective strategies for uninformed search in complex systems is an example of a simulation-based approach, paving the way for future research on the origins of signaling and efficient collective patterns in evolution.

Computational modeling offers the significant advantage that the information flows between individuals (in an information-theoretic sense) can be computed exhaustively within the simulation. Such measures can lead to compelling insights into which properties drive the transition to the next level of organization: the emergence of a cognitive swarm or, more generally, the transition from a population of solitary individuals to a collective society.
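One standard choice for such a measure is transfer entropy between agents’ signal time series. A crude plug-in estimator (quantile binning, lag 1) is sketched below; this is illustrative, not necessarily the measure used in the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate (in bits) of lag-1 transfer entropy x -> y,
    after quantile-binning both series. Crude but self-contained."""
    def discretize(s):
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(s, edges)
    xd, yd = discretize(x), discretize(y)
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]
    n = len(y1)
    c_yyx = Counter(zip(y1, y0, x0))   # joint counts over (y_t+1, y_t, x_t)
    c_yy = Counter(zip(y1, y0))
    c_yx = Counter(zip(y0, x0))
    c_y = Counter(y0)
    te = 0.0
    for (a, b, c), k in c_yyx.items():
        # p(y1|y0,x0) / p(y1|y0) expressed with raw counts
        te += (k / n) * np.log2(k * c_y[b] / (c_yy[a, b] * c_yx[b, c]))
    return te

# e.g. a source series and a lagged noisy copy: flow should be s0 -> s1
rng = np.random.default_rng(4)
s0 = rng.normal(size=500)
s1 = np.roll(s0, 1) + 0.5 * rng.normal(size=500)
print(transfer_entropy(s0, s1), transfer_entropy(s1, s0))
```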


Link to the publication: http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0152756

A short history & future of MOL
https://eon.elsi.jp/a-short-history-future-of-mol/ Fri, 19 Feb 2016

Three months ago, many of us attended the conference Re-Conceptualizing the Origin of Life at the Carnegie Institution for Science in Washington, DC. This meeting was the culmination of a grassroots movement called MOL, short for Modeling Origins of Life, which started a couple of years earlier with a white paper based on the first few MOL workshops. It is our hope that the enthusiasm generated at the conference will continue, and lead to vigorous discussions here on the EON web site. I want to thank Nathaniel Virgo for setting up the web site, and Sara Walker not only for chairing the Scientific Organizing Committee for the conference, but also for taking the lead in getting our discussions here started.

Our main goal, beyond the discussions here in cyberspace, is to organize “working workshops” of the kind that we held in early 2014 at the Institute for Advanced Study in Princeton (photo below), and in the summer of 2014 in Japan, where we got together with a couple of dozen people, first at ELSI in Tokyo and later in Kobe at the Center for Planetary Science, for a total of five weeks. In addition, if anyone would like to organize local meetings or get-togethers of any kind, please go ahead, and make sure to announce them here.

The first MOL workshop, in March 2014.
Seeking New Insight into Life’s Origin
https://eon.elsi.jp/seeking-new-insight-into-lifes-origin/ Thu, 14 Jan 2016

Carnegie Meeting

A really nice article titled “Seeking New Insight into Life’s Origin” by Johnny Bontemps was posted on the NASA Astrobiology website, covering the November conference organized by MOL. The article is available here: https://astrobiology.nasa.gov/news/seeking-new-insight-into-lifes-origin/. I loved its closing sentence: “For the solution to emerge, in the end, scientists working to solve life’s origin might well have to emulate the very process they are studying.”
