Exploring the Role and Possible Utility of Criticality in Evolutionary Games on Ising-Embodied Neural Networks
https://eon.elsi.jp/exploring-the-role-and-possible-utility-of-criticality-in-evolutionary-games-on-ising-embodied-neural-networks/ (Sat, 02 Jun 2018)

At the heart of the scientific process is the yearning for universal characteristics that transcend historical contingency and generalize the unique patterns observed in nature. In many cases these universal characteristics bleed across the different branches of the tree of knowledge and almost serendipitously relate disparate systems to one another. In the last century, a growing awareness of one such universality has sparked an obsession with the ideas behind criticality and phase transitions. Criticality pops up quite often in the dynamics of life (and complexity), and more recently in our understanding of the organization of brains and societies. What is becoming clear about criticality is that it isn’t just one possible mode of organization; it seems to be an attractor in mode-space. For whatever reason, criticality, self-organized criticality, and evolution are deeply intertwined, and the goal of this project is to explore this intersection of ideas.

The present post was written by Sina Khajehabdollahi, and is the product of a collaboration with Olaf Witkowski, funded partly by the ELSI Origins Network. This research will be published in the Proceedings of the 2018 Conference on Artificial Life (ALIFE 2018), which will take place in Tokyo, Japan, July 23-27, 2018.

Criticality and Phase Transitions

Modes of Matter and their Collective Organizational Structures

Our understanding of the phases of matter starts from ancient beginnings and has its origins rooted in the phenomenological experience of the world rather than scientific inquiry. Earth, Water, Air, Fire, Aether: from these everything is made, or so it was (and sometimes still is) thought. We have since moved on to more ‘fundamental’ concepts like quarks and electrons, waves and plasmas; modes of matter and the ways that they can organize.

Is this a recursive process? Is ‘matter’ simply a sufficiently self-sufficient ‘mode of organization’? The brain as a phase of neural matter? The human as a phase of multi-cell life, multi-cell life as a phase of single-cell life, single-cell life a phase of molecular matter, etc.

Matter: A self-sufficient mode of organization with ‘reasonably’ defined boundaries.

The phases of nature are most apparent when they are changing, for example when observing the condensation of vapor, the freezing of water, or the sublimation and melting of ice. Emphasis must be put on the notion that phase transitions generally describe sudden changes and reorganization with respect to ‘not-so-sudden’ changes in some other variable. In other words, phase transitions tend to describe how small quantitative changes can sometimes lead to large qualitative ones.

Associated with phase transitions are critical points (or perhaps critical regimes, in the case of biological systems), where these critical points are analogous to a Goldilocks zone between two opposing ‘ways of organization’. In an abstract/general sense, a critical point is the point where order and disorder meet, though there may certainly be transitions between more than two modes of order. By tuning some choice of ‘control parameter’ and measuring your ‘order parameter’, a phase diagram can be plotted. In the case of Figure 1, this is done for the control parameters temperature and pressure in relatively ‘simple’ statistical physics systems.

Figure 1 Typical Phase Diagram of Matter and Transition Points

However, such a plot is not readily available when discussing complex systems like humans or human society and its idiosyncratic modes of interaction, partly for lack of computational power (which is constantly improving), but more fundamentally because of our limited understanding of complex/chaotic systems and the mathematics that describes them.

Finally, the importance of criticality arises from this ‘best-of-both-worlds’ scenario, given that the ‘both worlds’ are distinct and useful modes of order/disorder. For example, in discussions of the human brain the dichotomy is often between segregation and integration; the brain is a modular, hierarchical structure of interconnected neurons which can take advantage of ‘division-of-labor’ strategies while also being capable of integrating the divided labor into a larger whole. There seems to be an appropriate balance of these forces: between efficiency and redundancy, between determinism and randomness, between excitation and inhibition, and so on. The most appropriate ‘order parameter’ is not obvious, and there are many possible choices, along with many more choices of ‘control parameter’.

We need a sufficiently simple model to play with, and to that end we look to the Ising model.

Ising Model

Phenomenological model of phase transitions

The Ising model is probably the simplest way to model a many-body system of interacting elements. In its simplest form (at least the simplest that exhibits phase transitions), binary elements that can only take the values ±1 are organized onto a 2D lattice, interacting only with their nearest neighbours. This model can serve as an analogy for solid matter or gases with local interactions, as it was originally intended, but also now more generally for modelling neurons in the brain, or humans and agents in socio-economic contexts.

Figure 2 Visualization of a 2D Ising Model at Criticality

The success of this simple model lay in its ability to exhibit a phase transition by changing only one control parameter, the temperature of the heat bath. As the temperature increases, larger and larger energetic fluctuations are allowed, where the energy of the system in a given configuration is:

$$ E = - \sum_{\left \langle i, j \right \rangle} J_{i,j} s_i s_j - \sum_i h_i s_i $$

where the summation \( \left \langle i, j \right \rangle \) runs over all nearest-neighbour pairs, \( J_{i,j} \) is the interaction strength between nodes \(i\) and \(j\), and \(h_i\) is the locally applied field/bias acting on spin \(s_i\). The model tends to minimize its energy by favouring ‘spin-flips’ that lower \(E\). Unfavourable spin-flips are allowed with a probability given by a Boltzmann factor:

$$ p \sim e^{-\frac{\Delta E}{T}} $$

As the control parameter temperature is varied, the system acts more or less randomly. A critical temperature exists at which the qualitative organization of the model changes discontinuously. At the critical point, the system exhibits long-range correlations and maximizes information transfer between nodes, properties deemed extremely useful, if not necessary, for intelligent systems.

Figure 3 Statistical measures of an N=83 Ising model with a mean-zero, normally distributed connectivity matrix. Note the critical point near β=0.2, seen in the discontinuity in the magnetic susceptibility as well as peaks/qualitative changes in the other measures.

Figure 3 visualizes some basic statistical measures of an example Ising system as a function of \( \beta = 1/T \), simulated using Monte Carlo Metropolis methods. In this case, instead of a 2D lattice, a fully-connected graph with normally distributed (mean-zero) random edge weights is generated. The transition point of this model is most easily discerned in the discontinuity of the magnetic susceptibility plot, though the transition exhibits itself quite noticeably in the other variables as well.
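To make the simulation loop concrete, here is a minimal Metropolis sketch for a fully-connected Ising system with mean-zero Gaussian couplings, in the spirit of the model behind Figure 3. The system size, sweep count, and coupling scale are illustrative assumptions, not the exact settings used in our experiments.

```python
import numpy as np

def metropolis_ising(N=83, beta=0.2, sweeps=2000, seed=0):
    """Metropolis sampling of a fully-connected Ising system with
    mean-zero Gaussian couplings. Returns magnetisation samples."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    J = (J + J.T) / 2.0            # symmetric couplings
    np.fill_diagonal(J, 0.0)       # no self-interaction
    s = rng.choice([-1, 1], size=N)

    mags = []
    for sweep in range(sweeps):
        for _ in range(N):         # one sweep = N attempted flips
            i = rng.integers(N)
            dE = 2.0 * s[i] * (J[i] @ s)   # energy change if s[i] flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        if sweep >= sweeps // 2:   # keep samples only after burn-in
            mags.append(s.mean())
    return np.array(mags)

# Susceptibility at a given beta, from magnetisation fluctuations
# (sweep beta over a range to reproduce curves like Figure 3):
# m = metropolis_ising(N=83, beta=b)
# chi = b * 83 * (np.mean(m**2) - np.mean(np.abs(m))**2)
```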

Ising-embodied Neural Networks

It’s Alive! Making the Ising model do things!

Now, the point of this project wasn’t simply to play with any old Ising models; the goal was to embed these Ising models in a context and bring them to life, so that we can see how/if criticality may play a role in adaptation and survival. To that end, we introduce the Ising organism, an object class that has its own unique connectivity matrix and acts as an independent neural-network organism, as visualized in Figure 4.

Figure 4 Basic architecture of each organism’s Ising-embodied neural network.

The sensor neurons for these organisms are sensitive to the angle to the closest food, the distance to the closest food, and a directional proximity sensor with respect to other organisms. These sensor (input) neurons are connected to a layer of hidden neurons, which are in turn connected both to themselves and to motor (output) neurons. The motor neurons control linear and rotational acceleration/deceleration, effectively the gas/brakes and steering wheel of a car.

A community of 50 organisms with this architecture (but each with its own set of unique weights) is generated and placed in a 2D environment that spawns food (Figure 5). A generation is defined as an arbitrary span of time (for example, 4000 frames) during which all organisms can explore their environment by moving around. When an organism (green circles) gets close enough to a parcel of food (purple dots), the food disappears and the organism’s score is incremented. Food parcels respawn instantly upon being eaten, so there is always an infinite supply.

From here, we can begin to explore ways in which these Ising-embodied organisms can adapt to their environment.

Figure 5 Sample frame of a community of 50 Ising-embodied neural network organisms.

Critical Learning

Strategies for approaching criticality

Initially, this project was motivated by “Criticality as It Could Be: organizational invariance as self-organized criticality in embodied agents”, a project by Miguel Aguilera and Manuel G. Bedia. In their work they demonstrate that criticality can be learned in an arbitrary Ising-embodied neural network simply by applying a gradient descent rule to the edge weights of the network. The gradient descent rule attempts to learn a distribution of correlation values assigned to the network a priori (Figure 6), and governs the adaptation of the organism:

$$ h_i \leftarrow h_i + \mu (m_i^* - m_i^m) \\ J_{ij} \leftarrow J_{ij} + \mu (c_{ij}^* - c_{ij}^m) $$

The local fields and interaction strengths (edge weights) are updated with respect to the actual mean activations \( m_i^m \) and the reference activations \( m_i^* \) from the known critical system, and similarly for the actual correlation values \( c_{i,j}^m \) versus the reference correlation values \( c_{i,j}^* \) from the known critical system. \( \mu = 0.01 \) is the learning rate. We introduce a concept analogous to the generation: the semester, an arbitrary number of time points (again, 4000 frames, for example) during which the organisms generate correlation/mean-activation statistics, at the end of which the gradient descent rule is applied. In short, the gradient descent rule is applied once per semester.
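As a sketch, the per-semester update might look as follows; the array names and the way statistics are accumulated are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def semester_statistics(spin_history):
    """Accumulate the measured statistics over one semester.
    spin_history: (T, N) array of spin states, one row per frame."""
    m = spin_history.mean(axis=0)                            # m_i^m
    c = (spin_history.T @ spin_history) / len(spin_history)  # c_ij^m
    return m, c

def critical_learning_step(h, J, m_ref, c_ref, m_meas, c_meas, mu=0.01):
    """Apply the gradient descent rule once per semester, nudging the
    local fields and couplings towards the reference (critical) statistics."""
    h = h + mu * (m_ref - m_meas)
    J = J + mu * (c_ref - c_meas)
    return h, J
```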

By learning the correlation distributions of a known critical, Ising system, an arbitrary network could also learn to be critical without fine-tuning any control parameter like temperature.

Figure 6 Left: The connectivity matrix used to generate critical Ising correlations. This matrix is taken from the Human Connectome Project and is a rough map of brain interconnectivity. Right: The correlation distribution of the connectivity matrix pictured.

Aguilera et al. embody these networks in simple games (a car in a valley, double-pendulum balancing) and demonstrate that, once the systems converge towards criticality, the agents maximize their exploration of phase space and position themselves at the precipice of behavioural modes. However, whatever success these agents demonstrated in critical learning, exploration and playfulness, they lacked the ability to actually play these games to win. Arguably, these goals may ultimately be orthogonal; however, in trying to contextualize criticality within life-like systems we must push forward and see if we can apply critical learning and evolutionary selection simultaneously.

Genetic Algorithm and Evolutionary Selection

Playing games, competing for high scores, mating, mutating and duplicating

In parallel with the critical learning algorithm, we introduce a genetic algorithm whose goal is simply to reward those organisms that have eaten the most food. Using a combination of elitist selection and mating, successful organisms are rewarded with duplication and the chance to mate and generate offspring. Mutations accompany this process and allow for stochastic exploration of the organisms’ genotype space.

Using the previously defined concept of a generation, the genetic algorithm is applied once at the end of each generation.
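A rough sketch of what one application of this genetic algorithm could look like; the elite fraction, uniform crossover, and Gaussian mutation scale below are illustrative guesses, since the post does not specify them.

```python
import numpy as np

def next_generation(genomes, fitness, elite_frac=0.2, mut_sigma=0.02, rng=None):
    """Elitist selection plus mating and mutation, applied once per generation.

    genomes: (P, G) array, one flattened connectivity matrix per organism
    fitness: (P,)   food eaten by each organism this generation
    """
    rng = rng or np.random.default_rng()
    P = len(genomes)
    order = np.argsort(fitness)[::-1]           # best foragers first
    n_elite = max(1, int(elite_frac * P))
    elites = genomes[order[:n_elite]]

    children = [g.copy() for g in elites]       # elites survive unchanged
    while len(children) < P:
        pa, pb = elites[rng.integers(n_elite, size=2)]
        mask = rng.random(pa.shape) < 0.5       # uniform crossover
        child = np.where(mask, pa, pb)
        child += rng.normal(0.0, mut_sigma, size=child.shape)  # mutation
        children.append(child)
    return np.stack(children)
```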

Visualizing Community Evolution

Observing the evolution/learning process

So we now have a community of Ising-embodied organisms playing a foraging game in a simple, shared 2D environment, in which we introduce the concepts of generations (the time span over which the genetic algorithm operates) and semesters (the time span over which the critical learning algorithm operates).

Let’s look at these two algorithms separately, and then together. When combining the two algorithms, however, we face a dilemma: each adaptation algorithm tends to destroy the learning done by the other. In other words, the genetic algorithm does not conserve what is learned by the critical learning algorithm, and vice versa. Perhaps, though, if we combine these two algorithms at some appropriate ratio of time scales, we can achieve an adaptation scheme that is relatively continuous and non-destructive to its previous adaptations. The choice of a ratio of 6 semesters to 1 generation is made so that the learning algorithm has time to kick in before the GA can select successful foragers.

First, let’s look at how the genotypes evolve across the generations. We use tSNE projections in Figure 7 to visualize the relationships between the connectivity matrices of the different organisms. As a general dimensionality-reduction technique, tSNE projections help us visualize the groupings of high-dimensional data, in this case all the edge weights in each individual neural network. We then cluster these networks across all generations to observe how they adapt through the GA, through critical learning, or through both methods combined.
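A minimal sketch of this projection step using scikit-learn’s TSNE; the data layout and variable names are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_genotypes(genomes_by_gen, perplexity=30):
    """genomes_by_gen: list over generations, each an (organisms, weights)
    array (hypothetical layout; flatten each organism's connectivity
    matrix to one row). Returns 2D coordinates plus generation labels."""
    X = np.vstack(genomes_by_gen)            # all organisms, all generations
    gen_labels = np.concatenate(
        [np.full(len(g), i) for i, g in enumerate(genomes_by_gen)]
    )
    xy = TSNE(n_components=2, perplexity=perplexity).fit_transform(X)
    return xy, gen_labels                    # colour points by gen_labels
```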

Figure 7 tSNE Projections of the Connectivity Matrix of all 50 organisms across 2000 generations. Perplexity = 30. The color coding corresponds to the generation age and the marker size corresponds to the fitness level.

It’s also useful to know how the fitness of these organisms changes across the generations, so we plot that in Figure 8 to get a feel for how well each algorithm did at playing our game.

Figure 8 Average fitness across 2000 generations. Solid lines are community averages, dark shaded areas represent bounds of one standard deviation, and light shaded areas show the minimum and maximum bounds.

Finally, to compare our simulations to critical systems, we measure the specific heat of each organism as a function of β. If the evolution algorithm works as intended, then we should see the peaks of the specific heat converge to β = 1, the β at which the simulation is natively run.

Figure 9 Specific Heat vs. β for the 3 different adaptation paradigms.
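For reference, the specific heat at each β can be estimated from energy fluctuations via the standard relation \( C = \beta^2 (\langle E^2 \rangle - \langle E \rangle^2) \); a minimal sketch, assuming energy samples are collected by a Metropolis sampler like the earlier snippet:

```python
import numpy as np

def specific_heat(energy_samples, beta):
    """C = beta^2 * Var(E), from Monte Carlo energy samples taken at
    inverse temperature beta; the peak of C over a beta sweep marks
    the transition, as in Figure 9."""
    return beta**2 * np.var(np.asarray(energy_samples))
```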


From the fitness curves in Figure 8 it is clear that the GA by itself is the most successful at increasing the fitness of the organisms. This is not altogether surprising, as we expected the two algorithms to undo each other’s work if not made appropriately compatible. Looking at the tSNE projections of the GA-only paradigm, we note that its evolution is marked by major individual milestones (the large clusters), where successful organisms duplicate and seed the next generation of mutated organisms. However, even this strategy plateaus quite rapidly, within approximately 250 generations. The specific heat of this paradigm both starts and ends centered near β = 1, so it is not clear whether the slight shift in the curves is within error bounds or a real pattern; we would have to run many iterations of this experiment to know for sure. Another helpful experiment would start the community of organisms farther away from criticality (for example, by changing the local temperatures (betas) of each organism); then we might be able to see more clearly whether evolution within this foraging game drives the system towards criticality.

For the critical-learning-only paradigm, the tSNE projections are straightforward. Here the learning is not contingent on the fitness of the organisms at all, and therefore all learning is done ‘locally’. No organisms die (because there is no GA), and so there is continuity in the genotypes. This paradigm seems to sharpen the specific heat plots near β = 1, making for a more dramatic/steeper transition; this contrasts with the evolution algorithm, which did the opposite. The fitness of this paradigm is quite awful, again as expected, since this algorithm is not learning how to play the foraging game; it is only intended to learn ‘criticality’.

Finally, when the two paradigms are combined (at a ratio of 6 semesters per generation), a slightly more complicated relationship forms. In the tSNE projections we observe stronger continuity between the generations. Perhaps the critical learning algorithm here is pushing all organisms in a universal direction, which clusters their genotypes closer together than in the evolution-only paradigm. Unfortunately, however, this pattern does not result in any interesting increase in the fitness of the community, which instead behaves erratically or stochastically compared to the other two paradigms. In a sense this gives hope relative to the critical-learning-only paradigm, since the erratic behaviour pushes fitness higher rather than lower. Finally, no deep insight is visible in the specific heat plots of this paradigm either, as our initial condition and the generation-2000 results overlap, showing no indication of a trend away from criticality. The lesson of this experiment is that we should have ensured the system does not start out critical!

It seems our fears have come true in a sense: the two algorithms run at cross purposes, each undoing the work of the other. Furthermore, our experiments had the unfortunate feature that their initial conditions were very close to criticality to begin with, which made it difficult to draw clear conclusions. Our preliminary results do, however, seem to indicate that the evolution algorithm and the critical learning algorithm work in opposing directions: the GA tends to pull the system towards sub-criticality, whereas the critical learning algorithm pulls it towards super-criticality (pushing the peaks of the specific heat curves either left or right in Figure 9). There needs to be a way for both of these forces to contribute to the connectivity of the network without undoing each other’s previous learning.

Summary

TL;DR

This project aimed to incorporate the ideas of criticality into biological, evolutionary simulations of Ising-embodied neural networks playing a foraging game. We ran three experiments testing two different adaptation algorithms (critical learning and the GA), as well as their combination at a ratio of 6 learning semesters to 1 GA generation. Our experimental results highlighted some of the shortcomings of our setup, namely that our initial conditions were too close to criticality to distinguish any flows in parameter space. In the next experiment, communities will spawn with varying distributions of local temperatures, which will force the specific heat peaks away from β = 1. This way, as the community evolves, it will be much clearer whether or not there is a flow towards β = 1; such a flow could indicate the utility of criticality. Nonetheless, in this process a set of tools was constructed for future experiments to further explore the interaction and self-organization of communities towards criticality, contextualized within a GA paradigm.

Finally, it was observed that critical learning and the GA tend to work in opposite directions, which can be a good thing, as sometimes half the battle is finding the right forms of ‘order’ and ‘disorder’. However, we were not able to reconcile these differences, and instead the two algorithms tended to deconstruct each other at every turn. Future experiments in this paradigm could introduce two layers of connectivity, one for the GA and another for learning, which overlap but evolve semi-independently. This reconciliation may allow for a more stable form of interaction and a more diverse set of behaviours and genotypes. The idea here is that, through the playful/exploratory nature of criticality, plateaus in the evolutionary process can be overcome faster, thereby decreasing the time needed per generation to improve the community’s fitness.

Mineral Surface-Templated Self-Assembling Systems: Case Studies from Nanoscience and Surface Science towards Origins of Life Research
https://eon.elsi.jp/mineral-surface-templated-self-assembling-systems-case-studies-from-nanoscience-and-surface-science-towards-origins-of-life-research/ (Tue, 08 May 2018)

Origins of life and astrobiology research has often utilized analytical tools and scientific knowledge from other research fields, including chemistry, biology, geology, planetary science, physics, and many more. For this reason, much of the advancement within our research field has been built upon the technological and theoretical advancement of other fields. Recently, the fields of nanoscience and surface science have made significant technical and theoretical progress. Additionally, many of the lessons learned from self-assembling systems studied in nanoscience (and sometimes the systems themselves) are quite relevant to advancing the knowledge of our own origins. For example, recent work on self-assembling short peptide systems provides a simple, prebiotically plausible molecule that can give rise to very complex architectures without the need to form high-energy peptide bonds.


In the following review article by EON postdoctoral fellow Richard J. Gillams and ELSI member Tony Z. Jia, the authors present various case studies from nanoscience and surface science. Each case study involves a self-assembled system templated by a mineral surface, one of the most common interfaces available in the cosmos, and also likely a very important participant in the initial development of early life on Earth. These self-assembling systems templated by mineral surfaces, which include nucleic acid secondary structures, peptide nanofibrils, and simple ordered organic monolayers, may also have contributed to the emergence of functional, structural, and chemical diversity in an early Earth environment. Such self-assembling systems are also known to play a role in biomineralization, a mechanism by which organisms produce minerals (such as teeth, bones, and shells) for their own use.


We envision a model by which mineral surfaces on the primitive Earth catalyzed the synthesis and/or adsorption of simple biomolecules (such as nucleotides), the polymerization of monomers into polymers (such as peptides or nucleic acids), and the eventual self-assembly of these molecules into supramolecular structures (such as peptide amyloids). These supramolecular self-assemblies could then have catalyzed the formation of new mineral surfaces, resulting in a circular autocatalytic cycle. The geology of our planet was directly affected by various non-living chemical processes while the Earth-life system was moving beyond primitive chemistries and into a living system early in its history, resulting in an interactive, symbiotic feedback loop in which geology and early pre-living systems were closely linked to one another even before life’s origin. We hope that through these various case studies of mineral-templated self-assemblies, researchers in the origins of life and astrobiology fields will see the merit of incorporating ideas from the fields of nanoscience and surface science into their own research.

Synergistic cyclical model of mineral-templated self-assembling systems promoting mineralization.

Paper title: Mineral Surface-Templated Self-Assembling Systems: Case Studies from Nanoscience and Surface Science towards Origins of Life Research
Authors: Richard J. Gillams and Tony Z. Jia
Journal: Life 2018, 8(2), 10; doi:10.3390/life8020010

Information and regulation at the origins of life
https://eon.elsi.jp/information-and-regulation-at-the-origins-of-life/ (Tue, 13 Feb 2018)

An ongoing debate in artificial life involves the definition and characterisation of living systems. This definition is often taken for granted in some fields, listing either a set of functions (e.g. reproduction, self-maintenance, metabolism) or a set of properties (e.g. DNA) that attempt to fully describe living systems. It is however common nowadays to question most of these attempts, finding possible counterexamples to them (e.g. mules don’t reproduce, and some viruses, if alive at all, do not contain any DNA), and showing how elusive the definition of life really is. Recent efforts in addressing the origins of life propose including information theory in order to describe the blurry line between non-living and living architectures.

My work here at ELSI is largely based on this last idea, with a strong drive to include concepts derived from dynamical systems and control theory alongside information-theoretical measures. Control theory is the study of regulatory processes, mainly deployed in engineering systems; the simplest examples are a thermostat, built for instance to regulate the temperature inside a building, or a cruise controller in a car, maintaining a constant speed despite changes in the environment like wind or the slope of the road. Biological systems show an extraordinary ability to regulate their internal states, from temperature to pH to different chemical levels. These processes are usually addressed under the umbrella term “homeostasis”, representing the ability of living systems to finely tune the conditions for their persistence. (“Homeorhesis” might be more correct in this context: homeostasis refers to static equilibria, while homeorhesis refers to stable trajectories. For simplicity we will however use the perhaps more familiar term, homeostasis.)


Is there a way to use this remarkable, and in some ways maybe unique, adaptation to formally characterise the origins of life? Homeostasis is most definitely a requirement for living systems, but is it a sufficient condition? Probably not. Replication and other crucial functions of biological systems are, in my opinion, not easily described in terms of regulation alone, and artificial systems can clearly be built to show some similar properties (e.g. the thermostat mentioned previously). I believe it is however vital to consider the role of regulation at the origins of life, as perhaps suggested in the formulation of autopoiesis and related theories.

In order to define homeostasis more rigorously, I explicitly refer to frameworks of control theory and to one of their most influential applications to the study of the natural sciences in the last century: cybernetics. In particular, a few key results: the “law of requisite variety”, later incorporated into the “good regulator theorem” (GRT), and the “internal model principle” (IMP). All of these results point at a common concept for control and homeostasis: regulation implies the presence of a predictive model within the system being regulated. In other words, to maintain certain properties within bounds (i.e. to be a “good regulator”), a system must be able to generate an output (control signal) capable of counteracting the effects of the input disturbances that may affect the system itself. For instance, consider a thermostat trying to regulate the temperature of a room to around 20°C. If the temperature is 12°C, the thermostat must be able to increase the temperature by 8°C; if it can’t, then it won’t be a good regulator for this system. While this last statement may sound trivially true, it really isn’t, since it allows us to say that predictive (or generative) models are present in a system of interest (e.g. one could write down a model of a good thermostat containing the information necessary to tune the temperature).

The metaphorical relationship between a thermostat and homeostasis in living systems.
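To make the internal-model idea concrete, here is a toy sketch of a thermostat whose control law embeds an estimate of the disturbance it must counteract. The gains and the disturbance model are illustrative assumptions, not part of the cited theorems.

```python
def thermostat_step(temperature, setpoint=20.0, heat_loss_per_step=0.5):
    """One regulation step. The controller's 'internal model' here is its
    estimate of how much heat the room loses each step: to hold the
    setpoint it must output enough heating to cancel that disturbance,
    plus a feedback correction for any remaining error."""
    error = setpoint - temperature
    heating = heat_loss_per_step + 0.8 * error   # feedforward + feedback
    return temperature + heating - heat_loss_per_step

temp = 12.0
for _ in range(10):
    temp = thermostat_step(temp)   # converges towards the 20.0 setpoint
```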


Living systems could metaphorically be seen as very complicated thermostats. They respond to most disturbances while avoiding decay, and they regulate their temperature alongside several other variables, including pH, oxygen intake and various chemical levels. Models of the origins of life should, in my opinion, be able to characterise the abundance of regulatory mechanisms in living systems starting from simpler chemical reactions. Here at EON/ELSI I began investigating models of reaction-diffusion systems. These models try to capture the spatial and temporal changes in the concentration of one or more interacting chemicals. The one I focused on, the Gray-Scott system, is one of the most studied autocatalytic models (an autocatalytic model is one where the product of a chemical reaction is also a catalyst for the same reaction). This model has previously been proposed as a testing ground for theories of the origins of life, as shown in work by Nathaniel Virgo (now here at EON/ELSI) and colleagues.

Gray-Scott model of a reaction diffusion system. P is taken to be an inert product.
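For concreteness, a minimal sketch of the standard Gray-Scott update on a 2D grid is given below; the diffusion rates and the feed/kill parameters are common textbook values, not necessarily those used in these simulations.

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott system:
        dU/dt = Du * lap(U) - U*V^2 + F*(1 - U)
        dV/dt = Dv * lap(V) + U*V^2 - (F + k)*V
    V decays into the inert product P at rate k, so P never feeds back."""
    def lap(Z):  # 5-point Laplacian with periodic boundaries
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    uvv = U * V * V
    U = U + dt * (Du * lap(U) - uvv + F * (1 - U))
    V = V + dt * (Dv * lap(V) + uvv - (F + k) * V)
    return U, V

# Typical setup: U = 1 everywhere, then 'drop' some V in the middle of
# the grid and iterate; different (F, k) pairs yield different patterns.
```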

The Gray-Scott system shows a wide variety of patterns emerging from the reaction of as few as two chemicals. Different parameters in the model allow different behaviours to emerge: moving patterns as well as ones that don’t move, or blobs that divide in a mitosis-like fashion (for an example, refer to this video). In my simulations I focused on two specific patterns: a moving one commonly referred to as the “u-skate”, given its u-shape and the fact that it moves in a straight line if left in isolation, and a stable one also known as a type of “soliton” (a non-moving pattern that doesn’t divide/replicate). Work in this area usually focuses on the emergence of complex behaviour via the interactions of several patterns; my focus here is however on the analysis of the properties of a single one. For simplicity, then, I set up the initial conditions (i.e. dropping a specific quantity of a chemical) necessary for the formation of only one shape per simulation. The goal of these simulations was to identify significant changes in some information quantities between the pattern of interest and its environment. Part of my intention was also to avoid pre-specifying conditions and rules for recognising the pattern itself, looking instead for ways in which information measures alone could define whether a shape has formed and whether information is stored within it in a meaningful way. During this study I focused on two questions:

  1. Can one of these patterns show better predictions of its future states, if compared to its “environment” (i.e. chemicals not forming patterns)? Is it a relevant way of describing the formation of complex structures that could lead to life?
  2. Can one of these patterns be shown to encode information from its environment? Is there a flow of information from the environment to the pattern showing how information is aggregated in the pattern itself?
On the left side, the non-moving spot emerging after some chemical is dropped in the middle of a 50×50 units grid. On the right side, the moving pattern emerging after some chemical is dropped in the middle of a 100×100 units grid.

For the analysis, I focused on Predictive Information (PI) and Transfer Entropy (TE). The former was used to investigate whether a blob’s ability to predict its own future states is qualitatively different from that of its surroundings; in other words, is self-prediction a good indicator for the emergence of complex structures, maybe life? The latter was used to see whether a directed information exchange emerges between a blob and its environment; in other words, is there a relevant information flow emerging when chemicals organise into more robust structures?
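As a rough sketch of how PI can be estimated for a single grid cell’s time series, one can discretise the values and estimate the mutual information between past and future from joint histograms; the lag and bin counts below are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

def _joint_probs(a, b):
    """Joint probability table of two discrete integer sequences."""
    table = np.zeros((a.max() + 1, b.max() + 1))
    for x, y in zip(a, b):
        table[x, y] += 1
    return table / table.sum()

def mutual_information(a, b):
    p = _joint_probs(a, b)
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def predictive_information(series, lag=10, bins=8):
    """PI = I(past; future): mutual information between a cell's value
    now and its value `lag` steps later."""
    edges = np.histogram_bin_edges(series, bins)[1:-1]   # inner bin edges
    d = np.digitize(series, edges)
    return mutual_information(d[:-lag], d[lag:])

# TE would additionally condition on the target's own past, which is what
# makes it so costly to compute across all pairs of grid cells (see below).
```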

Preliminary results show that PI appears to correlate well with the dynamics of pattern formation, perhaps indicating how information is stored in the pattern itself during its formation. Over a very long simulation time, however, PI shows no difference between solitons and their environments, suggesting that once the chemical system has reached a stable enough state, this quantity does not meaningfully capture differences between a stable blob and its surroundings. In the case of the u-skate, I tracked the movement of the shape (thus momentarily dropping one of my goals, the automatic recognition of patterns from information measures alone) and measured PI in a moving frame, in an attempt to capture the dynamics of this pattern. These results are at the moment being analysed, with waves of PI that seem to propagate from the pattern at its formation (probably due to its movement, generated by changes in chemical gradients around the shape). In the long run, for the u-skate too, PI seems not to capture differences between the pattern and its surroundings, with similar levels of PI for different parts of the system. For both the soliton and the u-skate, the most promising results at the moment seem to emerge from a perturbation analysis that I started recently, with small amounts of chemicals dropped at quite high frequency in several areas of the grid (including on the shape itself). In this case, preliminary evidence might suggest that the shapes maintain high PI compared to the environment in spite of perturbations. My current speculation is that PI might be capturing the robustness of the shape to perturbations: the patterns are more robust, and therefore better at predicting their own future states, than chemicals not organised into patterns.

Example of the average (over 10000 time units) Predictive Information (PI) for the u-skate moving pattern. PI is measured in bits (colorbar on the right side). We restricted the measure to a portion of the initial grid (100×100 units), focusing on a 40×40 units partition tracking the moving shape. Lag for PI measure: 10 time units. The grid was perturbed with units of chemicals dropped at random locations centred around the pattern.


The analysis using Transfer Entropy is still in the works, with issues mostly due to the fact that it is not computationally feasible to measure TE between all possible combinations of time series, even on a discretised grid. We are at the moment considering ways to coarse-grain the system in a meaningful way.


To summarise my work (mostly in progress):

  1. How can we define the emergence of life from chemical systems?
  2. Can information theory in conjunction with control theory be used to quantify and explain information contents in simple chemical systems that are relevant for the origins of life?
  3. What are (if any) the relevant informational features of a living system?

In an attempt to answer these questions, I set up simulations with a model of reaction-diffusion equations of two chemicals, the Gray-Scott system. This system is known to show the emergence of many different patterns with quite diverse behaviour, and has previously been suggested as a possible test ground for theories of the origins of life (1.).

One of the processes that I would consider general to living systems is homeostasis, the ability to maintain some quantities within boundaries (e.g. temperature, oxygen level). In control theory it is well known that regulation processes (like homeostasis) require the presence of an internal (generative or predictive) model storing information about the environmental disturbances affecting the system (2.). In my first attempts to investigate this idea, I focused on two information measures that might help quantify information in simple patterns of the Gray-Scott system, information that might relate to the presence of such a predictive model. The two measures are Predictive Information (i.e. how much the past of a variable tells you about its own future) and Transfer Entropy (i.e. how much the past of one variable tells you about another variable’s future). The results are still very preliminary, but there are small hints suggesting that information measures like PI might correlate with a general notion of robustness to perturbations, which I believe to be fundamental for living systems.


About the author: Manuel Baltieri

Manuel is a PhD candidate at the University of Sussex, Brighton, UK. His research interests include questions regarding the relationships between information theory, control theory, and biological and physical systems. His research project focuses at the moment on theories of Bayesian inference applied to biology and neuroscience, such as active inference and predictive coding, and their connections to embodied theories of cognition. In his work, Manuel uses a combination of analytical and computational tools derived from probability/information theory, control theory and dynamical systems theory.

Twitter: @manuelbaltieri

Studying the origins of learning and memory using neuroevolution
https://eon.elsi.jp/studying-the-origins-of-learning-and-memory-using-neuroevolution/ (Fri, 02 Feb 2018)

Memory and learning are two of the cornerstones of our cognition. Without learning, organisms would be restricted to the behaviors described by their genetic code. This poses a problem when the environment they are born into is not entirely predictable. With learning, organisms can acquire new skills and increase their chances of survival and reproduction by fully exploiting their surroundings.

The current trend in research on learning and memory focuses on synapses. Since the discovery that experiences modify synapses, every memory present in the brain has been studied within that framework. But if we look at the origins of life and the advantages provided by learning in the everyday life of an organism, it is not implausible that other mechanisms for learning were present before the appearance of plastic synapses. For instance, an early example of learning without synapses can be seen in the slime mold Physarum polycephalum. Without having a brain, this organism can measure the duration between two stimuli, memorise it, and predict the stimulus’s next appearance. This ability originates from the complex internal dynamics of the chemicals composing the organism.

In my work, I am interested in finding out whether learning mechanisms similar to the ones found in Physarum polycephalum can exist in the brain, albeit using the interactions between neurons instead of chemicals. To that end, in my previous research I used artificial neural networks to model the brain, and tuned their parameters using evolutionary algorithms to complete tasks requiring specific learning abilities.

My research during my visit at ELSI focuses on the origins of time perception and symbolic memories. In my previous work, I evolved neural networks without synaptic plasticity that were capable of memorising symbolic information; the memory was stored inside a fixed attractor, or in its attractor basin. In another study, I applied the same methodology to the origins of time perception and discovered that memories could also be stored in different trajectories within the dynamical landscape of the network (see Figure 1). Both phenomena require learning and memory, but the dynamics that evolved in the two studies differed greatly. This made me wonder what kind of mechanism would evolve if a task required both symbolic memory and time perception to be completed. Maybe both mechanisms would exist in different neural modules, or one would evolve faster and the second would rely on the neural systems evolved for the first, or a single mechanism might implement everything. I set up an evolutionary robotics experiment to find out which of these it would be. My goal is that at the end of this experiment I will have a better understanding of how early mechanisms for learning and memory evolved, and how they might have interacted.

Figure 1: Each colored line represents one trajectory in the internal dynamics of a complex neural network when asked to measure and remember the duration of one stimulus. Five durations could be memorised (1s to 5s).
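For readers unfamiliar with this methodology, here is a minimal sketch of the kind of recurrent network whose internal dynamics can store such memories, a continuous-time recurrent neural network (CTRNN) integrated with Euler steps. The parametrisation is generic, not the specific networks evolved in these studies.

```python
import numpy as np

def ctrnn_step(y, W, tau, bias, inputs, dt=0.01):
    """One Euler step of a CTRNN: tau * dy/dt = -y + W @ sigma(y + bias) + I.
    Memories can live in the attractors (or trajectories) of these dynamics;
    a neuroevolution approach tunes W, tau and bias with an evolutionary
    algorithm rather than any synaptic learning rule."""
    activation = 1.0 / (1.0 + np.exp(-(y + bias)))   # logistic neurons
    dydt = (-y + W @ activation + inputs) / tau
    return y + dt * dydt

# Evolving the parameters: mutate a flattened (W, tau, bias) genome and
# keep the variants whose trajectories solve the memory task.
```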
Measurement of Ganymede’s tidal deformation can be an effective way to determine the presence/absence of a subsurface ocean
https://eon.elsi.jp/tidal-deformation-of-ganymede-sensitivity-of-love-numbers-on-the-interior-structure/ (Thu, 14 Jul 2016)

Our new paper has just been published in the Journal of Geophysical Research: Planets.

Tidal deformation of icy satellites provides crucial information on their subsurface structures. In this study, we investigate the parameter dependence of the tidal displacement and potential Love numbers (i.e., \(h_2\) and \(k_2\), respectively) of Ganymede. Our results indicate that Love numbers for Ganymede models without a subsurface ocean are not necessarily smaller than those with a subsurface ocean. The phase lag, however, depends primarily on the presence/absence of a subsurface ocean. Thus, determining the phase lag would be important for inferring whether Ganymede possesses a subsurface ocean based only on geodetic measurements. Our results also indicate that, if Ganymede possesses a subsurface ocean, the major control on Love numbers is the thickness of the ice shell. This result, however, does not necessarily indicate that measurement of either \(h_2\) or \(k_2\) alone is sufficient to estimate the shell thickness; while a thin shell leads to large \(h_2\) and \(k_2\) independent of other parameters, a thick shell does not necessarily lead to small \(h_2\) and \(k_2\). We found that, to reduce the uncertainty in the shell thickness, constraining \(k_2\) in addition to \(h_2\) is necessary, highlighting the importance of collaborative analyses of topography and gravity field data.


Paper title: Tidal deformation of Ganymede: Sensitivity of Love numbers on the interior structure
Authors: Shunichi Kamata, Jun Kimura, Koji Matsumoto, Francis Nimmo, Kiyoshi Kuramoto, and Noriyuki Namiki
Journal: Journal of Geophysical Research: Planets, doi:10.1002/2016JE005071, 2016.
Read the paper here

The EON Workshop on Planetary Diversity
https://eon.elsi.jp/upcoming-the-eon-workshop-on-planetary-diversity/ (Wed, 08 Jun 2016)

We have an upcoming workshop at ELSI on November 14-18, 2016, organised by Matthieu Laneuville (ELSI), Lena Noack (Royal Observatory of Belgium), Johanna Teske (Carnegie Institution) and Cayman Unterborn (Arizona State University).

Venue: ELSI-2 building, Rm 407.

The workshop aims to bring together a small group of international researchers in different fields related to planet formation, evolution and observation to discuss planetary diversity in our galaxy, as well as potential methodological pitfalls and observational constraints.

The theory of planetary formation and evolution has been built mostly from observations of our Solar System and the extensive data coverage of the Earth and its close neighbors. This is by construction very biased and it is time to reinvestigate some of the accepted knowledge in light of the new exoplanetary dataset. The apparent diversity of planetary system dynamic states has already revolutionized how we think planetary formation proceeded in our Solar System. A natural next step is to understand how this dataset can now revolutionize our understanding of how planets evolve.

There is already a wealth of new data available regarding the mass and radius distribution of exoplanets, which can be refined by the composition of the parent star. In addition, the next decade will see an increase in planetary spectral data from both the James Webb Space Telescope and the European Extremely Large Telescope. The range of possible conditions and resulting dynamic states on terrestrial planets is important to understand for the origins of life in the Universe, but also to test our current understanding of planetary evolution. However, how to meaningfully include these datasets in models of planetary diversity is still largely debated.

The goal of this workshop is to build the tools and professional relationships that will help us extract as much meaning as possible from these datasets, while remaining in the realm of predictive science.

Complex Autocatalysis in Simple Chemistries
https://eon.elsi.jp/1116-2/ (Mon, 30 May 2016)

I’ve just published a paper on my work with Takashi Ikegami and Simon McGregor on self-organisation in chemical systems and the concept of autocatalysis. Autocatalysis means chemical self-production: a set of chemical species that can collectively produce more of the same set of species. What we found was that under some circumstances, chemical systems seem to “want” to become autocatalytic. The harder you make it for the system to find an autocatalytic organisation, the more clever it is in coming up with one anyway. The reason for this has to do with thermodynamics, and the general tendency of all physical systems to seek a minimum of the free energy.

Virgo, N., Ikegami, T. and McGregor, S. (2016) Complex Autocatalysis in Simple Chemistries. Artificial Life 22(2), pp. 138-152. doi:10.1162/ARTL_a_00195

Read the paper here

“The deep sea, the origin of life, and astrobiology” – A short article appeared in the Institute Letter
https://eon.elsi.jp/the-deep-sea-the-origin-of-life-and-astrobiology-a-short-article-appeared-in-the-institute-letter/ (Tue, 17 May 2016)

Deep-sea hydrothermal vent
A deep-sea hydrothermal vent photographed through the DSV Alvin porthole by Donato Giovannelli during a dive at 2500 m

Recently a short article I wrote on Earth’s last frontier, the deep sea, appeared in the Institute for Advanced Study’s Institute Letter. In the article I briefly discuss the discovery of deep-sea hydrothermal vents, and how deep-sea exploration has changed our view of life and habitability. You can read the article at the following link: https://www.ias.edu/ideas/2016/giovannelli-last-frontier. I strongly believe that deep-sea exploration, and a better understanding of the largest ecosystem on our planet, could help us shed light on the emergence and evolution of life on our planet.

Enjoy!
