Entropic Neuronal Summation

== Context ==

These notes are part of the [[Accueil|Assothink]] project.


Natural intelligences, like the human brain, and artificial intelligences, like Assothink, involve numerous nodes exchanging signals.


Focusing on one node, it is assumed that this node receives a finite (discrete) number of input signals (inflows). The input signals may be considered and described as positive real values.


These signals determine the local excitation level of the node.


The question is purely mathematical: how do the various inflow units combine into a global inflow value?


[ Note: the mathematical notations used here are built from this [http://meta.wikimedia.org/wiki/Help:Formula help page] ]


== Principle ==


The first answer to the question above is 'summation'. Neuroscience documents do not discuss this point (as far as we know), but it is generally assumed that summation effects occur. It would be hard to prove that the ''simple'' summation effect is the correct model for the combination of inflows, but just as hard to prove that it is not. So we can only make assumptions.


In this document, another assumption is built: the ENS (Entropic Neuronal Summation).


Obviously the model is quite similar to a simple summation model, but one point attracts our attention: the inflow coming from two ''different'' sources operates ''more'' strongly than the inflow coming from one single source, even when the simple summation of the values is identical. There is no demonstration of this; it is based on introspective considerations.


But anyway, why would it be ''worse'' than the simple summation model?


The critical point mentioned above, the difference between the summation model and the ENS model, is that for the ENS we want a function <math>\Psi()</math> such that, when <math>a > 0</math>,
<center><math>\Psi(a/2,a/2) > \Psi(a)</math></center>

In other words, the <math>\Psi()</math> function has (besides some properties common with the simple summation model) some specific properties.


== Properties ==


The <math>\Psi()</math> function shares these properties with the simple summation model:
<center><math>\Psi(...) \ge 0</math></center>
<center><math>\Psi() = 0</math></center>
<center><math>\Psi(0) = 0</math></center>
<center><math>\Psi(...,0) = \Psi(...)</math></center>
<center><math>\Psi(...,a,b,...) = \Psi(...,b,a,...)</math></center>
<center><math>\Psi(...,z) > \Psi(...)\ \ \ \ (z>0)</math></center>
We also want some additional properties:
<center><math>\Psi(a/2,a/2) > \Psi(a)</math></center>
<center><math>\Psi(a,a) > \Psi(x,2a-x)\ \ \;(0 < x < a)</math></center>


== ENS model ==


Let us first define various notations. The arguments of the functions (all <math>a_i</math> positive) are written
<center><math>\Psi(a_i)\ \ or\ \ \Psi(a_0,a_1,a_2,...)</math></center>

The sum of the arguments (the simple summation value) is
<center><math>\Sigma = \Sigma(a_0,a_1,a_2,...) = \Sigma_i(a_i)</math></center>

The relative weights of the arguments are
<center><math>p_i = a_i / \Sigma</math></center>
And we also define
<center><math>q_i = \Sigma / a_i = p_i^{-1}</math></center>
In this document all logarithms use base 2:
<center><math>log(x) = log_2(x)</math></center>
As a convenient shorthand, we also use
<center><math>flog(x) = x\ log(x)</math></center>
Let <math>\Epsilon</math> be the entropic uncertainty measure (see explanations [http://en.wikipedia.org/wiki/Entropy_%28information_theory%29 there]) of the <math>p_i</math> values:
<center><math>\Epsilon(a_i) = - \Sigma_i (p_i\ log(p_i)) = - \Sigma_i (flog(p_i))</math></center>
Now we define our ENS function:
<center><math>\Psi(a_i) = \Sigma(a_i)\ (\kappa\ +\ (1-\kappa)\ \Epsilon(a_i))</math></center>
where the <math>\kappa</math> value is a null or positive real parameter.
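
For illustration, here is a minimal Python sketch of this definition. The function name ''ens'', its interface and the default <math>\kappa</math> are choices made for this note, not taken from the Assothink sources:
<pre>
from math import log2

def ens(args, kappa=0.5):
    # Psi(a_i) = Sigma * (kappa + (1 - kappa) * E(a_i)), where E is the
    # base-2 entropy of the relative weights p_i = a_i / Sigma.
    sigma = sum(args)
    if sigma == 0.0:
        return 0.0                # covers Psi() = 0 and Psi(0) = 0
    # Terms with a_i = 0 are skipped (flog(0) = 0 by convention),
    # which yields the property Psi(..., 0) = Psi(...).
    entropy = -sum((a / sigma) * log2(a / sigma) for a in args if a > 0.0)
    return sigma * (kappa + (1.0 - kappa) * entropy)

print(round(ens((3.0, 1.0)), 2))  # 3.62, as in the results example below
</pre>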


== Developments ==


Some simple mathematical steps allow the <math>\Psi</math> formulation to be transformed. Expanding the entropy term gives
<center><math>\Sigma\ \Epsilon(a_i) = - \Sigma_i (a_i\ log(a_i/\Sigma)) = flog(\Sigma) - \Sigma_i(flog(a_i))</math></center>
which leads to:
<center><math>\Psi = \kappa\Sigma\ + (1-\kappa)\ (flog(\Sigma)\ -\ \Sigma_i(flog(a_i)))</math></center>
When the <math>\kappa</math> parameter is 1.0, the ENS function falls back to the simple summation model, losing the specific properties specified above.

When the <math>\kappa</math> parameter is 0.0, it turns into a strictly entropic summation, implying for instance <math>\Psi(x) = 0</math> for any single argument (null when only one argument is provided!).

Realistic values used for the Assothink developments should reside around <math>\kappa</math> = 0.5.
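
As a quick numerical check of the transformation above (assuming the ''ens'' definition sketch from the previous section is in scope; purely illustrative):
<pre>
from math import log2

def flog(x):
    # flog(x) = x * log2(x), with the convention flog(0) = 0
    return x * log2(x) if x > 0.0 else 0.0

def ens_transformed(args, kappa=0.5):
    # Psi = kappa*Sigma + (1 - kappa)*(flog(Sigma) - sum_i flog(a_i))
    sigma = sum(args)
    return kappa * sigma + (1.0 - kappa) * (flog(sigma) - sum(flog(a) for a in args))

print(abs(ens_transformed((3.0, 1.0)) - ens((3.0, 1.0))) < 1e-12)  # True
</pre>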


== Computation ==


The ENS formula has a convenient practical property: it may be computed incrementally. In other words, it is not necessary to memorize all the input values before computing the ENS result. It is sufficient to accumulate the sum of the <math>a_i</math> values and the sum of the <math>flog(a_i)</math> values. When all input is terminated, the accumulated values are used to produce the ENS result, as the sketch below illustrates.
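
A minimal sketch of such an accumulator, using the transformed formulation from the Developments section (the class and method names are illustrative, not those of the Assothink engine):
<pre>
from math import log2

def flog(x):
    # flog(x) = x * log2(x), with the convention flog(0) = 0
    return x * log2(x) if x > 0.0 else 0.0

class EnsAccumulator:
    # Keeps only two running sums per node: sum(a_i) and sum(flog(a_i)).
    def __init__(self, kappa=0.5):
        self.kappa = kappa
        self.sigma = 0.0       # running sum of the a_i values
        self.sigma_flog = 0.0  # running sum of the flog(a_i) values

    def add(self, a):
        # accumulate one inflow value (a > 0)
        self.sigma += a
        self.sigma_flog += flog(a)

    def result(self):
        # Psi = kappa*Sigma + (1 - kappa)*(flog(Sigma) - sum_i flog(a_i))
        if self.sigma == 0.0:
            return 0.0
        return (self.kappa * self.sigma
                + (1.0 - self.kappa) * (flog(self.sigma) - self.sigma_flog))

acc = EnsAccumulator()
for a in (2.0, 2.0, 2.0):
    acc.add(a)
print(round(acc.result(), 2))  # 7.75, as in the results example below
</pre>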

On the other hand, any flow occurring in the Assothink engine requires a logarithm computation (accurate or not), and this consumes a lot of CPU (given that flow events occur more than once per nanosecond on usual computers).

So the trade-off between the CPU cost and the improved targeting of the flows has to be decided according to practical tests.


== Fast x.log(x) computation ==


It is possible, with low accuracy, to quickly compute
<center><math>flog(x) = x\ log(x)</math></center>
by using the following approximation (Taylor development around 1.0, see [http://www.wolframalpha.com/input/?i=x+log%28x%29+taylor wolfram]):
<center><math>flog(x) \asymp \frac{(x-1) + (x-1)^2/2 - (x-1)^3/6}{\ln 2}</math></center>
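
A possible Python sketch of this approximation (the truncation at degree 3 is a choice made for this note; the engine may keep fewer or more terms):
<pre>
from math import log2

INV_LN2 = 1.4426950408889634  # 1 / ln(2): converts natural log to base 2

def fast_flog(x):
    # Degree-3 Taylor development of x*ln(x) around 1.0, rescaled to base 2.
    # Low accuracy; only meaningful for x reasonably close to 1.0.
    u = x - 1.0
    return INV_LN2 * (u + u * u / 2.0 - u * u * u / 6.0)

# Rough accuracy check against the exact flog (illustrative):
for x in (0.8, 1.0, 1.2):
    print(x, round(fast_flog(x), 4), round(x * log2(x), 4))
</pre>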


== Results example ==


Here are some instructive results computed with <math>\kappa</math>=0.5:
<pre>args=(2.0,2.0)                SUM=4.00 ENS=4.00
args=(3.0,3.0)                SUM=6.00 ENS=6.00
args=(2.0,2.0,2.0)            SUM=6.00 ENS=7.75
args=(4.0,4.0)                SUM=8.00 ENS=8.00
args=(4.0)                    SUM=4.00 ENS=2.00
args=(3.0,1.0)                SUM=4.00 ENS=3.62
args=(3.9,0.1)                SUM=4.00 ENS=2.34
</pre>
All expected properties are respected (which was obvious from the formulae).
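
These lines can be reproduced with the ''ens'' definition sketch given in the ENS model section, for instance:
<pre>
# assumes the ens() sketch defined earlier in this note
for args in [(2.0, 2.0), (3.0, 3.0), (2.0, 2.0, 2.0), (4.0, 4.0),
             (4.0,), (3.0, 1.0), (3.9, 0.1)]:
    print("args=%s SUM=%.2f ENS=%.2f" % (args, sum(args), ens(args)))
</pre>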


== Conclusion ==


As part of the Assothink model, it is assumed (believed?) that the ENS function is a useful model for the computation of excitation flows between nodes of the active jelly.

It is however not ''proved''.

Anyway, it has been tested as part of the Assothink software implementation since April 2013.
