Evolutionary methods for problem solving and artificial development

in crypto-news •  8 years ago  (edited)

One of the principles I follow for problem solving is that many of the best solutions can be found in nature. The basic axiom that all knowledge is self-knowledge applies to the study of computer science and artificial intelligence.

By studying nature we are studying ourselves, and what we learn from nature can give us initial designs for DApps (decentralized applications).

The SAFE Network example

SAFE Network, for example, follows these principles by utilizing biomimicry (an ant colony algorithm) for its initial design. If SAFE Network is designed appropriately then it will have an evolutionary method, so that over time our participation with it can fine tune it. There should be both a symbiosis between human and AI and a way to make sure changes are always made according to the preferences of mankind. In essence, SAFE Network should be able to optimize its design going into the future to meet human-defined "fitness" criteria. How the developers will achieve this is unknown at this time, but my opinion is that it will require a democratization or collaborative filtering layer. A possible result of SAFE Network's evolutionary process could be a sort of artificial neural network.
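To make the idea of evolving toward human-defined "fitness" criteria concrete, here is a minimal Python sketch of such a loop. It is purely illustrative: none of these names come from the SAFE Network codebase, and the `human_fitness` callback stands in for whatever democratization or collaborative filtering layer would aggregate participant preferences.

```python
import random

# Illustrative sketch only: an evolutionary loop whose fitness function is
# assumed to aggregate human preference signals (votes, usage, payments).

def mutate(params, scale=0.1):
    """Return a slightly perturbed copy of a numeric parameter dictionary."""
    return {k: v + random.gauss(0, scale * (abs(v) + 1.0))
            for k, v in params.items()}

def evolve(initial_params, human_fitness, generations=50, population=20):
    """Tune parameters toward whatever the human_fitness signal rewards."""
    pool = [mutate(initial_params) for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pool, key=human_fitness, reverse=True)
        elites = ranked[: population // 4]          # human-selected survivors
        pool = elites + [mutate(random.choice(elites))
                         for _ in range(population - len(elites))]
    return max(pool, key=human_fitness)
```

In practice the bottleneck would be `human_fitness` itself: gathering honest human preference signals is slow and costly, so far fewer evaluations would be affordable than this toy loop assumes.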

The Wikipedia example

Wikipedia is an example of an evolving knowledge resource. It uses an evolutionary method (a human-based genetic algorithm) to curate, structure and maintain human knowledge. Human beings supply both the innovation (writing and editing) and the selection (review, reversion and curation) in this process.

One of the main problems with Wikipedia is that it is centralized and does not generate any profits. This may be partly due to the ideal that knowledge should be free to access, which does not account for the fact that knowledge isn't free to generate. It also doesn't account for the fact that knowledge has to be stored somewhere, and that if Wikipedia is centralized then it can be taken down just as the Library of Alexandria once was. A decentralized Wikipedia could begin its life by mirroring Wikipedia and then use evolutionary methods to create a Wikipedia which does not carry the same risk profile.

Benefits of applying the evolutionary methods to Wikipedia style DApps

One of the benefits is that there could be many different DApps competing in a marketplace, so that successful design features create an incentive to continue innovating. We can think of the market in this instance as the human-based genetic algorithm, where all DApps are candidate solutions to the problem of optimizing knowledge diffusion. The human beings would be the innovators, the selectors, and the initializers. The token system would represent the incentive layer, but would also be used for signalling, so that humans can give an information signal which indicates their preferences to the market.
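Here is a rough sketch of what one generation of that human-based genetic algorithm could look like. The `ask_humans_to_edit` and `ask_humans_to_vote` callbacks are hypothetical stand-ins for the innovator and selector roles; the machine only keeps score.

```python
# Sketch of one HBGA generation: humans supply both the variation and the
# selection, the program just does the bookkeeping. All names are hypothetical.

def hbga_generation(candidates, ask_humans_to_edit, ask_humans_to_vote, keep=10):
    # Innovation phase: humans fork or edit the existing candidates.
    offspring = []
    for candidate in candidates:
        offspring.extend(ask_humans_to_edit(candidate))

    # Selection phase: token votes or micropayments act as the fitness signal.
    scores = ask_humans_to_vote(candidates + offspring)  # {candidate: score}
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```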

Wikipedia is not currently based on nature and does not evolve its design to adapt to its environment. Wikipedia "eats" when humans donate money to a centralized foundation which directs its development. A decentralized evolutionary model would have no centralized foundation; Wikipedia would instead adapt its survival strategy to its environment. This means a Wikipedia following the evolutionary model would seek to profit in competition with other Wikipedias until the best (most fit) adaptation to the environment has evolved. Users would be able to use micropayments to signal, through their participation and usage, which Wikipedia pages are preferred over others, while pseudo-anonymous academic experts with good reputations rate their accuracy.
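As one assumption-laden example of how those two signals could be blended into a single fitness score for a page, consider a weighted sum. The weights and the log-damping of payments are arbitrary design choices for this sketch, not anything the Wikipedia or SAFE projects specify.

```python
import math

# Hypothetical scoring rule for a single wiki page.

def page_fitness(tokens_paid, expert_ratings, w_usage=0.5, w_expert=0.5):
    """Blend revealed preference (micropayments) with expert-judged accuracy.

    tokens_paid    : total micropayments readers spent on this page
    expert_ratings : list of accuracy scores in [0, 1] from reputable experts
    """
    if not expert_ratings:
        return 0.0
    accuracy = sum(expert_ratings) / len(expert_ratings)
    popularity = math.log1p(tokens_paid)   # damp the influence of big spenders
    return w_usage * popularity + w_expert * accuracy
```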

For the human-based genetic algorithm and the collaborative filtering to work, participants should not know the scores of different pages in real time, because this could bias the results. Participants also should not know how different experts scored different pages, because personality cults could skew the results and influence the rating behavior of other experts. Finally, the system would have to be global and decentralized so that experts cannot easily coordinate and conspire. These problems would not be easy to solve, but Wikipedia currently has similar problems in centralized form.
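Hiding scores until a rating round closes is a well-studied problem elsewhere, and a commit-reveal scheme is one standard technique that could apply here. This is a generic illustration, not something the post or any wiki currently implements.

```python
import hashlib
import os

# Generic commit-reveal sketch: an expert first publishes only a hash of
# their score plus a private salt, so no one can read scores mid-round.

def commit(score, salt=None):
    """Return (public digest, private salt) for a hidden score."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + str(score).encode()).hexdigest()
    return digest, salt

def verify_reveal(digest, score, salt):
    """After the round closes, anyone can check a revealed (score, salt)."""
    return hashlib.sha256(salt + str(score).encode()).hexdigest() == digest
```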

Artificial development as a design process

A quote from the artificial development literature:

Human designs are often limited by their ability to scale, and adapt to changing needs. Our rigid design processes often constrain the design to solving the immediate problem, with only limited scope for change. Organisms, on the other hand, appear to be able to maintain functionality through all stages of development, despite a vast change in the number of cells from the embryo to a mature individual. It would be advantageous to empower human designs with this on-line adaptability through scaling, whereby a system can change complexity depending on conditions.

The quote above summarizes one of the main differences between an evolutionary design model and a human design model. Human designs have limited adaptability to the environment because human beings are not good at predicting and accounting for the disruptive environmental changes which can take place in the future. Businesses which adopt these static, inflexible human designs are easily disrupted by technological change, because human beings have great difficulty making a design which is "future proof". My own conclusion is that Wikipedia in its current design iteration suffers from this, even though it does have a limited evolutionary design. The limitation of Wikipedia is that its foundation is centralized and it's built on top of a network which isn't as resilient to political change as it could be. For the designs of DApps to be future proof, they have to utilize evolutionary design models. Additionally, it would be good if DApps were forced to compete against each other for fitness, so that the best evolutionary design models rise to the top of the heap.
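As a final illustrative sketch of that competition, imagine a periodic market round in which low-adoption designs are retired and the current leaders are forked so their features spread. The `market_share` and `fork` callbacks here are hypothetical.

```python
# Hypothetical market round over competing DApp designs.

def market_round(dapps, market_share, fork, retire_below=0.05, n_leaders=3):
    shares = {d: market_share(d) for d in dapps}        # fitness proxy
    survivors = [d for d in dapps if shares[d] >= retire_below]
    leaders = sorted(survivors, key=shares.get, reverse=True)[:n_leaders]
    return survivors + [fork(d) for d in leaders]
```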

References

Clune, J., Beckmann, B. E., Pennock, R. T., & Ofria, C. (2011). HybrID: A hybridization of indirect and direct encodings for evolutionary computation. In Advances in Artificial Life. Darwin Meets von Neumann (pp. 134-141). Springer Berlin Heidelberg.

Cussat-Blanc, S., Bredeche, N., Luga, H., Duthen, Y., & Schoenauer, M. Artificial gene regulatory networks and spatial computation: A case study.

Doursat, R. (2008). Organically grown architectures: Creating decentralized, autonomous systems by embryomorphic engineering. In Organic Computing (pp. 167-199). Springer Berlin Heidelberg.

Harding, S., & Banzhaf, W. (2008). Artificial development. In Organic Computing (pp. 201-219). Springer Berlin Heidelberg.

Palla, R. S. An approach for self-creating software code in BIONETS with artificial embryogeny.

Ulieru, M., & Doursat, R. (2011). Emergent engineering: A radical paradigm shift. International Journal of Autonomous and Adaptive Communications Systems, 4(1), 39-60.


@dana-edwards Woah! It looks like we're totally on the same wavelength tonight.
The problem here isn't that humans suck at anticipating every possible thing that can go wrong. It's that we don't think in terms of failure and recovery modes. We barely get things working, then we ship them out and move on to the next problem.

Life is survivable because nature has forced the evolution of self healing characteristics. We didn't get "engineered" to fight every random virus that wants to hijack our cellular machinery. We evolved an immune cascade and a repair system.

This self healing characteristic is a type of "error correction" and over billions of years life has gotten pretty good at it. But we're just barely scratching the surface of how it works.

I am planning to do an article on self-healing, disruption-resistant and redundant systems, highlighting Freenode and IPFS as good examples. I'll be releasing it tomorrow. I totally hope you'll drop by!

BTW, not sure if you noticed this or not, but your image of a multiply connected graph looks a whole lot like an adinkra, which is a method of visualizing supersymmetry transformations and has led to some exciting discoveries as of late, such as the finding that the universe itself may have error-correcting codes built in.

Food for thought, I hope you enjoy the read!

No man is an island. It's a common saying from long ago which is still true today. We can all find answers from the people around us, and I agree with you: we can also find answers in nature.

We just need to get involved to gain knowledge.

I'm all for it! A brilliant article.

Nice piece. Indeed, one of the problems of Wikipedia is summed up nicely.

As stated by Adrian Bejan in his constructal law: "For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed currents that flow through it." I believe this perspective gives us a good direction on how we can optimize some problems quickly.

Yesterday I shared my thoughts about this here.

This NSF center takes evolution as a design strategy seriously, across several different fields of science and engineering. https://www3.beacon-center.org
