Recent developments in multimodal AI (e.g. Transformers and foundation models) are a tremendous step forward from earlier deep learning systems.
These new models are able to ingest a very broad range of content (spreadsheets, poetry, romance novels, industrial process monitoring data, chat logs) across many data types, such as text, audio, and video.
They also have the capacity to solve thousands of different problems with a single model, in contrast to earlier deep learning systems, which can be quite effective but only within a narrow range of tasks.
This new technology is also able to deal with abstract concepts in new ways. Simply by asking for something to be 'more polite' or 'less formal', these models make an appropriate interpretation. This means that one can use everyday natural language to specify roughly what one wants, and then iteratively refine it toward the desired result.
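As a minimal sketch of this kind of natural-language refinement, here is how one might ask a model to rewrite text according to an everyday instruction, assuming the openai Python client with an API key in the OPENAI_API_KEY environment variable; the model name is a stand-in:

```python
# Minimal sketch: refining text with a plain-English instruction such as
# "more polite" or "less formal". Assumes the openai Python client and an
# API key in OPENAI_API_KEY; "gpt-4o-mini" is a stand-in model name.
from openai import OpenAI

client = OpenAI()

def refine(text: str, instruction: str) -> str:
    """Ask the model to rewrite `text` according to an everyday instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text to be {instruction}. "
                        "Preserve the meaning."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(refine("Send me the report now.", "more polite"))
```

The same call works for any instruction one can phrase in plain English, which is exactly what makes this style of specification so accessible.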
In one demonstration, OpenAI's Codex system is used to turn natural language into a working video game in just a few minutes, with all of the associated code immediately ready to run and share.
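As a rough illustration of that workflow (not the actual demo), this is how one might request runnable game code from a model today; the original Codex models have been retired, so the model name and prompt below are assumptions:

```python
# Hypothetical sketch of Codex-style natural-language-to-code generation,
# assuming the openai Python client; "gpt-4o-mini" stands in for the
# retired Codex models, and the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Write a complete, runnable Pong game in Python "
                    "using only the standard library's turtle module."},
    ],
)

# The reply is source code, ready to save to a file and execute.
print(response.choices[0].message.content)
```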
It's clear that many aspects of development are about to be significantly deskilled, or perhaps bifurcated: many people creating in a simple way, and a smaller group of experts debugging the things that the AI system cannot handle.
This wave of creativity will be as powerfully disruptive in the 2020s as the graphical user interface and desktop publishing were in the 1990s.