Google's AI-designing system has produced a neural network that surpasses anything comparable designed by humans so far.
In May 2017, Google Brain researchers announced the creation of AutoML, an artificial intelligence (AI) system capable of generating its own, even more powerful AIs. More recently, the researchers gave AutoML its biggest challenge to date. This AI that can generate other AIs has created a genuine "child" network, which has surpassed all of its human-designed counterparts.
Google's researchers automated the design of machine learning models using an approach called reinforcement learning: AutoML acts as a controller neural network, which develops a so-called "child" AI network to perform a specific task.
For this new AI, which the researchers have named NASNet, the task is to recognize objects: people, cars, traffic lights, handbags, backpacks, and so on, in video and in real time. AutoML then evaluates NASNet's performance and uses this information to improve the child network, repeating the process thousands of times to maximize its accuracy.
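The controller/child loop described above can be sketched in miniature. This is a toy illustration only: the search space, reward function, and simple preference-weight update below are all invented for clarity, whereas the real AutoML controller is a recurrent network trained with a policy-gradient method over a vastly richer architecture space.

```python
import random

# Hypothetical, tiny search space standing in for real architectural
# decisions (layer counts, filter sizes, skip connections, ...).
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "filter_size": [3, 5, 7],
    "skip_connections": [False, True],
}

def sample_architecture(prefs):
    """Controller step: pick one option per decision, biased by learned scores."""
    arch = {}
    for decision, options in SEARCH_SPACE.items():
        weights = [prefs[(decision, o)] for o in options]
        arch[decision] = random.choices(options, weights=weights)[0]
    return arch

def evaluate_child(arch):
    """Stand-in for training the child network and measuring its accuracy."""
    reward = 0.5
    reward += 0.05 * SEARCH_SPACE["num_layers"].index(arch["num_layers"])
    reward += 0.1 if arch["skip_connections"] else 0.0
    return reward + random.uniform(-0.02, 0.02)  # training noise

def search(iterations=200, lr=0.1):
    # Start with uniform preference over every (decision, option) pair.
    prefs = {(d, o): 1.0 for d, opts in SEARCH_SPACE.items() for o in opts}
    baseline = 0.5
    best_arch, best_reward = None, float("-inf")
    for _ in range(iterations):
        arch = sample_architecture(prefs)      # controller proposes a child
        reward = evaluate_child(arch)          # "train" child, get accuracy
        advantage = reward - baseline          # reinforce above-baseline picks
        for decision in SEARCH_SPACE:
            key = (decision, arch[decision])
            prefs[key] = max(1e-3, prefs[key] + lr * advantage)
        baseline = 0.9 * baseline + 0.1 * reward
        if reward > best_reward:
            best_arch, best_reward = arch, reward
    return best_arch, best_reward

if __name__ == "__main__":
    arch, reward = search()
    print("best architecture:", arch)
```

Repeating this propose-train-update cycle thousands of times is what lets the controller steadily shift probability mass toward architectural choices that yield accurate children.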
When tested on the ImageNet image classification dataset and on COCO (Common Objects in Context), a large-scale dataset for object detection, segmentation, and captioning, which Google researchers describe as "two of the most respected large-scale academic datasets in computer vision," NASNet surpassed all other existing vision systems to date.
According to the researchers, NASNet achieved 82.7% accuracy at predicting images on the ImageNet validation set. This is 1.2% better than any previously published result. On object detection, the system is also 4% more accurate than previous systems, with a mean average precision (mAP) of 43.1%. In addition, a computationally less demanding version of NASNet, suited to mobile platforms, surpassed the best models of similar size by 3.1%.
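The two figures quoted above come from different metrics: top-1 accuracy (ImageNet classification) and mean average precision (COCO detection). A minimal sketch of each, on invented toy data, is below; real COCO evaluation additionally involves bounding-box overlap (IoU) thresholds and averaging across classes, which this simplified version omits.

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples whose top predicted class equals the true class."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def average_precision(scores, is_positive):
    """AP for a single class: rank detections by confidence and sum the
    precision at each rank where a true positive appears."""
    ranked = sorted(zip(scores, is_positive), reverse=True)
    total_positives = sum(is_positive)
    true_positives, ap = 0, 0.0
    for rank, (_, positive) in enumerate(ranked, start=1):
        if positive:
            true_positives += 1
            ap += true_positives / rank  # precision at this recall point
    return ap / total_positives

# Toy usage: 2 of 3 classifications correct; detections ranked by score.
print(top1_accuracy(["cat", "dog", "car"], ["cat", "dog", "bus"]))        # 0.666...
print(average_precision([0.9, 0.8, 0.7, 0.6], [True, False, True, False]))  # 0.833...
```

Mean average precision is then just this per-class AP averaged over all object classes, which is why a single mAP number can summarize detection quality across people, cars, handbags, and the rest.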
Machine learning is what gives many AI systems their ability to perform specific tasks. Although the concept is quite simple, an algorithm learns by being fed large amounts of data, the process still requires a great deal of time and computation. By automating the creation of accurate and efficient AI systems, an AI capable of designing another takes on much of that demanding work itself.
As for NASNet specifically, accurate and efficient computer vision algorithms are in high demand because of their many potential applications. They could be used to create sophisticated AI-driven robots, or help designers improve autonomous vehicle technologies: the faster an autonomous vehicle can recognize objects in its path and surroundings, the sooner it can react to them, increasing the safety of these vehicles.
Google's researchers recognize that NASNet could be useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. "We hope that the larger machine learning community will be able to build on these models to solve the myriad of computer vision problems that we have not yet imagined," the researchers say.
Although the applications for NASNet and AutoML are numerous, the fact that one AI can create another also raises some concerns. For example, what would prevent the "parent" AI from transmitting unwanted traits to its "child"? What if AutoML created systems so quickly that society could not keep up? It is not difficult to imagine NASNet being used in automated surveillance systems in the near future, perhaps even before regulations exist to control such systems and their limits.
Let us hope, then, that world leaders work quickly and effectively to ensure that such systems do not lead to any kind of dystopian future. It is worth noting that Amazon, Facebook, Apple, and other big companies are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible and controlled development of AI.
The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a Google-owned research firm, recently announced the creation of a group focused on the ethical and moral implications of AI.
Several governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons. As long as control is maintained over the general direction of AI development, the benefits of having an AI capable of designing others, as is the case here, should outweigh the potential dangers.
Sources: Google, arXiv.org