Intel’s ControlFlag taps AI to automatically detect errors in code

At Labs Day, an online event showcasing innovations across Intel’s portfolio, the company unveiled ControlFlag, a machine programming system that can autonomously detect errors in code. Intel claims that even in its infancy, ControlFlag shows promise as a productivity tool to assist developers with the labor-intensive task of debugging. In preliminary tests, Intel says, ControlFlag was trained on more than 1 billion unlabeled lines of “production-quality” code and learned to detect defects in them.

According to a study published by the University of Cambridge’s Judge Business School, programmers spend roughly half of their work time (50.1%) not writing new code but debugging, at a total estimated cost of $312 billion per year. AI-powered code suggestion and review tools, then, promise to cut development costs substantially while freeing coders to focus on more creative, less repetitive tasks.

Intel says that ControlFlag’s bug detection capabilities are enabled by machine programming, a fusion of machine learning, formal methods, programming languages, and compilers. Using anomaly detection, ControlFlag learns normal coding patterns from examples, then identifies abnormalities in code that are likely to cause bugs. The system can detect these anomalies regardless of programming language, Intel claims, and it uses what’s known as an unsupervised approach to adapt to any developer’s style. With limited inputs for the control structures the program should evaluate, ControlFlag can identify stylistic variations in a programming language, much as readers recognize the difference between full words and their contractions in English.
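Intel has not published ControlFlag’s internals, but the idea described above can be illustrated with a toy version of unsupervised pattern-frequency anomaly detection: abstract each `if` condition into a coarse “shape,” count how often each shape appears in a corpus, and flag conditions whose shape is rare. Everything here (the `condition_shape` and `find_anomalies` names, the abstraction rules, the frequency threshold) is invented for illustration, not Intel’s actual method.

```python
import re
from collections import Counter

def condition_shape(cond: str) -> str:
    """Abstract a condition into a coarse pattern: identifiers -> ID,
    numeric literals -> LIT, whitespace dropped."""
    s = re.sub(r"\b[A-Za-z_]\w*\b", "ID", cond)   # identifiers -> ID
    s = re.sub(r"\b\d+\b", "LIT", s)              # numeric literals -> LIT
    return re.sub(r"\s+", "", s)                  # drop whitespace

def find_anomalies(conditions, min_frequency=0.05):
    """Flag conditions whose abstracted shape is rare in the corpus."""
    shapes = [condition_shape(c) for c in conditions]
    counts = Counter(shapes)
    total = len(shapes)
    return [c for c, sh in zip(conditions, shapes)
            if counts[sh] / total < min_frequency]

# A tiny "corpus" of if-conditions: three common shapes, plus one condition
# that uses `=` (assignment) where `==` (comparison) is typical -- a classic
# bug that shows up as a rare pattern shape (ID=LIT).
corpus = (["x == 0"] * 10
          + ["ptr != NULL"] * 8
          + ["count > limit"] * 6
          + ["done = 1"])

print(find_anomalies(corpus))  # ['done = 1']
```

Because the detector learns only from pattern frequencies in the corpus it is given, it needs no labels, which is the sense in which the approach is unsupervised.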

ControlFlag learns to identify and tag stylistic choices and can customize error identification and solution recommendations based on its insights. This minimizes the chances that the system mischaracterizes a stylistic deviation between two developer teams as an error, according to Intel.

To date, ControlFlag has been used to identify latent bugs in widely used codebases previously reviewed by software developers. One of these was cURL, a software project providing a library and command-line tool for transferring data over various network protocols. When ControlFlag analyzed cURL, Intel says, it identified an anomaly that had not previously been recognized, prompting cURL’s developers to propose a better solution. Intel claims it has even begun evaluating ControlFlag internally to identify bugs in its own software and firmware development.

ControlFlag is the latest in a string of tools that leverage AI and machine learning to complete and audit code. Codota is developing a platform that suggests and autocompletes scripts in Python, C, HTML, Java, Scala, Kotlin, and JavaScript. Ponicode taps AI to check the accuracy of code, and DeepCode offers a machine learning-powered system for whole-app code reviews (as does Amazon). Perhaps one of the most impressive projects to date is TransCoder, an AI system developed by Facebook researchers that converts code from one programming language into another. Another contender is a model from OpenAI that was trained on GitHub repositories to generate entire functions from English-language comments.

“Programs like these are really just trying to eliminate the minutiae of creating software,” principal scientist and director at Intel Labs Justin Gottschlich told VentureBeat in a recent interview. “They could help accelerate productivity … by taking care of debugging. And they could increase the number of jobs in tech because people who don’t have a programming background will be able to take their creative intuition and capture that via machine by these intentionality interfaces.”
