Can we find bugs in programs through machine learning?

Automated bug detection before a program ever runs is a capability researchers are increasingly pursuing. Detecting programming errors and other code-quality issues is the big prize here: could errors in the Linux kernel be found before the code is even incorporated, perhaps with the help of machine learning?

Using machine learning, Linux kernel developer Sasha Levin looks for patches that belong in the stable and long-term support (LTS) trees. But can the same ML system find patches that contain bugs? It is a difficult task, but Levin has some clues as to how it could be done.

Sasha Levin, a developer employed by Microsoft, maintains the so-called stable trees of the Linux kernel together with Greg Kroah-Hartman. Among other things, Levin uses a machine-learning approach to find the patches that need to be backported to these trees. As he reported in his presentation at this year's Open Source Summit Europe in Lyon, his work has repeatedly prompted the question of whether bugs could be found before they are even incorporated into the kernel. The answer, according to Levin, is anything but simple, as he shows in a detailed analysis.
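
The talk did not spell out the implementation, so the following is only a minimal sketch of the general idea: a text classifier trained on commit messages to estimate how "stable-worthy" a patch looks. The training data, labels, and message texts below are made up for illustration; this is not Levin's actual tool.

```python
# Toy sketch (not Levin's tool): classify commits as stable-tree
# candidates based on their commit messages alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: commit messages labeled 1 if the patch
# was later backported to a stable tree, else 0.
messages = [
    "fix null pointer dereference in foo_driver probe",
    "add support for new bar hardware revision",
]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(messages, labels)

# Score an unseen commit message; a higher value means the message
# reads more like a fix that belongs in stable.
print(model.predict_proba(["fix use-after-free in baz teardown"])[0][1])
```

In practice such a system would presumably also look at the diff itself and at process metadata, which is where Levin's analysis goes next.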

As many developers know, detecting bad code is anything but an easy task. A variety of tools for finding errors already exist, such as static code analyzers. From Levin's point of view, however, the biggest source of error in the development of the Linux kernel is the development process itself, a claim he tries to underpin with his own analysis.

An objective analysis is difficult to achieve

From his personal experience as a maintainer, Levin knows that review, that is, checking of the code by third parties, as well as testing, helps prevent the introduction of bugs. It also matters quite a bit who does the review, how much time it takes, and even how thoroughly any objections are spelled out.

These and other factors are, however, difficult to actually quantify. That applies above all to the question of what should even count as a bug for the purpose of the investigation. Nevertheless, Levin has tried to translate some of these considerations into a machine-learning model, using a preselected set of code contributions to the kernel.
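
To make this concrete, here is a small hypothetical sketch of how such review-process signals might be extracted from a repository's history with plain git. The feature set is an illustrative guess, not the actual inputs of Levin's model.

```python
# Hedged sketch: pull review-process features out of git metadata.
import subprocess

def commit_features(sha: str, repo: str = ".") -> dict:
    # %at = author timestamp, %ct = committer timestamp, %B = message body
    out = subprocess.run(
        ["git", "-C", repo, "show", "--no-patch",
         "--format=%at%n%ct%n%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    author_ts, commit_ts = int(out[0]), int(out[1])
    body = "\n".join(out[2:])
    return {
        # Time the patch spent between being written and being applied.
        "days_in_flight": (commit_ts - author_ts) / 86400,
        # Crude proxies for how much review and testing the patch saw.
        "reviewed_by": body.count("Reviewed-by:"),
        "tested_by": body.count("Tested-by:"),
        "signed_off_by": body.count("Signed-off-by:"),
    }
```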

Of course, the model inevitably has weaknesses and cannot be used directly to find faulty code before it lands in the kernel's main tree. For Levin, however, the investigation offers some very important clues.

Rushed patches just before the deadline have more bugs

Probably the most important finding, according to Levin, is that contributions are three times more likely than normal to introduce errors if the code is added late in the RC phase. This seems counterintuitive: after the two-week merge window for submitting new features for the upcoming Linux version, a roughly eight-week stabilization phase of bug fixes and release candidates (RCs) follows before a new Linux version is released.

According to Levin, this result confirms his assumptions about review. New features and major changes often go through a long review phase, and the patches are usually discussed extensively. In the late RC phase of kernel development, however, patches are merged much faster, and often there is no review at all.

For this phase of development, Levin found many patches whose code was written, submitted, and merged on a single day. With turnaround that fast, the potential for error naturally increases.
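
Patches like these are easy to spot in git history, since the author date and the committer date then fall on the same calendar day. A minimal sketch, where the revision range and repository path are placeholders:

```python
# Hedged sketch: list commits authored and committed on the same day,
# a rough proxy for "written, submitted, and merged in one day".
import subprocess

def same_day_commits(rev_range: str, repo: str = ".") -> list[str]:
    # %h = short hash, %as = author date (YYYY-MM-DD), %cs = commit date
    out = subprocess.run(
        ["git", "-C", repo, "log", "--format=%h %as %cs", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    rushed = []
    for line in out:
        sha, authored, committed = line.split()
        if authored == committed:  # same calendar day
            rushed.append(sha)
    return rushed

# Example: commits between two hypothetical release candidates.
# print(same_day_commits("v5.3-rc7..v5.3-rc8", repo="/path/to/linux"))
```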

Whether, and what, follows from this insight for the long-term development process of the Linux kernel is not really clear even to Levin. He has some ideas, but they would be difficult to implement. One is a real freeze phase in development, during which the new changes are tested extensively. That, however, might merely push the flood of last-minute patches further back.

Similarly, Levin could imagine a more standardized approach to accepting patches into the mainline. One prerequisite for inclusion could be a minimum number of days a patch must spend in the linux-next branch before entering the mainline. Likewise, extensive reviews or tests could be made mandatory, as could certain sign-off tags, which in this case would amount roughly to an "approved by" stamp.
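
As a toy illustration of such a gate, the rules might look like the following; the threshold, the tag list, and the Patch structure are hypothetical, not an actual kernel policy:

```python
# Hedged sketch of an acceptance gate: a patch must have soaked in
# linux-next for a minimum number of days and carry an approval tag.
from dataclasses import dataclass

@dataclass
class Patch:
    subject: str
    days_in_next: int   # days the patch has spent in linux-next
    tags: list[str]     # trailer tags from the commit message

MIN_DAYS_IN_NEXT = 7    # hypothetical minimum soak time
APPROVAL_TAGS = {"Reviewed-by", "Tested-by", "Acked-by"}

def acceptable(patch: Patch) -> bool:
    aged = patch.days_in_next >= MIN_DAYS_IN_NEXT
    approved = any(tag in APPROVAL_TAGS for tag in patch.tags)
    return aged and approved

# A same-day, unreviewed patch would be rejected under these rules:
print(acceptable(Patch("fix oops in foo", days_in_next=0,
                       tags=["Signed-off-by"])))
```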

All of these requirements would, according to Levin, meet resistance from a considerable share of developers and maintainers and are therefore hardly feasible.

Researchers are also using machine learning to spot trends. See also the takeaways from the first operational ML conference, USENIX OpML 2019.


Amram David

Senior Contributor at DFI Club
Amram is a technical analyst and partner at DFI Club Research, a high-tech research and advisory firm. He has over 10 years of technical and business experience with leading high-tech companies, including Huawei, Nokia, and Ericsson, in ICT, semiconductors, microelectronics systems, and embedded systems. Amram focuses on the business-critical points where new technologies drive innovation.