MIT Researchers Forge New Weapon for Code Warriors

MIT researchers have developed a fast, accurate system for identifying security flaws in Web apps written in Ruby on Rails, according to news reports published last week.

In tests the researchers — MIT Professor Daniel Jackson and Joseph Near, a postdoctoral researcher at the University of California at Berkeley — performed on 50 popular RoR apps, they uncovered 23 previously unknown security flaws.

What’s more, the longest it took to analyze any one program was 64 seconds.

“Even if you wrote a small program, it sits atop a vast edifice of libraries and plug-ins and frameworks,” said Jackson, who with Near plans to present their findings at the International Conference on Software Engineering in May.

“So when you look at something like a Web application written in a language like Ruby on Rails, if you try to do a conventional static analysis, you typically find yourself mired in this huge bog. And this makes it really infeasible in practice,” he continued.

Static analysis is a method of examining a program’s code for flaws without actually running the program.

False Positives

Instead of focusing on attackers running their own code within an application, Jackson and Near have turned their attention to access control problems. They want to make sure a user is authorized to do something in an application before any code is executed.

For example, an online forum may want a visitor to log in before contributing to any topic threads. If visitors can write to threads without logging in, that would be a flaw in the code that might be flagged by the researchers’ methods.
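In a Rails app, that kind of check would typically live in a controller filter. The following is a minimal, framework-free sketch of the idea — reject the write before it happens if the visitor isn’t logged in. All class and method names here are invented for illustration, not taken from the researchers’ code:

```ruby
# Toy forum: a reply is only stored if the visitor is logged in.
# The authorization check runs before any write to the data.
class Forum
  def initialize
    @threads = Hash.new { |h, k| h[k] = [] }
  end

  # current_user is nil for anonymous visitors (hypothetical model)
  def post_reply(current_user, thread_id, body)
    return :forbidden if current_user.nil? # access check comes first
    @threads[thread_id] << { author: current_user, body: body }
    :created
  end

  def replies(thread_id)
    @threads[thread_id]
  end
end

forum = Forum.new
forum.post_reply(nil, 1, "spam")      # => :forbidden, nothing stored
forum.post_reply("alice", 1, "hello") # => :created
```

The flaw the researchers’ method would flag is the inverse of this: code that writes to the thread without ever performing the `nil` check.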

The researchers’ debugger appears to be more accurate than the human eye at identifying programming security flaws, noted Tim Jarrett, director of enterprise security strategy at Veracode.

“They ran the debugger against some class assignments that were graded, and they found a bunch of things in those applications that were not marked by a human reviewer as being a problem,” he said.

What’s not known about the debugger is how many false positives it identified. A false positive is code flagged as buggy that actually is fine.

“That’s a problem in the real world because if you report a lot of false positives, what will typically happen is the software developer who is receiving the report will look at it and say, ‘This is garbage,’ and not look at any of the findings,” Jarrett said.

Ruby’s Limitations

False positives are a general problem with static analysis of software, observed Slawek Ligier, vice president of engineering for security for Barracuda Networks.

“Any tool that results in too many false positives for developers will not be used or applied correctly because development teams will have to spend too much time sorting out false positives from true vulnerabilities,” he told TechNewsWorld.

“If you have to spend too much time sorting out chaff from what you really need to find, you will run into time constraints, and you won’t be able to do a good job,” Ligier added.

While the MIT research shows promise, it has limitations, he said. “Because it’s focused on Ruby on Rails and how Ruby on Rails functions with its libraries, that’s definitely a big limitation.”

Ruby on Rails has gained some popularity among younger software organizations but hasn’t been widely adopted, Jarrett added.

Less than 5 percent of all applications are written in RoR, he estimated. By comparison, 45 percent of applications are written in Java.

“This is not a method that’s going to instantly make every application running on the Internet more secure — just the ones written in Ruby, and that’s a relatively small number,” Jarrett said.

“This could become interesting,” he added, “if they could figure out a way to extend this technique beyond Ruby and make it work for applications in other languages.”

Distinctive Research

To avoid the bog, Near devised a way to turn RoR itself into a static analysis tool so that running a program through the framework’s interpreter produced an easy-to-follow, formal, line-by-line description of how the program handles data.
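A loose illustration of that idea is a value that, instead of computing a result, records every operation applied to it — run code over such values and you get a trace of how the program handles data. This is a toy sketch of the general technique, not the researchers’ actual implementation, and all names in it are invented:

```ruby
# A "symbolic" value that records a trace of the operations applied
# to it, rather than computing -- a toy version of interpreting a
# program to extract a formal description of its data handling.
class Sym
  attr_reader :trace

  def initialize(name)
    @trace = [name]
  end

  # Intercept any operation and append it to the trace.
  def method_missing(op, *args)
    @trace << "#{op}(#{args.join(', ')})"
    self # keep chaining onto the same trace
  end

  def respond_to_missing?(*_args)
    true
  end
end

user_id = Sym.new("params[:user_id]")
user_id.to_i.clamp(1, 100)
user_id.trace # => ["params[:user_id]", "to_i()", "clamp(1, 100)"]
```

Running ordinary program code against values like this yields the kind of line-by-line description of data flow that a conventional static analyzer struggles to recover from the framework’s layers of libraries.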

Those methods are incorporated in a debugger called Space, which will be described in the paper to be presented in May.

With Space, a program’s data access procedures can be evaluated based on the ways applications typically access data. If it appears as though a program is accessing data in an atypical way, then chances are it has a security flaw.
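In spirit, that check amounts to comparing each observed data access against the accesses typical for that kind of user and flagging anything that deviates. The sketch below is only an illustration of this comparison — the role names and the whitelist are invented, not the Space tool’s actual model:

```ruby
# Typical data accesses per kind of user (illustrative whitelist).
TYPICAL = {
  guest:  [:read],
  member: [:read, :create],
  admin:  [:read, :create, :delete]
}.freeze

# Return the observed accesses that fall outside the typical pattern;
# anything returned is a candidate security flaw.
def atypical_accesses(role, observed)
  allowed = TYPICAL.fetch(role, [])
  observed.reject { |op| allowed.include?(op) }
end

atypical_accesses(:guest, [:read, :create]) # => [:create] -- flagged
atypical_accesses(:admin, [:read, :delete]) # => [] -- nothing unusual
```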

Using static analysis on access control vulnerabilities makes the research distinctive, noted Tim Jarrett of Veracode.

“They’ve chosen to solve a class of security problems that most static analyzers don’t try to solve,” he told TechNewsWorld.

“Most organizations that have built static analysis tools that focus on security issues are worried about attacks that allow data to be stolen from a database or deface a running website,” Jarrett noted.