Guus Bosman

USENIX Security Symposium 2013

USENIX Security 2013 was a very interesting conference and a great way to catch up with the latest developments in the security world across a wide range of topics. Over the course of five days I attended more than 40 presentations. The organization of the conference was top-notch, and the venue was a hotel a few minutes from Judiciary Square in downtown Washington, D.C.

See also my notes for the first two days.

Lessons

The conference reinforced three high-level concepts about cyber security. First of all, crime often doesn't pay much, given the risks involved. The team from George Mason University and others gave several nice presentations on the economics of cyber-crime. Overall the numbers involved don't add up to astronomical incomes, though the successful "booter service" they analyzed did bring in $7,500 per month, before expenses.

Secondly, and in contrast, the cost of defense is usually much higher than the cost of attack. For example, a DDoS-for-hire service capable of generating up to 800 Mbps can be rented for as little as $10 per month, but mitigating DDoS attacks is very hard and therefore very expensive. Similarly, advertisers stand to lose significant amounts of money to fake advertisements, as described by two researchers at an ad firm.

Thirdly: no matter how old a vulnerability is, somebody will investigate it at some point. For example, while the sexy new Samsung S4 was analyzed pretty much the moment it was released, even the old and established IPMI protocol and its SuperMicro implementation got their (well-deserved) spot in the security limelight.

Highlights of USENIX Security 2013

The authors of the IPMI paper also introduced "ZMap", a new network scanner. It is now possible for a relatively modest machine to scan the entire internet to see whether a particular service is listening on a particular port... in 45 minutes. Pretty amazing. Obviously you'll need a serious internet connection for this, but the sheer speed makes such a scan much more feasible. A tool like this provides useful context for vulnerabilities: in the IPMI case, for example, the authors showed that many IPMI nodes are connected directly to the internet.
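To make the idea concrete: ZMap gets its speed by sending stateless probes from raw sockets rather than full TCP connections, but the question it asks of every host is the simple one sketched below. This is an illustrative Python snippet, not ZMap itself; the hosts and port are placeholders.

    import socket

    def port_open(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder hosts (TEST-NET addresses) and a generic port; a real internet-wide
    # scan iterates over the whole IPv4 address space, typically in random order.
    for host in ["192.0.2.1", "192.0.2.10"]:
        print(host, port_open(host, 443))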

The USENIX Best Paper Award went to Control Flow Integrity for COTS Binaries. I didn't see the presentation, but the paper is quite interesting: it describes a Linux implementation of CFI that works without access to source code and has very good performance.

My personal favorite paper was Revolver: An Automated Approach to the Detection of Evasive Web-based Malware (see below), with honorable mentions for "Jekyll on iOS: When Benign Apps Become Evil" and "Dowsing for Overflows: A Guided Fuzzer to Find Buffer Boundary Violations".

Lastly, I really enjoyed the various George Mason presentations such as Trafficking Fraudulent Accounts: The Role of the Underground Market in Twitter Spam and Abuse, There Are No Free iPads: An Analysis of Survey Scams as a Business and Understanding the Emerging Threat of DDoS-as-a-Service.

Chrome vulnerability reward system

On Friday morning Chris Evans from the Chrome security team gave an invited talk about their reward program for people who report security vulnerabilities. The program has clearly been highly successful for Google: the number of bug reports submitted is now three times as high as before the program's introduction, and the quality of the reports is good. Interesting talk.

Revolver: Uncovering new detection avoidance algorithms

To determine whether a website is malicious, several systems use an artificial web browser to run the suspicious JavaScript on a page and then analyze whether it is harmful. Attackers have found various shortcomings in these artificial web browsers and use them to evade detection: their JavaScript simply won't run if it detects an artificial web browser.

This very original research provides a great way of uncovering these detection-evasion mechanisms. Using a large database of known malware, the system automatically detects new malware that looks structurally similar. Writing malware is hard, and reuse is extremely common. When a similar piece of code is found, a human analyst reviews it to see what changes were introduced; those changes are often new evasion mechanisms.
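Revolver itself works on the structure of the JavaScript it observes; the toy Python sketch below only illustrates the general pattern of flagging near-duplicates of known samples for analyst review, using a simple token-sequence similarity and a made-up threshold in place of the paper's actual approach.

    from difflib import SequenceMatcher

    def similarity(tokens_a, tokens_b):
        """Score in [0, 1] for how structurally similar two token sequences are."""
        return SequenceMatcher(None, tokens_a, tokens_b).ratio()

    # Hypothetical corpus of known-malicious samples, already tokenized.
    known_samples = {
        "sample_001": ["try", "eval", "unescape", "document.write"],
        "sample_002": ["setTimeout", "eval", "fromCharCode"],
    }

    def flag_near_matches(new_tokens, corpus, threshold=0.8):
        """Return names of known samples similar enough to warrant analyst review."""
        return [name for name, tokens in corpus.items()
                if similarity(new_tokens, tokens) >= threshold]

    # A new sample that reuses sample_001's structure but adds a browser check.
    new_sample = ["try", "eval", "unescape", "document.write", "navigator.userAgent"]
    print(flag_near_matches(new_sample, known_samples))  # ['sample_001']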

DNSSEC deployment

DNSSEC is an important improvement to the overall security of the internet, but its adoption has been very slow. A team from the University of California, San Diego measured how widespread DNSSEC support is and how well it works across various networks. The answer: only a small fraction of clients actually support DNSSEC, and it even negatively impacts some clients (particularly in Asia).
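As a rough illustration of the kind of measurement involved (not the paper's methodology), the Python sketch below uses the dnspython library to query a resolver for a signed name with DNSSEC requested and checks whether the answer comes back with the AD (authenticated data) flag, i.e. whether the resolver validated it. The resolver address and domain are placeholders.

    import dns.flags
    import dns.message
    import dns.query

    def resolver_validates_dnssec(resolver_ip, name="example.com"):
        """Return True if the resolver sets the AD flag for a DNSSEC-signed zone,
        meaning it performed DNSSEC validation on our behalf."""
        query = dns.message.make_query(name, "A", want_dnssec=True)
        response = dns.query.udp(query, resolver_ip, timeout=5)
        return bool(response.flags & dns.flags.AD)

    # Placeholder resolver; a real study would repeat this across many clients and networks.
    print(resolver_validates_dnssec("8.8.8.8"))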

Beating the Apple App Store

Apple's App Store has a better reputation than Android's, and the number of harmful apps in the Apple ecosystem is much smaller. Just looking at the USENIX conferences over the past few years clearly shows that Android gets a lot more attention. However, in Jekyll on iOS: When Benign Apps Become Evil the authors found a smart way to beat the App Store's review process. By deliberately introducing vulnerabilities into their app, which they then exploited once the app was live, they were able to make an app behave maliciously (in this case, sending a Twitter message without the user's permission).

Clever, and hard for Apple to detect: finding accidental vulnerabilities is difficult enough... let alone deliberately planted ones.
