terriko: (Pi)
Ages ago, I thought it would be a brilliant idea to write up stuff on the papers I read, much like I do book reviews, but then I promptly... didn't do it. But it's a new year with new papers, and here's the first for this year's seminar.

small toad
Photo: small toad by Scott* (Because tiny toads are adorable and compiler paper notes don't lend themselves to obvious illustration)

Superoptimizer -- A Look at the Smallest Program
Henry Massalin
1987

This is a neat little paper about optimizing assembly code. The author took a program and had the computer exhaustively search for the smallest possible functionally equivalent version. The paper is super short and readable, and filled with lots of very clever register arithmetic used to avoid jumps and comparisons. It could only optimize fairly small programs (12 lines of assembly), but a lot of the tricks it found seemed like they'd be useful compiler optimizations, and they're probably in use now.
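To give a flavor of the branch-avoiding trickery involved (my own illustration in Python, not an example from the paper): the sign function, naively written with comparisons and jumps, can instead be computed by doing arithmetic directly on the comparison results.

```python
def sign_branchy(x):
    """Naive signum: two comparisons, two conditional jumps."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

def sign_branchless(x):
    """Branch-free signum: Python booleans are the integers 0 and 1,
    so (x > 0) - (x < 0) yields -1, 0, or 1 with no control flow.
    The paper's 68000 signum is in the same spirit, using
    add/subtract-with-carry tricks in four instructions."""
    return (x > 0) - (x < 0)

for x in (-5, 0, 7):
    assert sign_branchy(x) == sign_branchless(x)
```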

Anyhow, it's three pages of explanation + two pages of cool examples they found, so if you're looking for a fun little bit of computing to read about to fill out some mind-expanding new year's resolution, this is an easy place to start.
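And if you'd rather play with the idea than just read about it, the core loop (enumerate ever-longer instruction sequences, keep the first one that matches the target function on all test inputs) fits in a few lines. This is my own toy reconstruction with a made-up three-instruction machine, not the paper's 68000 setup:

```python
from itertools import product

# Toy single-accumulator "machine": each instruction maps the
# accumulator value to a new one.
OPS = {
    "neg": lambda a: -a,       # negate
    "inc": lambda a: a + 1,    # add one
    "dbl": lambda a: a * 2,    # double
}

def run(program, x):
    """Execute a sequence of instruction names, starting from x."""
    for op in program:
        x = OPS[op](x)
    return x

def superoptimize(target, tests, max_len=4):
    """Return the shortest instruction sequence that agrees with
    `target` on every test input. Like the paper, this only
    probabilistically verifies equivalence: it checks a finite set
    of test values rather than proving it for all inputs."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in tests):
                return program
    return None

# Shortest program computing f(x) = -2x - 1:
print(superoptimize(lambda x: -2 * x - 1, tests=range(-10, 11)))
# → ('dbl', 'inc', 'neg')
```

The real superoptimizer searched actual 68000 instructions with clever pruning; without pruning, this brute force blows up exponentially with program length, which is part of why only short programs were feasible.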

Some questions we had in seminar that I don't know the answers to:

- What was the impact of this paper on modern compilers?
- Do we do any of this while compiling, or make use of the things they found in a preset kind of way?
- Has anyone tried to do this using modern computers / other assembly instruction sets?
- It seemed like there was a lot of adding... would it be possible to make reduced assembly instruction sets on the assumption that they will never be programmed by humans and thus can be super-optimal?
terriko: I am a serious academic (Twilight Sparkle looking confused) (Serious Academic)
This is the first in my series of short notes on the academic papers I'm reading. This is a paper we read for seminar last week, and I chose to review it here not only because the results are interesting but also because it's a highly readable paper in case any of you get curious and want to read along with me.


Detecting malware domains at the upper DNS hierarchy
Antonakakis, M. et al., 2011

This paper is all about detecting malware using DNS. It turns out that while "normal" domains are accessed by machines with predictable patterns of geographical and network location, malware domains are accessed by a bunch of zombie machines that could pop up anywhere on any network, so the DNS requests look a lot more random. So if you look at DNS traffic, you can figure out which domains are being used by malware, and you can do it on the fly as domains change, without needing a manually created blacklist.
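The paper's features are more sophisticated than this, but the "requester diversity" intuition can be sketched numerically. Here's my own toy illustration (the domains and ASNs are made up, and this is not the paper's actual feature set): score each domain by how spread out its lookup origins are.

```python
import math
from collections import Counter

def requester_entropy(asns):
    """Shannon entropy (in bits) of the distribution of requester
    networks (ASNs) observed querying a domain. Low entropy means a
    concentrated, stable requester population; high entropy means
    lookups scattered across many unrelated networks, which in this
    toy model reads as more bot-like."""
    counts = Counter(asns)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical lookup logs: domain -> ASN of each machine that queried it.
lookups = {
    "corp-intranet.example": [64500] * 95 + [64501] * 5,   # concentrated
    "botnet-c2.example": list(range(64500, 64600)),        # one hit each, all over
}

for domain, asns in lookups.items():
    print(domain, round(requester_entropy(asns), 2))
```

A real system would combine many such features (geography, how the requester set grows over time, and so on) and feed them to a classifier, but even this single number separates the two made-up domains cleanly.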

It's a pretty neat trick. Malware authors could potentially get around it by making their requests cleverer -- doing something more like Facebook or Google, which route you to "close" servers to provide good quality of service -- but until they do, this could be a handy supplement to existing malware detection. It reminds me a lot of greylisting that way.


@INPROCEEDINGS{antonakakis2011dnsmalware,
author = {Antonakakis, M. and Perdisci, R. and Lee, W. and Vasiloglou II, N. and Dagon, D.},
title = {Detecting malware domains at the upper DNS hierarchy},
booktitle = {Proceedings of the 20th USENIX Security Symposium},
year = {2011},
volume = {11},
pages = {27--27}
}
One of the big problems of academia is that though we produce some amazing things, they're often not available, accessible, or even visible to the general public. That is, articles may cost money to read (unless you have access to academic journal subscriptions), interesting results get buried in dense scientific language, and often few people are talking about the results outside of academia (or sometimes even inside it).

Last year, I committed myself to writing more book reviews to share what I read with others, and it occurs to me that this year, maybe I should make more of an effort to do the same with the scientific papers I read as well. The usual caveats apply: I've got my own set of biases in research just like I have taste in books, and it's entirely possible that I'll interpret results in ways other than they were intended.

This is something I did occasionally with my web security blog (and hoped to do more), but I'm currently reading papers about complex adaptive systems, biology, security, and more. So for now, these public paper reviews are going here right alongside my book reviews, and they'll be drawn not only from my own research interests but from the overlapping ones of my colleagues. I have a lead on a paper about railway design using slime molds, for example. You've been warned!
